ICLR
Title: Center Loss Regularization for Continual Learning

Abstract
The ability to learn different tasks sequentially is essential to the development of artificial intelligence. In general, neural networks lack this capability, the major obstacle being catastrophic forgetting. It occurs when the incrementally available information from non-stationary data distributions is continually acquired, disrupting what the model has already learned. Our approach remembers old tasks by projecting the representations of new tasks close to those of old tasks while keeping the decision boundaries unchanged. We employ the center loss as a regularization penalty that enforces new tasks' features to have the same class centers as old tasks and makes the features highly discriminative. This, in turn, leads to the least forgetting of already learned information. This method is easy to implement, requires minimal computational and memory overhead, and allows the neural network to maintain high performance across many sequentially encountered tasks. We also demonstrate that using the center loss in conjunction with memory replay outperforms other replay-based strategies. Along with standard MNIST variants for continual learning, we apply our method to continual domain adaptation scenarios with the Digits and PACS datasets. We demonstrate that our approach is scalable, effective, and gives competitive performance compared to state-of-the-art continual learning methods.

1 INTRODUCTION
Humans have the ability to continuously evolve, accumulate, and transfer acquired knowledge to learn new skills throughout their lifetime. In contrast, in the classical machine learning paradigm, typically referred to as isolated learning (Chen & Liu, 2018), systems are capable of achieving high performance in learning isolated tasks or narrow domains without using previously learned knowledge. This makes them different from real-world settings where systems are expected to learn consecutive tasks with changing data distributions and unknown task boundaries. In this scenario, the intelligent agent should learn continually without forgetting the already acquired knowledge. Thus, continual learning, traditionally called lifelong learning (Chen & Liu, 2018; Thrun, 1995; 1996; 1998; Thrun & Pratt, 2012), becomes necessary for artificial general intelligence. A significant problem in continual learning is catastrophic forgetting in neural networks, also known as catastrophic interference (McCloskey & Cohen, 1989). The newly learned information may interfere with and disrupt the already learned knowledge, leading to a performance loss on old tasks (Ratcliff, 1990). The extent to which a system should refine and integrate new knowledge versus retain previous information is termed the stability-plasticity dilemma and is well studied in many previous works (Grossberg, 1982; 2013; Mermillod et al., 2013). This issue of catastrophic forgetting is known to exist in many different types of neural networks, from standard backpropagation networks to unsupervised networks like self-organizing maps (Richardson & Thomas, 2008; Mermillod et al., 2013). There have been several attempts to overcome catastrophic forgetting in neural networks, and the various approaches are discussed in the next section.

1.1 CONTINUAL LEARNING APPROACHES
In general, current continual learning methods can be broadly categorized into three different types of strategies based on how they attempt to solve the problem of catastrophic forgetting.
Architectural approaches mitigate catastrophic forgetting by modifying the architectural properties of the networks, e.g., adding more neurons or layers or incorporating weight freezing strategies. One of the earliest strategies in this category was Progressive Neural Networks (PNN), proposed by Rusu et al. (2016), which retains a pool of pre-trained models as knowledge and learns lateral connections among them to learn the task at hand. Another simpler strategy, Copy Weights with Re-init (CWR), was proposed by Lomonaco & Maltoni (2017), where consolidated knowledge is maintained by isolating the subsets of weights for each class and learning the remaining task-specific parameters. Scalability remains an issue in this category of approaches, as the number of network parameters explodes as the number of tasks increases.

Replay or rehearsal-based approaches maintain a memory buffer with samples from previous tasks to replay with the examples from the current task to strengthen the old memories (Rolnick et al., 2018). Lopez-Paz & Ranzato (2017) proposed Gradient Episodic Memory (GEM), which favors positive backward transfer and hence mitigates forgetting by using an episodic memory of samples from previous tasks. Later, Chaudhry et al. (2019) proposed a more efficient and faster version, called Averaged GEM (A-GEM). Inspired by the suggestion that the hippocampus is better paralleled with a generative model than a replay buffer (Ramirez et al., 2013; Stickgold & Walker, 2007), the replay approach was improved further by replacing the memory buffer with a generative model that can generate unlimited pseudo-data from past tasks (Shin et al., 2017; Van de Ven & Tolias, 2018). Instead of using stored input samples, the replay of latent representations to mitigate forgetting has also been explored in many recent works (Pellegrini et al., 2020; van de Ven et al., 2020).

Regularization-based approaches attenuate catastrophic forgetting by imposing constraints on the update of the network weights (Parisi et al., 2019). This is generally formulated via additional regularization terms that penalize changes in the weights or predictions of the neural network. Learning Without Forgetting (LwF) (Li & Hoiem, 2017) distills knowledge from the network's previous version to enforce the predictions of current and previous tasks to be similar. Many recent regularization-based methods apply a penalty on network parameters by estimating the importance of different network parameters. Kirkpatrick et al. (2017) proposed Elastic Weight Consolidation (EWC), which imposes a quadratic penalty on the difference between the old and new task parameters to slow down learning on certain weights based on their importance for previous tasks (Parisi et al., 2019). In Synaptic Intelligence (SI) (Zenke et al., 2017), individual synapses estimate their importance by computing the path integral of the gradient vector field along the parameter trajectory, whereas Memory Aware Synapses (MAS) (Aljundi et al., 2018) computes importance based on the sensitivity of the predicted output function to each parameter. Instead of imposing a penalty directly on weights, Less-Forgetful Learning (LFL) (Jung et al., 2018) regularizes the L2 distance between the new and old feature representations to preserve the previously learned input-output mappings, computing auxiliary activations with the old task parameters.
Recent regularization works also build upon the traditional Bayesian online learning framework with variational inference (Nguyen et al., 2017; Ahn et al., 2019; Adel et al., 2020).

1.2 MOTIVATION
The architectural approaches suffer from scalability issues as the number of parameters increases with the number of tasks (Parisi et al., 2019). On the other hand, rehearsal-based strategies generally require large memory buffers to store old task data for high performance. Moreover, in real-world scenarios, it is not always possible to have access to old task data. The generative replay-based methods attempt to solve this issue but are often difficult to train and computationally expensive. On the contrary, the regularization-based strategies assume that all the essential information about the old task is contained in the network weights (Kirkpatrick et al., 2017). The over-parameterization in neural networks makes it possible for the solution to a new task to be found close to the solution for the old task (Hecht-Nielsen, 1992; Sussmann, 1992; Kirkpatrick et al., 2017). Thus, the regularization strategies are generally memory efficient and computationally less expensive than the other two approaches. Our approach belongs to this category, as it focuses on alleviating the catastrophic forgetting problem by regularizing the network to project the new task representations close to the old task representations while keeping the decision boundaries unchanged. We achieve this using the center loss (Wen et al., 2016) as a regularization penalty to minimize forgetting. We show that our approach successfully prevents catastrophic forgetting in a computationally efficient manner without accessing the data from old tasks.

1.3 CONTRIBUTIONS
The contributions of this paper are as follows:
1. We propose a novel regularization-based continual learning strategy, which we refer to as center loss regularization (CLR).
2. We compare our approach to different continual learning strategies in domain-incremental scenarios and show that our approach is scalable and computationally efficient while storing minimal additional parameters.
3. We show that our approach gives competitive performance with the state-of-the-art techniques when applied in continual domain adaptation scenarios.

2 CENTER LOSS REGULARIZATION (CLR)
Deep neural networks excel at learning hierarchical internal representations from raw input data by stacking multiple layers, which allows the system to learn complex function mappings from input to output (Farabet et al., 2013). These representations become increasingly invariant to small changes in the input as we go up the layers towards the output layer, preserving the vital information about the input related to the task (Guest & Love, 2019). The portion up to the last hidden layer is considered the feature extractor, and the last fully connected layer is regarded as a linear classifier, as the features produced by the feature extractor are usually linearly separable due to the softmax activation in the top layer (Wen et al., 2016). The Less-Forgetful Learning (LFL) approach (Jung et al., 2016) demonstrated that catastrophic forgetting can be prevented if the representations of the new task are projected close to the learned representations of the old task while keeping the decision boundaries unchanged. Ramasesh et al. (2020) empirically demonstrated that if the higher layers are stabilized while learning subsequent tasks, forgetting can be mitigated significantly.
In our approach, we freeze the weights of the last fully connected classification layer to keep the decision boundaries unchanged, similar to LFL. However, it is non-trivial to make the learned features for new tasks localize near the corresponding old task features in the latent space. LFL solves this by using the L2 distance between the current model's features and the features computed with the old task model as a regularization penalty to preserve the previously learned input-output mappings. However, this approach is highly memory-intensive and computationally expensive, since it requires storing the entire model trained on the old task and performing a forward pass through it to compute representations for each new task. Wen et al. (2016) introduced the center loss and demonstrated that the joint supervision of softmax loss and center loss helps to increase the inter-class dispersion and intra-class compactness of deeply learned features. During the training process, the model also learns the centers for each class's features, around which the deeply learned features are typically clustered (Wen et al., 2016). We exploit these properties of the center loss in building our continual learning strategy. More details on the center loss are provided in Appendix D.

We use the center loss as a regularization penalty along with the softmax loss. To enforce the model to learn the features for the new task in the proximity of the corresponding old task features, we utilize the already learned class feature centers of the old task instead of storing the old model weights as in LFL. While learning the new task, we enforce the new task features to be close to the already learned feature centers using the center loss, making the model project the features of all tasks in the same localized region, clustered around the corresponding class feature centers. For the new task, we freeze the class centers and reduce the learning rate of the feature extractor parameters to prevent significant changes in it while training on the new task. We also provide the findings of our ablation study in Section 3.3 to analyze the effects of letting the centers and decision boundaries change while learning new tasks. As our method needs to store only the feature centers for each class throughout its lifetime, the memory requirement is significantly lower than that of other approaches where the agent needs to store the model weights or maintain a replay buffer. The extra memory requirements are discussed in detail in Section 3.3.1.

$$L_t = L_s + \lambda L_c \quad (1)$$

$$L_c = \frac{1}{2} \sum_{i=1}^{m} \left\| f_{L-1}(x_i; \theta^{(n)}) - c^{(o)}_{y_i} \right\|_2^2 \quad (2)$$

$$\hat{\theta}^{(n)} = \underset{\theta^{(n)}}{\mathrm{argmin}} \; L_t(x, c^{(o)}; \theta^{(n)}) + R(\theta^{(n)}) \quad (3)$$

Equation 1 represents the joint loss, a combination of the softmax loss $L_s$ and the center loss $L_c$, which is minimized during training. Minimizing $L_s$ helps solve the current/new task, whereas the term $L_c$ helps retain already learned knowledge and avoid catastrophic forgetting. A scalar $\lambda$ is used for balancing the two loss functions. Equation 2 defines the modified center loss $L_c$, where $m$ denotes the mini-batch size and $c^{(o)}$ denotes the learned values of the feature centers from the old task. The centers are kept frozen during the subsequent tasks. This enforces the new task representations to have the same feature centers as the old task, leading the old and new task representations to stay within proximity in the feature space and reducing catastrophic forgetting. Equation 3 gives the final objective function, where $R(\cdot)$ denotes a general regularization term, such as weight decay.
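To make the objective concrete, here is a minimal PyTorch-style sketch of the joint loss in Equations 1-2, assuming the centers learned on the old task are available as a tensor; the function and variable names are illustrative, not taken from the authors' code:

```python
import torch
import torch.nn.functional as F

def clr_joint_loss(features, logits, labels, old_centers, lam):
    """Sketch of Eq. 1: L_t = L_s + lambda * L_c.

    features    : (batch, d) penultimate-layer features f_{L-1}(x_i; theta^(n))
    logits      : (batch, num_classes) outputs of the frozen classification layer
    labels      : (batch,) ground-truth class indices y_i
    old_centers : (num_classes, d) frozen class feature centers c^(o) from the old task
    lam         : scalar lambda balancing the softmax loss and the center loss
    """
    softmax_loss = F.cross_entropy(logits, labels)            # L_s
    centers = old_centers[labels].detach()                    # c^(o)_{y_i}, no gradient to the centers
    center_loss = 0.5 * (features - centers).pow(2).sum()     # L_c as in Eq. 2, summed over the mini-batch
    return softmax_loss + lam * center_loss
```

In practice, implementations of the center loss often average over the mini-batch instead of summing; either choice only rescales the effective $\lambda$.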
Finally, we propose the center loss regularization algorithm, as shown in Algorithm 1. N denotes the number of training iterations, D(n) denotes the new task data, and θ(o) and θ(n) denote the network parameters for the old and new task, respectively. Note that we use a single-headed model in our experiments, as we primarily target the domain-incremental scenario of continual learning, where task identity need not be inferred at test time (Van de Ven & Tolias, 2019). The system learns to adapt to changing input distributions, but the task structure remains the same.

Algorithm 1 Center Loss Regularization (CLR)
Input: θ(o), c(o), N, D(n)
Output: θ̂(n)
1: θ(n) ← θ(o)  ▷ initialize weights
2: Freeze the weights of the softmax classification layer.
3: for i ← 1, N do  ▷ training iteration
4:   for each minibatch B ∈ D(n) do
5:     Backpropagate and update θ(n) on mini-batch B to minimize the loss Lt + R(θ(n))
6:   end for
7: end for
8: θ̂(n) ← θ(n)
9: return θ̂(n)

3 EXPERIMENTS
In this section, we first detail the experimental protocols for evaluating lifelong learning algorithms (Section 3.1). We then compare our proposed method against different continual learning methods, which are specified in Section 3.2. Finally, we report our results in Section 3.3.

3.1 EXPERIMENTAL PROTOCOLS
Permuted & Rotated MNIST (Kirkpatrick et al., 2017) are variants of the original MNIST dataset (LeCun, 1998). In Permuted MNIST, a fixed permutation of the image pixels is applied to each task's training and test set. In Rotated MNIST, each task consists of images rotated by a fixed angle between 0 and 180 degrees. We chose 10000 training and 5000 testing samples for both these variants. Each task has a test set with the same transformation as its training set. In these experiments, the model encounters 10 tasks in sequence, each with a unique transformation (rotation angle or permutation). We evaluate the model on the test sets of the respective datasets after training on each task. We use a fully connected neural network (MLP) with two hidden layers of 100 units each, with ReLU activation, for these experiments. We train the network using the Adam optimizer (Kingma & Ba, 2014) on mini-batches of 64 samples for 1 epoch over the training set per task, with a learning rate of 3·10−3 for the first task and 3·10−4 for subsequent tasks. Our method can also be applied in supervised continual domain adaptation settings, where the model needs to adapt to new domains without degrading performance on previously seen domains (Jung et al., 2018; Volpi et al., 2021).

Digit Recognition. We consider four widely used datasets for digit recognition in the continual domain adaptation setting: MNIST (LeCun, 1998), SVHN (Netzer et al., 2011), MNIST-M (Ganin & Lempitsky, 2015), and SYN (Ganin & Lempitsky, 2015). To assess the performance of our approach, we train our network on these datasets one by one in the sequence MNIST → MNIST-M → SYN → SVHN, considering each dataset as one task. This way, the network adapts to harder domains continually. For this protocol, 10000 training and 5000 testing samples are chosen for each dataset, and all images are resized to 28 x 28 pixels. We convert the MNIST dataset to 3-channel images by repeating the original channel 3 times for compatibility with the remaining datasets. We use the ResNet18 (He et al., 2016) architecture and train the network on each domain for 20 epochs, with a batch size of 64. We use the Adam optimizer (Kingma & Ba, 2014) with a learning rate of 3·10−4, which is reduced to 3·10−5 after the first domain.
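As an illustration of how Algorithm 1 and the optimizer settings above could be wired together, the following PyTorch-style sketch freezes the classification layer, uses a reduced learning rate for the feature extractor on subsequent tasks, and optimizes the joint loss from Equation 1. The model structure and names are illustrative assumptions, and `clr_joint_loss` refers to the sketch in Section 2:

```python
import torch

def train_new_task(model, old_centers, loader, lam, lr=3e-4, epochs=1):
    """Sketch of Algorithm 1 (CLR) for one new task.

    model       : network exposing `model.features` (feature extractor) and
                  `model.classifier` (last fully connected softmax layer)
    old_centers : frozen class feature centers c^(o) learned on the old task
    loader      : DataLoader over the new task data D^(n)
    """
    # Step 2: freeze the decision boundaries (softmax classification layer).
    for p in model.classifier.parameters():
        p.requires_grad = False

    # Only the feature extractor is updated, at a reduced learning rate.
    optimizer = torch.optim.Adam(model.features.parameters(), lr=lr)

    for _ in range(epochs):                      # Steps 3-7: training iterations
        for x, y in loader:
            feats = model.features(x)            # f_{L-1}(x; theta^(n))
            logits = model.classifier(feats)
            loss = clr_joint_loss(feats, logits, y, old_centers, lam)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return model                                 # theta_hat^(n)
```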
PACS dataset (Li et al., 2017) is typically used to assess domain generalization scenarios. It consists of four domains, namely Photo (1,670 images), Art Painting (2,048 images), Cartoon (2,344 images), and Sketch (3,929 images). Each domain contains seven categories. We use this dataset to evaluate our approach in a continual domain adaptation setting. We train the network in the sequence Sketches → Cartoons → Paintings → Photos, where images become more realistic with each new domain. For each domain, the dataset is split into 70% training samples and the remaining 30% testing samples. All images are resized to 224 x 224 pixels and standardized with the mean and standard deviation of the ImageNet (Deng et al., 2009) dataset, because this protocol uses a ResNet18 (He et al., 2016) network pre-trained on ImageNet. We train the network on each domain for 5 epochs, with a batch size of 64. We use the Adam optimizer (Kingma & Ba, 2014) with a learning rate of 3·10−4, which is reduced to 3·10−5 after the first domain.

3.2 METHODS
We compare the performance of the proposed approach with that of the state-of-the-art regularization-based strategies. First, we test the fine-tuning approach, in which a single model is naively trained across all tasks. Second, we test the LwF (Li & Hoiem, 2017) method, which uses only new task data to train the network while preserving its original capabilities. Further, we compare the performance with the EWC (Kirkpatrick et al., 2017), SI (Zenke et al., 2017), and MAS (Aljundi et al., 2018) methods, which estimate the synaptic importance of different network parameters and use that information to regularize the weights. We also test the LFL (Jung et al., 2018) method, which tries to position the features extracted by the new network close to the features extracted by the old network. Moreover, we explore frameworks based on variational inference, such as Uncertainty-regularized Continual Learning (UCL) (Ahn et al., 2019) and Variational Continual Learning (VCL) (Nguyen et al., 2017). VCL employs a projection operator through KL divergence minimization. UCL addresses the drawbacks of VCL by proposing the concept of node-wise uncertainty. Then, we consider two oracle methods: if we assume access to every domain at every point in time, we can either train on samples from the joint distribution from the beginning, oracle (all), or grow the distribution over iterations, oracle (cumulative). With access to samples from all domains, oracles are not generally exposed to catastrophic forgetting; yet, their performance is not necessarily an upper bound (Lomonaco & Maltoni, 2017). We also compare our approach with several replay-based strategies. We use the basic experience replay method as a baseline and examine whether using CLR as a surrogate loss along with experience replay can enhance performance. Further, we compare its performance with state-of-the-art memory-based methods such as averaged gradient episodic memory (A-GEM) (Chaudhry et al., 2019) and GDumb (Prabhu et al., 2020). We also compare CLR with a recent continual domain adaptation technique based on domain randomization and meta-learning (Meta-DR) (Volpi et al., 2021) for the continual domain adaptation experiments. For the Permuted and Rotated MNIST tasks, we experiment with four different memory sizes of 10, 20, 50, and 100 examples per task. For the Digits benchmark, we experiment with 100, 200, and 300 replay examples per task.
For the PACS dataset, we experiment with 10, 20, and 30 replay examples per task. The metrics used in our experiments to evaluate all considered methods are the average accuracy (ACC) and backward transfer (BWT). ACC is the average of the accuracy of the model across all encountered tasks, and BWT represents the amount of forgetting at the end of training on all tasks (Lopez-Paz & Ranzato, 2017). The formulae for the metrics are presented in detail in Appendix B. Each experiment in the paper is carried out for 5 trials, and the final values are reported as the mean and standard deviation of the results.

3.3 RESULTS
Table 1 compares the performance of different regularization-based approaches on the Rotated and Permuted MNIST datasets. From this table, we can observe that our approach, CLR, outperforms all other methods. VCL shows competitive performance for Permuted MNIST, but its amount of forgetting (BWT) is worse than that of CLR, whereas LwF shows competitive performance in terms of forgetting but less adaptability to new information. Figure 1 presents the evolution of the average accuracy (ACC) and the first-task accuracy throughout all the tasks for the MNIST variants. This shows that the center loss regularization helps to mitigate catastrophic forgetting on earlier tasks while maintaining high performance on all the tasks.

In Table 2, we report the results of replaying samples from episodic memory along with our proposed approach for both MNIST variants. We can observe that using center loss regularization significantly improves the performance over the plain experience replay strategy. This improvement is consistent as we increase the memory size. It also outperforms the state-of-the-art memory-based methods A-GEM and GDumb. This shows that our approach can also be used as a surrogate loss along with replay-based strategies to enhance performance and reduce forgetting.

We provide the ablation study of our approach in Table 3. We demonstrate how the performance changes if we do not freeze the centers and classifier weights after training on the first task is completed. The first row in the table represents the approach proposed in Section 2. We try multiple values of the hyperparameter λ for each experiment and report the best-performing results for each row. These results show that the best performance is generally achieved when we freeze the centers and the decision boundaries after the first task. Moreover, we also demonstrate how the hyperparameter λ affects the overall performance of CLR in Appendix Figure 3.

We report in Table 4 and Table 5 the performance of our proposed method compared with different methods in the continual domain adaptation setting on the Digits and PACS datasets, respectively. We note that our approach outperforms the naive, EWC, and SI strategies by a significant margin on both benchmarks. Our method demonstrates competitive performance compared to the LwF and LFL strategies. Having access to samples from older tasks for replay can help reduce catastrophic forgetting significantly compared to regularization methods, which do not have access to the data of old domains. Thus, we also examine whether using CLR along with Experience Replay (ER) can help boost performance. For both benchmarks, we observe that using CLR with ER significantly improves the overall performance, and the improvement is consistent with the increase in memory size. The memory size column denotes the number of replay samples per task.
These results suggest that the center loss regularization helps the model successfully adapt to new domains without considerable performance degradation on old domains.

3.3.1 COMPARISON OF ADDITIONAL MEMORY REQUIREMENT
Generally, the regularization-based methods store the old network parameters for regularization or knowledge distillation. In Table 6, we compare the extra memory requirement of different regularization-based methods in terms of the number of additional parameters beyond the base network parameters, and we explain why CLR is the cheapest option from an additional-memory perspective compared to other regularization techniques. In order to quantify the importance of weights to previous tasks, EWC needs to compute and store the diagonal of the Fisher matrix for each task, which has the same number of elements as the network parameters. Additionally, the optimal parameters from previous tasks are also stored in order to compute the knowledge distillation loss. Moreover, a few samples from previous tasks are maintained in our experiments to compute the Fisher matrix after each task is completed. Thus, given that k is the number of encountered tasks and p is the number of network parameters, the space complexity of the additional memory usage in EWC becomes O(k · p). In contrast to EWC, SI computes parameter-specific importance online. It therefore does not require storing extra parameters for each task but maintains only the previous task's model parameters and a regularization strength for each network parameter to compute the surrogate loss. Thus, the space complexity of the additional memory becomes O(p) in the case of SI. MAS also requires storing a synaptic importance for each model parameter, needing a similar amount of memory as SI. VCL stores a variance term for each weight parameter for the current and previous models; hence, it requires three times as many additional parameters as the original model. UCL addresses this drawback of VCL by computing uncertainty (importance) at the node level, reducing the total required parameters to almost half compared to VCL (Ahn et al., 2019). However, CLR outdoes both of them with fewer parameters. Moreover, both LwF and LFL require storing the old network in memory to compute the knowledge distillation loss for the previous task. Like SI, these two methods have a space complexity of O(p) for additional memory. CLR attempts to achieve a similar objective as LFL, projecting old and new task features in close proximity. However, CLR is computationally and memory-wise more efficient than LFL and LwF, because CLR does not need to store the old model and perform forward passes through it, significantly reducing the memory usage and training time. Hence, all these methods require a large amount of extra memory, which further increases with the number of tasks or the network size. On the other hand, CLR only stores the feature centers of each class, which significantly reduces the additional memory requirement compared to the previous methods. Thus, the space complexity of the extra memory usage in CLR comes down to O(n · d), where n is the number of classes in each task and each feature center is d-dimensional.

4 LIMITATIONS AND FUTURE DIRECTIONS
There are several exciting research directions in which to extend our work on continual learning. Our method requires knowledge of task boundaries, which may not always be available. CLR does not leverage task descriptors, which could be exploited to obtain positive forward transfer.
Further, novel approaches can also be developed to extend our approach to task-incremental and class-incremental learning, where task IDs need to be inferred. In this paper, we applied CLR to solve the supervised classification problem in continual learning. It would be an interesting research direction to exploit the properties of CLR to solve other problems, such as regression and dimensionality reduction, in the continual learning setting. Moreover, the effects of using other discriminative representation learning approaches (Hadsell et al., 2006; Sun, 2015; Deng et al., 2017; Zhang et al., 2017; Liu et al., 2017; Chen et al., 2017; Wan et al., 2018; Qi & Zhang, 2018; Wang et al., 2018a;b) can be studied for continual learning.

5 CONCLUSION
In this paper, we proposed a new regularization-based strategy for continual learning, referred to as center loss regularization (CLR). It utilizes the power of the center loss to learn discriminative features and uses the learned feature centers to project new task features in the proximity of old task features, transferring knowledge and avoiding catastrophic forgetting. Our method was effective in overcoming catastrophic forgetting when applied to standard continual learning benchmarks as well as continual domain adaptation benchmarks. Our method is scalable and computationally efficient; it does not store previous data and requires minimal additional network parameters. Our extensive experiments consistently demonstrate the competitive performance of CLR against state-of-the-art regularization strategies for continual learning.

A TRAINING DETAILS
We used one NVIDIA GeForce TITAN X GPU for training the models in all our experiments, with CUDA version 10.1, on a Linux machine. We built our models and implemented the different methods using the PyTorch deep learning framework and the Avalanche library (Lomonaco et al., 2021) for continual learning. The hyperparameter search was done using grid search. Next, we provide the best hyperparameter values specific to each method for our experiments. Here, λ denotes the importance of the regularization penalty for EWC, LwF, SI, LFL, and MAS, whereas β controls the speed of the standard deviation (σ) of the weight parameters in UCL. VCL does not need any additional hyperparameters. In CLR, λ denotes the importance of the center loss and α denotes the rate at which the centers are allowed to change during the first task.

B EVALUATION METRICS
Lopez-Paz & Ranzato (2017) introduced the Average Accuracy (ACC) and Backward Transfer (BWT) evaluation metrics for continual learning, which we use in our experiments to evaluate our approach. For evaluation, we maintain a test set for each of the T tasks. After learning a new task ti, we evaluate the model's test performance on all T tasks. The formulas for calculating ACC and BWT are as follows, where A_{i,j} is the test classification accuracy of the model on task t_i after observing the last data sample from task t_j:

$$\text{Average Accuracy (ACC)} = \frac{1}{T} \sum_{i=1}^{T} A_{i,T} \quad (4)$$

$$\text{Backward Transfer (BWT)} = \frac{1}{T-1} \sum_{i=1}^{T-1} \left( A_{i,T} - A_{i,i} \right) \quad (5)$$

The larger the value of ACC, the better the model. If the ACC values of two models are similar, the model with the higher BWT is usually preferred.

C ADDITIONAL PLOTS
Figure 2 shows the plots of the average accuracy (over 10 tasks) at the end of training on each task. This is different from the plots presented in Figure 1, where we take the average over encountered tasks only.
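As a concrete illustration of Equations 4-5 in Appendix B, a short sketch that computes ACC and BWT from the matrix of per-task accuracies might look as follows; it is an illustrative helper, assuming a NumPy array whose entry [i, j] holds the zero-indexed counterpart of A_{i,j}:

```python
import numpy as np

def continual_metrics(A):
    """Compute ACC (Eq. 4) and BWT (Eq. 5) from a (T, T) accuracy matrix.

    A[i, j] = test accuracy on task i after training on task j
    (zero-indexed here; the paper's A_{i,j} uses one-based indices).
    """
    T = A.shape[0]
    acc = A[:, T - 1].mean()                               # average accuracy after the last task
    bwt = (A[:T - 1, T - 1] - np.diag(A)[:T - 1]).mean()   # change relative to just-learned accuracy
    return acc, bwt
```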
In Figure 3, we show the performance of CLR for various values of the hyperparameter λ for the different protocols. The value of λ at the peak of each protocol's curve was chosen for the corresponding experiment.

D DISCRIMINATIVE FEATURE LEARNING WITH CENTER LOSS
A typical deep neural network architecture comprises an input layer, followed by several hidden layers with non-linear activation functions, and an output layer. The output layer generally has a softmax activation function for multi-class classification. This last fully connected layer acts as a linear classifier that separates the deeply learned features produced by the last hidden layer. The softmax loss forces the deep features of different classes to stay apart. The discriminative power of the learned features is enhanced if the intra-class compactness and inter-class separability are maximized simultaneously. Though the features learned using the softmax loss are separable, they are not discriminative enough for open-set supervised problems and often exhibit high intra-class variance. This adversely affects the generalization capabilities of neural networks. Several works (Wen et al., 2016; Deng et al., 2017; Zhang et al., 2017; Liu et al., 2017; Wang et al., 2018b;a; Chen et al., 2017; Wan et al., 2018; Qi & Zhang, 2018) have proposed variants of the softmax loss to enhance its discriminative power. Siamese-network-based approaches (Koch et al., 2015), which use the contrastive loss (Sun, 2015; Hadsell et al., 2006) and the triplet loss (Schroff et al., 2015), learn the embeddings directly. These approaches face the problems of semi-hard sample mining and a combinatorial explosion in the number of pairs or triplets, which significantly affect effective model training (Deng et al., 2019). There are also angular-margin-penalty-based approaches that have shown significant improvements over the softmax loss and have been explored in various directions, especially for large-scale face recognition (Liu et al., 2017; Wang et al., 2018b;a; Liu et al., 2016; Deng et al., 2019). Wen et al. (2016) introduced the center loss for discriminative feature learning in deep face recognition. The joint supervision of softmax loss and center loss is used to obtain inter-class dispersion and intra-class compactness by simultaneously learning the centers and minimizing the distances between the deep features and their corresponding class centers. The center loss has the same requirements as the softmax loss and needs no complex recombination of the training samples, unlike the contrastive loss and triplet loss, which suffer from dramatic data expansion. The center loss is defined as follows:

$$L_c(x; \theta, c) = \frac{1}{2} \sum_{i=1}^{m} \left\| f_{L-1}(x_i; \theta) - c_{y_i} \right\|_2^2 \quad (6)$$

In Equation 6, $x_i$ denotes the $i$-th sample, belonging to the $y_i$-th class, and $c_{y_i} \in \mathbb{R}^d$ denotes the $y_i$-th class center of the deep features. The mini-batch size and the feature dimension are $m$ and $d$, respectively. $L$ is the total number of layers, $f_{L-1}$ is the feature vector of layer $L-1$, which is just before the softmax classifier layer, and $\theta$ denotes the network parameters. This formulation effectively characterizes the intra-class variations. In each iteration, the centers are computed by averaging the features of the corresponding classes. The deep features learned using the center loss are highly discriminative, clustered around the corresponding class centers, and linearly separable by the final fully connected layer, which acts as a linear classifier.
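For reference, a minimal PyTorch-style sketch of the center loss of Equation 6 with learnable class centers, in the spirit of Wen et al. (2016), could look like the following; it is an illustrative implementation under stated assumptions, not the authors' code:

```python
import torch
import torch.nn as nn

class CenterLoss(nn.Module):
    """Center loss of Eq. 6 with learnable class centers.

    num_classes : number of classes n
    feat_dim    : dimensionality d of the penultimate-layer features
    """
    def __init__(self, num_classes, feat_dim):
        super().__init__()
        # Class centers c_y, learned jointly with the network weights on the first task.
        self.centers = nn.Parameter(torch.randn(num_classes, feat_dim))

    def forward(self, features, labels):
        # 0.5 * sum_i || f_{L-1}(x_i; theta) - c_{y_i} ||_2^2 over the mini-batch
        diff = features - self.centers[labels]
        return 0.5 * diff.pow(2).sum()
```

Under CLR, the centers learned this way on the first task would then be frozen (excluded from the optimizer) and reused as c^(o) in Equation 2 for all subsequent tasks.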
Figure 4 presents visualizations of the features obtained using the softmax loss alone (left) and using the joint supervision of softmax and center loss (right). Wen et al. (2016) provide a detailed analysis and extensive experiments on the center loss and its application to discriminative feature learning.
1. What is the focus and contribution of the paper on continual learning?
2. What are the strengths and weaknesses of the proposed approach, particularly regarding its description and experimental evaluation?
3. Do you have any concerns about the method's conceptual foundation or its relationship to other approaches in the field?
4. How might the paper be improved regarding clarity, motivation, and analysis of related work?
Summary Of The Paper
The paper proposes a new regularization strategy for continual learning, called center loss regularization (CLR). For each previous task, the average of the embeddings of the task is stored. For new tasks, the new embeddings are forced to stay close to the old centers using an L2 regularization. The paper has an extensive experimental evaluation, showing that the method performs better on a number of standard benchmarks, up to small-medium sizes (mostly MNIST variants).

Review
My main concern is that the description of the method is very short, and after reading through the paper several points remain unclear. The description is composed of 3 equations (1)-(3), a short description, and a pseudo-code. However:
- What is "m" in (2)? Is it the batch size or the size of an external replay buffer? Or is it the number of tasks seen so far? The description of the paper hints at the third option, but it is not very clear.
- The pseudo-code (Algorithm 1) is very generic and could be applied to any neural network training.
- The authors talk multiple times of "projecting the representations of new tasks close to that of old tasks", but they are just forcing a distance measure, not explicitly projecting them (unless I am missing something).
- What is the "softmax loss Ls"? Is this a cross-entropy applied on top of the output of the network?
The paper is also missing a clear motivation. Keeping the embeddings close to multiple centers seems highly restrictive. What is the rationale behind this application? It would probably also benefit from discussing the idea of the center loss in the introduction itself. The paper should also provide a closer analysis of the relation of the method to the literature. Currently, this is done only for LFL (beginning of Section 2), but what is the relation with A-GEM (which also exploits averages) or embedding regularization (which also exploits embeddings)? Section 1.1 also requires more careful proofreading (e.g., "and learning rest task-specific parameters").
ICLR
Title Center Loss Regularization for Continual Learning Abstract The ability to learn different tasks sequentially is essential to the development of artificial intelligence. In general, neural networks lack this capability, the major obstacle being catastrophic forgetting. It occurs when the incrementally available information from non-stationary data distributions is continually acquired, disrupting what the model has already learned. Our approach remembers old tasks by projecting the representations of new tasks close to that of old tasks while keeping the decision boundaries unchanged. We employ the center loss as a regularization penalty that enforces new tasks’ features to have the same class centers as old tasks and makes the features highly discriminative. This, in turn, leads to the least forgetting of already learned information. This method is easy to implement, requires minimal computational and memory overhead, and allows the neural network to maintain high performance across many sequentially encountered tasks. We also demonstrate that using the center loss in conjunction with the memory replay outperforms other replay-based strategies. Along with standard MNIST variants for continual learning, we apply our method to continual domain adaptation scenarios with the Digits and PACS datasets. We demonstrate that our approach is scalable, effective, and gives competitive performance compared to state-of-the-art continual learning methods. 1 INTRODUCTION Humans have the ability to continuously evolve, accumulate and transfer acquired knowledge to learn new skills throughout their lifetime. In contrast, in the classical machine learning paradigm, typically referred to as isolated learning (Chen & Liu, 2018), systems are capable of achieving high performance in learning isolated tasks or narrow domains without using previously learned knowledge. This makes them different from real-world settings where systems are expected to learn consecutive tasks with changing data distributions and unknown task boundaries. In this scenario, the intelligent agent should learn continually without forgetting the already acquired knowledge. Thus, continual learning, or traditionally called lifelong learning (Chen & Liu, 2018; Thrun, 1995; 1996; 1998; Thrun & Pratt, 2012), becomes necessary for artificial general intelligence. A significant problem in continual learning is catastrophic forgetting in neural networks, also known as catastrophic interference (McCloskey & Cohen, 1989). The newly learned information may interfere and disrupt the already learned knowledge, leading to a performance loss on old tasks (Ratcliff, 1990). The extent to which the system should be prone to refine and integrate new knowledge and retain previous information was termed as a stability-plasticity dilemma and is well-studied in many previous works. (Grossberg, 1982; 2013; Mermillod et al., 2013). This issue of catastrophic forgetting is known to exist in many different types of neural networks, from standard backpropagation networks to unsupervised networks like self-organizing maps (Richardson & Thomas, 2008; Mermillod et al., 2013). There have been several attempts to overcome catastrophic forgetting in neural networks, and the various approaches are discussed in the next section. 1.1 CONTINUAL LEARNING APPROACHES In general, current continual learning methods can be broadly categorized into three different types of strategies based on how they attempt to solve the problem of catastrophic forgetting. 
Architectural approaches mitigate catastrophic forgetting by modifying the architectural properties of the networks, e.g., adding more neurons or layers or incorporating weight freezing strategies. One of the earliest strategies in this category was Progressive Neural Networks (PNN) proposed by Rusu et al. (2016) that retains a pool of pre-trained models as knowledge and learns lateral connections among them to learn the task at hand. Another simpler strategy, Copy Weights with Re-init (CWR), was proposed (Lomonaco & Maltoni, 2017) where consolidated knowledge is maintained by isolating the subsets of weights for each class and learning rest task-specific parameters. Scalability remains an issue in this category of approaches as the network parameters explode as the number of tasks increases. Replay or rehearsal-based approaches maintain a memory buffer with samples from previous tasks to replay with the examples from the current task to strengthen the old memories. (Rolnick et al., 2018). Lopez-Paz & Ranzato (2017) proposed Gradient Episodic Memory (GEM) that favors positive backward transfer and hence mitigating forgetting by using episodic memory of samples from previous tasks. Later, Chaudhry et al. (2019) proposed a more efficient and faster version, called Averaged GEM (A-GEM). Inspired by the suggestion that the hippocampus is better paralleled with a generative model than a replay buffer (Ramirez et al., 2013; Stickgold & Walker, 2007), the replay approach was improved further by replacing the memory buffer with a generative model which could generate unlimited pseudo data from past tasks (Shin et al., 2017; Van de Ven & Tolias, 2018). Instead of using stored input samples, the replay of latent representations to mitigate forgetting is also explored in many recent works (Pellegrini et al., 2020; van de Ven et al., 2020). Regularization-based approaches attenuate catastrophic forgetting by imposing constraints on the update of the network weights (Parisi et al., 2019). It is generally formulated via additional regularization terms that penalize changes in the weights or predictions of neural network. Learning Without Forgetting (LwF) (Li & Hoiem, 2017) distills knowledge with the network’s previous version to enforce the predictions of current and previous tasks to be similar. Many recent regularization based methods apply penalty on network parameters and estimate the importance of different network parameters. Kirkpatrick et al. (2017) proposed the Elastic Weight Consolidation (EWC), which imposes a quadratic penalty on the difference between the old and new task parameters to slow down the learning on certain weights based on their importance for previous tasks (Parisi et al., 2019). In Synaptic Intelligence (SI) (Zenke et al., 2017), individual synapses are allowed to estimate their importance by computing the path integral of the gradient vector field along the parameter trajectory. Whereas Memory Aware Synapses (MAS) (Aljundi et al., 2018) computes importance based on the sensitivity of predicted output function to each parameter. On the other hand, instead of imposing penalty directly on weights, Less-Forgetful Learning (LFL) (Jung et al., 2018) regularizes the L2 distance between the new and old feature representations to preserve the previously learned input-output mappings by computing auxiliary activations with the old task parameters. 
Recent regularization works also build upon the traditional Bayesian online learning framework with variational inference (Nguyen et al., 2017; Ahn et al., 2019; Adel et al., 2020). 1.2 MOTIVATION The architectural approaches suffer from scalability issues as the number of parameters increases with the number of tasks (Parisi et al., 2019). On the other hand, rehearsal-based strategies generally require large memory buffers to store old task data for high performance. Moreover, in real-world scenarios, it is not always possible to have access to old task data. The generative replay-based methods attempt to solve this issue but are often difficult to train and computationally expensive. On the contrary, the regularization-based strategies assume that all the information essential about the old task is contained in the network weights (Kirkpatrick et al., 2017). The over-parameterization in neural networks makes it possible for the solution to a new task to be found close to the solution for the old task (Hecht-Nielsen, 1992; Sussmann, 1992; Kirkpatrick et al., 2017). Thus, the regularization strategies are generally memory efficient and computationally less expensive than the other two approaches. Our approach belongs to this category as it focuses on alleviating the catastrophic forgetting problem by regularizing the network to project the new task representations close to the old task representations while keeping the decision boundaries unchanged. We achieve this using the center loss (Wen et al., 2016) as a regularization penalty to minimize forgetting. We show that our approach successfully prevents catastrophic forgetting in a computationally efficient manner without accessing the data from old tasks. 1.3 CONTRIBUTIONS The contributions of this paper are as follows: 1. We propose a novel regularization-based continual learning strategy which we refer to as center loss regularization (CLR). 2. We compare our approach to different continual learning strategies in domain incremental scenarios and show that our approach is scalable, computationally efficient while storing minimal additional parameters. 3. We show that our approach gives a competitive performance with the state-of-the-art techniques when applied in continual domain adaptation scenarios. 2 CENTER LOSS REGULARIZATION (CLR) Deep neural networks excel at learning the hierarchical internal representations from raw input data by stacking multiple layers, which allows the system to learn complex function mappings from input to output (Farabet et al., 2013). These representations become increasingly invariant to small changes in the input as we go up the layers towards the output layer by preserving the vital information about the input related to the task (Guest & Love, 2019). The portion till the last hidden layer is considered the feature extractor, and the last fully connected layer is regarded as a linear classifier as the features extracted from the feature extractor are usually linearly separable due to the softmax activation in the top layer (Wen et al., 2016). The Less-Forgetful Learning (LFL) approach (Jung et al., 2016) demonstrated that catastrophic forgetting can be prevented if the representations of the new task are projected close to the learned representations of the old task keeping the decision boundaries unchanged. Ramasesh et al. (2020) empirically demonstrated that if the higher layers are stabilized while learning subsequent tasks, the forgetting can be mitigated significantly. 
In our approach, we freeze the weights of the last fully connected classification layer to keep the decision boundaries unchanged similar to LFL. However, it is non-trivial to make the learned features for new tasks localize nearby corresponding old task features in the latent space. The LFL solves it by using the L2 distance between the current model’s features and the computed features using the old task model as a regularization penalty to preserve the previously learned input-output mappings. However, this approach is highly memory-intensive and computationally expensive since it requires storing the entire model trained on the old task and does forward pass on it to compute representations for each novel task. Wen et al. (2016) introduced the center loss and demonstrated that the joint supervision of softmax loss and center loss helps to increase the inter-class dispersion and intra-class compactness of deeply learned features. During the training process, the model also learns the centers for each class features around which the deeply learned features are typically clustered (Wen et al., 2016). We exploit these properties of center loss in building our continual learning strategy. More details on the center loss are provided in the Appendix D. We use the center loss as a regularization penalty along with softmax loss. To enforce the model to learn the features for the new task in the proximity of corresponding old task features, we utilize the already learned class feature centers of the old task instead of storing the old model weights like LFL. While learning the new task, we enforce the new task features to be close to the already learned feature centers using the center loss, making the model project the features of all tasks in the same localized region, clustered around corresponding class feature centers. For the new task, we freeze the class centers and reduce the learning rate of feature extractor parameters to prevent significant changes in it while training with the new task. We also provide the findings of our ablation study in the Section 3.3 to analyze the effects of letting the centers and decision boundaries change while learning new tasks. As our method needs to store only the feature centers for each class throughout the lifetime, the memory requirement is significantly lower than other approaches where the agent needs to store the model weights or maintain the replay buffer. The extra memory requirements are discussed in detail in Section 3.3.1 Lt = Ls + λLc (1) Lc = 1 2 m∑ i=1 ‖fL−1(xi; θ(n))− c(o)yi ‖ 2 2 (2) θ̂(n) = argmin θ(n) Lt(x, c (o); θ(n)) +R(θ(n)) (3) The equation 1 represents the joint loss, a combination of the softmax loss Ls and the center loss Lc, which is minimized during training. Minimizing Ls helps solve the current/new task, whereas the term Lc helps retain already learned knowledge and avoid catastrophic forgetting. A scalar λ is used for balancing the two loss functions. The equation 2 defines the modified center loss Lc, where c(o) denotes the learned values of feature centers from old task. The centers are kept frozen during the subsequent tasks. This enforces the new task representations to have the same feature centers as the old task, leading the old task and new task representations to stay within proximity in the feature space, reducing the catastrophic forgetting. We obtain the equation 3 as final objective function where R(·) denotes a general regularization term, such as weight decay. 
Finally, we propose the center loss regularization algorithm, as shown in Algorithm 1. N denotes the number of training iterations, D(n) denotes the new task data, θ(o) and θ(n) denote the network parameters for the old and new task, respectively. Note that we use a single-headed model in our experiments as we primarily target domain-incremental scenario of continual learning where task identity need not be inferred at test time (Van de Ven & Tolias, 2019). The system learns to adapt to changing input distributions, but the task structure remains the same. Algorithm 1 Center Loss Regularization (CLR) Input: θ(o), c(o), N,D(n) Output: θ̂(n) 1: θ(n) ← θ(o) . initialize weights 2: Freeze the weights of the softmax classification layer. 3: for i← 1, N do . training iteration 4: for each minibatch B ∈ D(n) do 5: Backpropagate and update θ(n) for mini-batch B to minimize loss Lt +R(θ(n)) 6: end for 7: end for 8: θ̂(n) ← θ(n) 9: return θ̂(n) 3 EXPERIMENTS In this section, we first detail the experimental protocols for evaluating lifelong learning algorithms (Section 3.1). We have compared our proposed method against different continual learning methods, which are specified in Section 3.2. Finally, we report our results in Section 3.3. 3.1 EXPERIMENTAL PROTOCOLS Permuted & Rotated MNIST (Kirkpatrick et al., 2017) are variants of the original MNIST dataset (LeCun, 1998). In Permuted MNIST, a fixed permutation of the image pixels is applied to each task’s training and test set. In Rotated MNIST, each task consists of images rotated by a fixed angle between 0 and 180 degrees. We chose 10000 training and 5000 testing samples for both these variants. Each task has a test set with the same rotation transformation as the training set for that task. In these experiments, the model encounters 10 tasks in sequence, each with a unique rotation angle. We evaluate the model on the test sets of the respective datasets after training on each task. We use the fully connected neural network (MLP) with two hidden layers of 100 units, each with ReLU activation for our experiments. We train the network using Adam optimizer (Kingma & Ba, 2014) on mini-batches of 64 samples for 1 epoch over training set per task with a learning rate of 3 · 10−3 for the first task and 3 · 10−4 for subsequent tasks. Our method can also be applied in the supervised continual domain adaptation settings where the model needs to adapt to the new domains without degrading the performance on the previously seen domains (Jung et al., 2018; Volpi et al., 2021). Digit Recognition We consider four widely used datasets for digit recognition in continual domain adaptation setting: MNIST (LeCun, 1998), SVHN (Netzer et al., 2011), MNIST-M (Ganin & Lempitsky, 2015) and SYN (Ganin & Lempitsky, 2015). To assess the performance of our approach, we train our network on these datasets, one by one in the sequence MNIST → MNIST-M → SYN → SVHN, considering each dataset as one task. This way, the network adapts to harder domains continually. For this protocol, 10000 training and 5000 testing samples are chosen for each dataset, and all images are resized to 28 x 28 pixels. We convert the MNIST dataset to 3-channel images by repeating the original channel 3 times for compatibility with the remaining datasets. We use the ResNet18 (He et al., 2016) architecture and train the network on each domain for 20 epochs, with a batch size of 64. We use Adam optimizer (Kingma & Ba, 2014) with a learning rate of 3 · 10−4, which is reduced to 3 · 10−5 after the first domain. 
PACS dataset (Li et al., 2017) is typically used to assess the domain generalization scenarios. It consists of four domains, namely Photo (1,670 images), Art Painting (2,048 images), Cartoon (2,344 images), and Sketch (3,929 images). Each domain contains seven categories. We use this dataset to evaluate our approach in a continual domain adaptation setting. We trained the network in sequence Sketches → Cartoons → Paintings → Photos where images become more realistic with the new domain. For each domain, the dataset is split into 70% training and the remaining 30% as testing samples. All images are resized to 224 x 224 pixels and standardized with the mean and standard deviation of the ImageNet (Deng et al., 2009) dataset. This is due to the fact that this protocol uses ResNet18 (He et al., 2016) network trained on ImageNet dataset. We train the network on each domain for 5 epochs, with a batch size of 64. We use Adam optimizer (Kingma & Ba, 2014) with a learning rate of 3 · 10−4, which is reduced to 3 · 10−5 after the first domain. 3.2 METHODS We compare the performance of the proposed approach with that of the state-of-the-art regularization-based strategies. First, we test the fine-tuning approach in which a single model is trained across all tasks, which is the naive approach. Second, we test the LwF (Li & Hoiem, 2017) method, which uses only new task data to train the network while preserving the original capabilities. Further, we compare the performance with EWC (Kirkpatrick et al., 2017), SI (Zenke et al., 2017) and MAS (Aljundi et al., 2018) methods which try to estimate synaptic importance of different network parameters and use that information to regularize the weights. We also test the LFL (Jung et al., 2018) method, which tries to position the features extracted by the new network, close to the features extracted by the old network. Moreover, we explore frameworks like Uncertaintyregularized Continual Learning (UCL) (Ahn et al., 2019) and Variational Continual Learning (VCL) (Nguyen et al., 2017) based on variational inference. VCL employs a projection operator through KL divergence minimization. UCL solves the drawbacks of VCL by proposing the concept of nodewise uncertainty. Then, we consider two oracle methods: If we assume access to every domain at every point in time, we can either train on samples from the joint distribution from the beginning oracle (all), or grow the distribution over iterations oracle (cumulative). With access to samples from all domains, oracles are not generally exposed to catastrophic forgetting; yet, their performance is not necessarily an upper bound (Lomonaco & Maltoni, 2017). We also compare our approach with several replay-based strategies. We use the basic experiencereplay method as a baseline and examine if using CLR as surrogate loss along with experience replay can enhance the performance. Further, we compare its performance with state-of-the-art memory-based methods like average gradient episodic memory (A-GEM) (Chaudhry et al., 2019) and GDumb (Prabhu et al., 2020) methods. We also compare CLR with recent continual domain adaptation technique called domain randomization and meta-learning (Meta-DR) (Volpi et al., 2021) for continual domain adaptation experiments. For Permuted and Rotated MNIST tasks, we experiment with four different memory sizes of 10, 20, 50, and 100 examples per task. For the Digits dataset, we experiment with 100, 200, 300 replay examples per task. 
For the PACS dataset, we experiment with 10, 20, and 30 replay examples per task.

The metrics used to evaluate all considered methods are the average accuracy (ACC) and backward transfer (BWT). ACC is the model's accuracy averaged over all encountered tasks, and BWT represents the amount of forgetting at the end of training on all tasks (Lopez-Paz & Ranzato, 2017). The formulae for the metrics are presented in detail in Appendix B. Each experiment in the paper is carried out for 5 trials, and the final values are reported as the mean and standard deviation over trials.

3.3 RESULTS
Table 1 compares the performance of different regularization-based approaches on the Rotated and Permuted MNIST datasets. From this table, we can observe that our approach, CLR, outperforms all other methods. VCL shows competitive performance on Permuted MNIST, but its amount of forgetting (BWT) is worse than that of CLR, whereas LwF shows competitive performance in terms of forgetting but less adaptability to new information. Figure 1 presents the evolution of the average accuracy (ACC) and the first-task accuracy throughout all the tasks for the MNIST variants. This shows that the center loss regularization helps to mitigate catastrophic forgetting on earlier tasks while maintaining high performance on all the tasks.

In Table 2, we report the results of replaying samples from episodic memory along with our proposed approach for both MNIST variants. We can observe that using center loss regularization significantly improves the performance over the plain experience replay strategy, and the improvement is consistent as we increase the memory size. It also outperforms the state-of-the-art memory-based methods A-GEM and GDumb. This shows that our approach can also be used as a surrogate loss along with replay-based strategies to enhance performance and reduce forgetting.

We provide an ablation study of our approach in Table 3. We demonstrate how the performance changes if we do not freeze the centers and classifier weights after training on the first task is completed. The first row in the table represents the approach proposed in Section 2. We try multiple values of the hyperparameter λ for each experiment and report the best-performing result in each row. These results show that the best performance is generally achieved when we freeze the centers and the decision boundaries after the first task. Moreover, we also demonstrate how the hyperparameter λ affects the overall performance of CLR in Appendix Figure 3.

Tables 4 and 5 report the performance of our proposed method compared with different methods in the continual domain adaptation setting on the Digits and PACS datasets, respectively. We note that our approach outperforms the naive, EWC, and SI strategies by a significant margin on both benchmarks, and it demonstrates competitive performance compared to the LwF and LFL strategies. Having access to samples from older tasks for replay can reduce catastrophic forgetting significantly compared to regularization methods, which do not have access to the data of old domains. Thus, we also examine whether using CLR along with Experience Replay (ER) can boost the performance. For both benchmarks, we observe that using CLR with ER significantly improves the overall performance, and the improvement is consistent with the increase in memory size. The memory size column denotes the number of replay samples per task.
These results suggest that the center loss regularization helps the model successfully adapt to new domains without considerable performance degradation on old domains.

3.3.1 COMPARISON OF ADDITIONAL MEMORY REQUIREMENT
Generally, regularization-based methods store the old network parameters for regularization or knowledge distillation. In Table 6, we compare the extra memory requirement of different regularization-based methods in terms of the number of additional parameters beyond the base network parameters, and we explain why CLR is the cheapest option from this perspective.

To quantify the importance of weights for previous tasks, EWC needs to compute and store the diagonal of the Fisher matrix for each task, which has the same number of elements as the network parameters. Additionally, the optimal parameters from previous tasks are stored to compute the quadratic regularization penalty. Moreover, a few samples from previous tasks are maintained in our experiments to compute the Fisher matrix after each task is completed. Thus, given that k is the number of encountered tasks and p is the number of network parameters, the space complexity of the additional memory usage in EWC becomes O(k · p). In contrast to EWC, SI computes parameter-specific importance online. It therefore does not need to store extra parameters for each task, but it maintains the previous task's model parameters and a regularization strength for each network parameter to compute the surrogate loss. Thus, the space complexity of the additional memory becomes O(p) in the case of SI. MAS also requires storing a synaptic importance for each model parameter, needing a similar amount of memory as SI. VCL stores a variance term for each weight parameter for both the current and previous models; hence, it requires roughly three times as many additional parameters as the original model. UCL addresses this drawback of VCL by computing uncertainty (importance) at the node level, reducing the total required parameters to almost half of VCL's (Ahn et al., 2019). However, CLR outdoes both of them with even fewer parameters. Moreover, both LwF and LFL require storing the old network in memory to compute the knowledge distillation (LwF) or feature distance (LFL) loss for the previous task. Like SI, these two methods have a space complexity of O(p) for additional memory. CLR pursues a similar objective to LFL, projecting old and new task features in close proximity. However, CLR is computationally and memory-wise more efficient than LFL and LwF because it does not need to store the old model or perform forward passes through it, which significantly reduces memory usage and training time.

Hence, all of the above methods require a large amount of extra memory, which further increases with the number of tasks or the network size. In contrast, CLR only stores the feature centers of each class, which significantly reduces the additional memory requirement compared to the previous methods. Thus, the space complexity of the extra memory usage in CLR comes down to O(n · d), where n is the number of classes in each task and each feature center is d-dimensional.

4 LIMITATIONS AND FUTURE DIRECTIONS
There are several exciting research directions to extend our work for continual learning. Our method requires knowledge of task boundaries, which may not always be available. CLR does not leverage task descriptors, which could be exploited to obtain positive forward transfer.
Further, novel approaches can be developed to extend our method to task-incremental and class-incremental learning, where task IDs need to be inferred. In this paper, we applied CLR to the supervised classification problem in continual learning. It would be an interesting research direction to exploit the properties of CLR for other problems, such as regression and dimensionality reduction, in the continual learning setting. Moreover, the effects of using other discriminative representation learning approaches (Hadsell et al., 2006; Sun, 2015; Deng et al., 2017; Zhang et al., 2017; Liu et al., 2017; Chen et al., 2017; Wan et al., 2018; Qi & Zhang, 2018; Wang et al., 2018a;b) could be studied for continual learning.

5 CONCLUSION
In this paper, we proposed a new regularization-based strategy for continual learning, referred to as center loss regularization (CLR). It utilizes the power of the center loss to learn discriminative features and uses the learned feature centers to project new task features in the proximity of old task features, transferring knowledge and avoiding catastrophic forgetting. Our method was effective in overcoming catastrophic forgetting when applied to standard continual learning benchmarks as well as continual domain adaptation benchmarks. It is scalable and computationally efficient; it does not store previous data and requires minimal additional network parameters. Our extensive experiments consistently demonstrate the competitive performance of CLR against state-of-the-art regularization strategies for continual learning.

A TRAINING DETAILS
We used one NVIDIA GeForce TITAN X GPU for training the models in all our experiments, with CUDA version 10.1 on a Linux machine. We built our models and implemented the different methods using the PyTorch deep learning framework and the Avalanche library (Lomonaco et al., 2021) for continual learning. Hyperparameter search was done using grid search. Next, we provide the best hyperparameter values specific to each method. Here, λ denotes the importance of the regularization penalty for EWC, LwF, SI, LFL and MAS, whereas β controls the speed at which the standard deviation (σ) of the weight parameters changes in UCL. VCL does not need any additional hyperparameters. In CLR, λ denotes the importance of the center loss and α denotes the rate at which the centers are allowed to change during the first task.

B EVALUATION METRICS
Lopez-Paz & Ranzato (2017) introduced the Average Accuracy (ACC) and Backward Transfer (BWT) evaluation metrics for continual learning, which we use in our experiments. For evaluation, we maintain a test set for each of the T tasks. After learning a new task t_i, we evaluate the model's test performance on all T tasks. A_{i,j} is the test classification accuracy of the model on task t_i after observing the last data sample from task t_j. The metrics are computed as follows:

Average Accuracy (ACC) = (1/T) Σ_{i=1}^{T} A_{i,T}    (4)

Backward Transfer (BWT) = (1/(T−1)) Σ_{i=1}^{T−1} (A_{i,T} − A_{i,i})    (5)

The larger the value of ACC, the better the model. If two models have similar ACC, the one with the higher BWT is usually preferred.

C ADDITIONAL PLOTS
Figure 2 shows the plots of average accuracy (over all 10 tasks) at the end of training on each task. This is different from the plots presented in Figure 1, where we take the average over the encountered tasks only.
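Before moving on, here is a small NumPy sketch of how the ACC and BWT metrics from Appendix B might be computed from the per-task accuracy matrix that also underlies plots like Figures 1 and 2. The function name and the toy values are our own illustration, not taken from the paper.

import numpy as np

def acc_bwt(acc):
    """acc[i, j] = A_{i,j}: test accuracy on task i after training on task j."""
    T = acc.shape[0]
    final = acc[:, T - 1]                                 # A_{i,T} for every task i
    ACC = final.mean()                                    # Equation 4
    BWT = (final[:T - 1] - np.diag(acc)[:T - 1]).mean()   # Equation 5
    return ACC, BWT

# Toy example with T = 3 tasks; accuracy on old tasks drops slightly over time.
acc = np.array([[0.95, 0.92, 0.90],
                [0.00, 0.94, 0.93],
                [0.00, 0.00, 0.96]])
print(acc_bwt(acc))  # ACC = 0.93, BWT = -0.03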
In Figure 3, we show the performance of CLR for various values of the hyperparameter λ under the different protocols. The value of λ at the peak of each protocol's curve was chosen for the corresponding experiment.

D DISCRIMINATIVE FEATURE LEARNING WITH CENTER LOSS
A typical deep neural network architecture comprises an input layer, followed by several hidden layers with non-linear activation functions, and an output layer. The output layer generally has a softmax activation function for multi-class classification. This last fully connected layer acts as a linear classifier that separates the deeply learned features produced by the last hidden layer. The softmax loss forces the deep features of different classes to stay apart. The discriminative power of the learned features is enhanced if intra-class compactness and inter-class separability are maximized simultaneously. Though the features learned using the softmax loss are separable, they are not discriminative enough for open-set supervised problems and often exhibit high intra-class variance, which adversely affects the generalization capabilities of neural networks. Several works (Wen et al., 2016; Deng et al., 2017; Zhang et al., 2017; Liu et al., 2017; Wang et al., 2018b;a; Chen et al., 2017; Wan et al., 2018; Qi & Zhang, 2018) have proposed variants of the softmax loss to enhance its discriminative power. Siamese-network-based approaches (Koch et al., 2015), which use the contrastive loss (Sun, 2015; Hadsell et al., 2006) or the triplet loss (Schroff et al., 2015), learn the embeddings directly. These approaches face the problems of semi-hard sample mining and a combinatorial explosion in the number of pairs or triplets, which significantly hinder effective model training (Deng et al., 2019). There are also angular margin penalty-based approaches that have shown significant improvements over the softmax loss and have been explored in various directions, especially for large-scale face recognition (Liu et al., 2017; Wang et al., 2018b;a; Liu et al., 2016; Deng et al., 2019).

Wen et al. (2016) introduced the center loss for discriminative feature learning in deep face recognition. The joint supervision of softmax loss and center loss is used to obtain inter-class dispersion and intra-class compactness by simultaneously learning the centers and minimizing the distances between the deep features and their corresponding class centers. The center loss has the same input requirements as the softmax loss and needs no complex recombination of the training samples, unlike the contrastive and triplet losses, which suffer from dramatic data expansion. The center loss is defined as follows:

L_c(x; θ, c) = (1/2) Σ_{i=1}^{m} ||f_{L−1}(x_i; θ) − c_{y_i}||^2_2    (6)

In Equation 6, x_i denotes the i-th sample, belonging to class y_i, and c_{y_i} ∈ R^d denotes the class center of the deep features for class y_i. The mini-batch size and the feature dimension are m and d, respectively. L is the total number of layers, f_{L−1} is the feature vector of layer L−1, which lies just before the softmax classifier layer, and θ denotes the network parameters. This formulation effectively characterizes the intra-class variations. In each iteration, the centers are updated by averaging the features of the corresponding classes. The deep features learned using the center loss are highly discriminative, clustered around the corresponding class centers, and linearly separable by the final fully connected layer, which acts as a linear classifier.
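As an illustration of Equation 6, a minimal PyTorch sketch of the center loss with learnable per-class centers is shown below. The class name is our own; Wen et al. (2016) update the centers with a separate rule and learning rate, which here is simply approximated by exposing the centers as a learnable parameter.

import torch
import torch.nn as nn

class CenterLoss(nn.Module):
    """Center loss of Equation 6 with learnable class centers c_y in R^d."""
    def __init__(self, n_classes, feat_dim):
        super().__init__()
        self.centers = nn.Parameter(torch.randn(n_classes, feat_dim))

    def forward(self, feats, labels):
        # 0.5 * sum_i || f_{L-1}(x_i; theta) - c_{y_i} ||_2^2 over the mini-batch
        return 0.5 * (feats - self.centers[labels]).pow(2).sum()

# Joint supervision L_t = L_s + lambda * L_c: the centers can be given their own
# learning rate (alpha in Appendix A) when passed to the optimizer.
center_loss = CenterLoss(n_classes=10, feat_dim=100)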
Figure 4 presents visualizations of the features obtained using the softmax loss alone (left) and the joint supervision of softmax and center loss (right). Wen et al. (2016) provide a detailed analysis and extensive experiments on the center loss and its application to discriminative feature learning.
1. What is the focus of the paper regarding continual learning?
2. What are the strengths and weaknesses of the proposed approach, particularly in its efficiency and simplicity?
3. How does the reviewer assess the novelty and originality of the paper's contributions?
4. What are some concerns or questions regarding the paper's writing quality, notation, and experimental settings?
Summary Of The Paper Review
Summary Of The Paper
The paper focuses on a regularization-based approach for continual learning. It proposes to freeze the weights of the last fully connected classification layer to keep the decision boundaries unchanged, similar to previous work. The goal is then to make the learned features for new tasks localize near the corresponding old-task features in the latent space. This is addressed by using the center loss (previously introduced in Wen et al. (2016)) as a regularization penalty for the softmax loss. The model is forced to learn the features for the new task in the proximity of the stored old class feature centers. For new-task learning, the learning rate of the feature extractor is reduced to prevent significant changes.

Review
On the positive side, the method is very efficient in terms of memory requirements and is very simple. The results show a minor to moderate accuracy boost compared to direct competitors. There are, however, many negative points. The contribution is very incremental: it uses the idea from Jung et al. (2016) for continual learning and keeps the new class features close to old ones using the center loss regularizer from Wen et al. (2016). The writing is poor: the formulation is not well defined, the setting is not defined at all, and some of the notation is not defined, e.g., x_i, y_i, L−1, m. It is not clear what scenario was used. It is said explicitly that domain-incremental learning is done on PACS, but the task is not specified clearly in the rest of the experiments.
ICLR
1. What is the main contribution of the paper regarding task incremental learning?
2. What are the strengths and weaknesses of the proposed regularization strategy?
3. Do you have any concerns about the experimental setup or comparisons with other works?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any questions or remarks regarding the paper's organization, motivation, or conclusions?
Summary Of The Paper Review
Summary Of The Paper
This paper proposes a regularization strategy for task-incremental (a.k.a. domain-incremental) learning. The strategy is tested on the Rotated-MNIST, Permuted-MNIST, Digits, and PACS datasets. The claimed contributions are: (1) a novel regularization-based strategy for CL (center loss regularization, aka CLR), (2) computational and memory efficiency, (3) competitive performance w.r.t. SOTA CL strategies in domain-incremental scenarios.

Review
The approach is quite simple and easy to follow, which is a good thing. However, while digging a bit deeper into the paper, some essential details are difficult to find, such as the dimension of c^{(0)} or even a clear description of what precisely one center is: "the learned values of feature centers from old task" is not completely clear to me. Is it the mean feature vector for one class/task?

The motivation paragraph should be reworked to be more rigorous. I believe there is interest in studying regularization-based methods, but, for example, "the regularization strategies are generally memory efficient and computationally less expensive than the other two approaches [Architectural & Replay]" is probably false. For example, on Rotated-MNIST (one of the datasets experimented with in this paper), a vanilla rehearsal strategy is probably more memory- and computationally efficient than most regularization strategies while being very effective. On the other hand, architectural approaches could also be efficient in such settings. Therefore, I believe that the motivation section should be better grounded to be more convincing.

Concerning the approach, what is the risk of freezing the output layer to learn new tasks? Might it create problems in some domain-incremental settings, e.g., a first task with very simple data followed by a second task with much more complex data?

The experiments contain interesting results with many baselines. I particularly appreciated the results on the Digits and PACS datasets. However, I did not fully understand why the baselines used for Rotated/Permuted MNIST are different from the baselines used for Digits and PACS. For example, why are the Meta-DR results not in the Rotated/Permuted MNIST experiments? This approach looks quite effective without replay in the other experiments. Why is GDumb not reported in Tables 5 and 6 while it is reported in Table 2? I believe the experiment section should be reorganized to better present what the authors want to show. I also think that important regularization baselines are missing, such as mean-IMM and mode-IMM ("Overcoming Catastrophic Forgetting by Incremental Moment Matching", Lee et al.) and KFAC-EWC ("Online Structured Laplace Approximations for Overcoming Catastrophic Forgetting", Ritter et al.). Maybe there is a good reason not to compare to them, but it should be explained.

The authors claim the computational efficiency of their approach. However, they do not provide any measurement of computational cost, nor an estimation of the scaling behavior of their approach.

Some remarks: Section 1.1 reviews the bibliography but does not link it with the proposed approach. The link is explained more in the motivation section (1.2), but then 1.1 does not explain how the proposed approach fits into the existing literature. The results of Tables 4/5/6 should not be placed after the conclusion but inside (or just above) the results section.
Regarding the results, since the classes are the same for all tasks, it could be interesting to evaluate the accuracy on future tasks to better understand the progress. In Table 6, the unit of measurement is not given for the additional memory requirement. In the conclusion, "Our method was effective in overcoming catastrophic forgetting when applied to the standard continual learning benchmarks as well as continual domain adaptation benchmarks." — it would be more accurate to say that the approach was effective on domain-incremental benchmarks.
ICLR
Title Visual Timing For Sound Source Depth Estimation in the Wild

Abstract
Depth estimation enables a wide variety of 3D applications, such as robotics and autonomous driving. Despite significant work on various depth sensors, it is challenging to develop an all-in-one method that meets multiple basic criteria. In this paper, we propose a novel audio-visual learning scheme that integrates semantic features with physical spatial cues to boost monocular depth estimation with only one microphone. Inspired by the flash-to-bang theory, we develop FBDepth, the first passive audio-visual depth estimation framework. It is based on the difference between the time-of-flight (ToF) of light and of sound. We formulate sound source depth estimation as an audio-visual event localization task for collision events. To approach decimeter-level depth accuracy, we design a coarse-to-fine pipeline that pushes the temporal localization accuracy from the event level to the millisecond level by aligning audio-visual correspondence and manipulating optical flow. FBDepth feeds the estimated visual timestamp together with the audio clip and the object's visual features to regress the source depth. We use a mobile phone to collect 3.6K+ video clips with 24 different objects at distances of up to 60 m. FBDepth shows superior performance compared to monocular and stereo methods, especially at long range.

1 INTRODUCTION
Depth estimation is the fundamental functionality that enables 3D perception and manipulation. Although there have been significant efforts to develop depth estimation methods with various sensors, current depth estimation schemes fail to achieve a good balance across multiple basic metrics, including accuracy, range, angular resolution, cost, and power consumption.

Active depth sensing methods actively emit signals, such as LiDAR (Caesar et al., 2020), structured light (Zhang, 2012), mmWave (Barnes et al., 2020), ultrasound (Mao et al., 2016), and WiFi (Vasisht et al., 2016). They compare the reflected signal with the reference signal to derive time-of-flight (ToF), phase change, or Doppler shift to estimate the depth. Active methods can achieve high accuracy because of their physical foundations and well-designed modulated sensing signals. Lidar is the most attractive active sensor due to its large sensing range and dense point cloud. However, the density is not sufficient to provide a fine angular resolution, so the points are too sparse for objects to be recognized at a long distance. Besides, the prohibitive cost and power consumption limit the availability of Lidar on general sensing devices.

Passive depth sensing directly uses signals present in the environment. It commonly relies on an RGB monocular camera (Bhoi, 2019; Laga et al., 2020), a stereo camera (Cheng et al., 2020), a thermal camera (Lu & Lu, 2021), or multi-view cameras (Long et al., 2021a). These sensors can achieve pixel-wise angular resolution and consume far less energy since they do not emit signals. Among them, stereo matching can effectively estimate the disparity and infer a dense depth map, since it transforms spatial depth into visual disparity based on a solid physical law. The baseline of the stereo camera determines the effective range and accuracy; therefore, the physical dimensions of the stereo camera become the critical trade-off against sensing metrics. Thanks to advances in deep learning, cheap monocular depth estimation keeps improving with new network structures and high-quality datasets.
However, the accuracy is still not satisfactory, especially at long range, because such methods can only regress depth from implicit visual cues. The problem is ill-posed without any physical formulation. Besides, it relies heavily on the dataset and requires domain adaptation and camera calibration for various camera intrinsics (Li et al., 2022). In this paper, we propose to add only one microphone to enable explicit physical depth measurement and boost the performance of a single RGB camera. Our approach does not rely on camera intrinsics or implicit visual cues. We develop a novel passive depth estimation scheme with a solid physical formulation, called Flash-to-Bang Depth (FBDepth). Flash-to-Bang is used to estimate the distance to a lightning strike from the difference between the arrival time of the lightning flash and the thunder crack. This works because light travels roughly a million times faster than sound: when the sound source is several miles away, the delay is large enough to be perceptible. Applying it to our context, FBDepth estimates the depth of a collision that triggers audio-visual events. The collision event has been explored for navigation and physical search in (Gan et al., 2022), but our work is the first to use collisions for depth estimation. Collisions are common and can arise when a ball bounces on the ground, a person takes a step, or a musician hits a drum. We identify and exploit several unique properties of collisions in the wild. First, the duration of a collision is short and collision events are sparse, so there are few overlapping collisions. Second, although the motion of objects changes dramatically after the collision, they are almost static at the collision moment. Third, the impact sound is loud enough to propagate over a long range. Flash-to-Bang is applied over ranges of miles for human perception. Using it for general depth estimation poses several significant challenges: (i) Ground-truth collision time is inaccessible from either video or audio. Video offers at most 240 frames per second (fps) and may not capture the exact instant when the collision occurs. Audio has a high sampling rate, but it is hard to detect the start of a collision solely from the collision sound, due to the different sound patterns of collisions and ambient noise. (ii) We need a highly accurate collision time: a 1 ms error can result in a depth error of 34 cm. (iii) Noise present in both audio and video further exacerbates the problem. To realize our idea, we formulate sound source depth estimation as an audio-visual localization task. Whereas existing work (Wu et al., 2019; Xia & Zhao, 2022) focuses on 1-second-segment-level localization, FBDepth performs event-level localization by aligning the correspondence between the audio and the video. Apart from the audio-visual semantic features used as input in existing work (Tian et al., 2018; Chen et al., 2021a), we incorporate optical flow to exclude static objects with similar visual appearances. Furthermore, FBDepth uses the impulse-like change of optical flow to locate collision moments at the frame level. Finally, we formulate the ms-level estimation as an optimization problem over video interpolations: FBDepth interpolates the best collision moment by maximizing the intersection between extrapolations of the before-collision and after-collision flows. With the estimated timestamp of the visual collision, we regress the sound source depth from the audio clip and visual features. 
FBDepth avoids the need to know the timestamp of the audio collision. Besides, different objects have subtle differences in audio-visual temporal alignment. For example, a rigid body generates its sound peak as soon as it touches another body, whereas an elastic body produces little sound during the initial contact and takes several ms to produce the peak at maximum deformation. We feed semantic features to make the network aware of the material, size, etc. Our main contributions are as follows: 1. To the best of our knowledge, FBDepth is the first passive audio-visual depth estimation framework. It brings the physical propagation property into audio-visual learning. 2. We introduce the ms-level audio-visual localization task. We propose a novel coarse-to-fine method to improve temporal resolution by leveraging the unique properties of collisions. 3. We collect 3.6K+ audio-visual samples across 24 different objects in the wild. Our extensive evaluation shows that FBDepth achieves 0.64 m absolute error (AbsErr) and 2.98% AbsRel across a wide range from 2 m to 60 m. In particular, FBDepth shows larger improvements at longer ranges. 2 RELATED WORK Multi-modality depth estimation. Recent work on depth estimation has shown the benefits of fusing cameras with other active sensors. (Qiu et al., 2019; Imran et al., 2021) recover dense depth maps from sparse LiDAR point clouds and a single image. (Long et al., 2021b) associates pixels with very sparse radar points to achieve superior accuracy. The effective range can be increased as well by LiDAR-camera (Zhang et al., 2020) or radar-camera (Zhang et al., 2021) fusion. However, these methods are still expensive in cost and power consumption. (Gao et al., 2020; Parida et al., 2021) emit audio chirps and learn the depth map implicitly from audio reflections and a single image. However, these methods require many nearby acoustic reflectors to produce effective echoes, so the setup is limited to rooms; besides, they are evaluated in an audio-visual simulator. FBDepth only uses one extra microphone to perceive natural sounds directly. It keeps the passive design of the audio but applies the physical measurement explicitly. The one-way sound propagation has a longer effective range than echoes. Sound source localization. Previous systems localize sound sources with microphone arrays (Valin et al., 2003; Rascon & Meza, 2017) or one microphone with a camera (Hershey & Movellan, 1999). They estimate the direction of arrival (DOA) or the distance. The DOA is inferred from the subtle difference in arrival time from the sound source to each microphone (Mao et al., 2019; Sun et al., 2022) or by semantic matching with the visual appearance if images are given (Tian et al., 2018; Arandjelovic & Zisserman, 2018). The distance can be estimated by triangulation with multiple DOAs and room structures (Wang et al., 2021; Shen et al., 2020). Many works study room acoustics and the distance cues from reverberation (Singh et al., 2021; Chen et al., 2021b), but (Zahorik, 2002) shows that reverberation only coarsely encodes distance. Compared to these methods, FBDepth directly estimates the distance from the ToF and achieves superior accuracy to indirect triangulation methods and implicit depth-learning networks based on reverberation. Audio-visual event localization aims to detect and localize events in videos. (Tian et al., 2018) first propose the task and build the audio-visual event (AVE) dataset. 
They apply an audio-guided visual attention mechanism to learn visual regions associated with the sounding object or motion. Recent works develop a dual-modality sequence-to-sequence framework (Lin et al., 2019) and a dual attention matching mechanism (Wu et al., 2019) to leverage global features. However, the temporal event boundary in the AVE dataset is at the 1-s level, so videos are split into 1-s segments. We study the instantaneous collision event and thereby also address the coarse-boundary problem. (Gan et al., 2022) has a similar setup to ours: they use an embodied robot agent to navigate to a dropped object in 3D virtual rooms, integrating asynchronous vision and audition, where the asynchrony comes from the invisibility of the object. Even though their simulator is vivid enough for semantic tasks, it has a gap from real-world collisions for our ms-level formulation. The Falling Objects dataset (Kotera et al., 2020), the TbD dataset (Kotera et al., 2019), and the TbD-3D dataset (Rozumnyi et al., 2020) explore falling motions and fast movements, but they contain no audio or depth information. Video frame interpolation aims to synthesize intermediate frames between existing ones in a video. Most state-of-the-art approaches explicitly or implicitly assume simplistic linear motion. Warping-based methods (Baker et al., 2011; Park et al., 2020) apply optical flow and forward warping to shift pixels to intermediate frames linearly. Phase-based methods (Meyer et al., 2015; 2018) combine phase information across different scales, but the phase is modeled as a linear function of time. Recent methods approximate non-linear motion, such as kernel-based methods (Niklaus et al., 2017a;b), quadratic interpolation (Xu et al., 2019a), and cubic motion modeling (Chi et al., 2020). However, they still fail on complex non-linear motions because precise motion dynamics cannot be captured in the blind time between keyframes. Unfortunately, collisions are highly non-linear and nearly instantaneous: given two keyframes before and after the collision, it is ambiguous whether a collision happened at all. Hence, these methods are not applicable. We instead analyze the motions before and after the collision and extrapolate optical flows to find the most likely collision position. 3 PROBLEM FORMULATION We formulate depth estimation via the physical law of wave propagation: d/v − d/c = T, (1) where d is the depth of the sound source, T is the difference between the ToF of the sound and the light, and c and v denote the propagation speeds of light and sound, respectively. We can estimate d as d = cvT/(c − v) ≈ vT since c ≫ v. We observe T = Taudio − Tvideo + Thardware, where Taudio and Tvideo denote the event time in the audio and video recordings, respectively, and Thardware denotes the start-time difference between the audio and video recordings. Thardware can be small and have a small variance on a well-designed media system such as the Apple AVFoundation framework, so we regard it as a constant unknown bias to learn. It is impossible to label the precise Tvideo and Taudio manually. Tvideo can be tagged at frame level at best: even though many commercial cameras support up to 240 FPS, one frame still spans a 4-ms segment, corresponding to a 1.43 m depth variation (see the numeric sketch below). Moreover, it is hard for a human to determine the exact frame nearest to the collision in high-FPS mode due to the constrained view of the camera. 
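The following minimal numeric sketch (ours, not part of the paper's implementation) evaluates Equation (1) with standard propagation speeds and reproduces the sensitivity figures quoted above: roughly 34 cm of depth per 1 ms of timing error and about 1.43 m per 240-fps frame.

```python
# Numeric sketch of Eq. (1): d/v - d/c = T, so d = c*v*T / (c - v) ~= v*T.
# Constants are standard textbook values, not taken from the paper.
C_LIGHT = 3.0e8   # speed of light, m/s
V_SOUND = 343.0   # speed of sound in air at ~20 C, m/s

def depth_from_delay(T: float) -> float:
    """Exact solution of Eq. (1) for the source depth d, given the delay T."""
    return C_LIGHT * V_SOUND * T / (C_LIGHT - V_SOUND)

print(depth_from_delay(0.1))                    # ~34.3 m for a 100 ms audio-visual delay
print(depth_from_delay(0.1) - V_SOUND * 0.1)    # error of the d ~= v*T approximation: ~40 um
print(V_SOUND * 1e-3)                           # ~0.34 m of depth error per 1 ms of timing error
print(V_SOUND / 240)                            # ~1.43 m of ambiguity per 240-fps frame
```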
Taudio is challenging to recognize in the wild as well. Although the audio sampling rate is high, we can only recognize the prominent early peaks rather than the first sample triggered by the collision; based on real data, the best segmentation effort is at the 10-ms level. We therefore cannot learn the timestamps with direct supervision. We propose a two-stage estimation framework. The goal of the first stage is to estimate the numerical Tvideo. As Figure 1 shows, we localize the audio-visual event in the stream and then take advantage of the unique optical flow of the collision to estimate Tvideo at the ms level. In the second stage, we place Tvideo as an anchor into the audio clip and directly regress the depth with depth supervision. We let the network optimize Taudio automatically given knowledge of Tvideo, the audio waveform, and the visual features. 4 APPROACH We present a novel coarse-to-fine pipeline that localizes the collision at a super-fine temporal resolution in the video. This method does not require ms-level annotations, which would be at least two orders of magnitude finer than previous approaches; those rely on the supervision of segment annotations, such as the AVE dataset with 1-second segments (Tian et al., 2018), the Lip Reading Sentences 2 dataset with word-level segments (Chung & Zisserman, 2016), and BOBSL with sentence-level alignments (Bull et al., 2021). 4.1 EVENT-LEVEL LOCALIZATION Audio-visual modeling for collisions. In this step, our goal is to localize the audio-visual event to a region and period of interest. The setting is similar to (Tian et al., 2018), but the unique properties of collisions bring new opportunities to the learning strategy. Collisions involve more significant motion than other sound sources, so we can use optical flow to inform the network of moving pixels. Besides, the impact sound is highly correlated with rich properties of the objects (Gan et al., 2022), such as shape, material, size, and mass. This makes audio-visual cross-matching easier than for general audio-visual events, so we do not need a complex learning scheme. Another fact is that collisions are temporally sparse in the wild because their duration is extremely short, so overlapping collisions are rare. In our empirical study on a basketball court, only two frames contained double collisions among all 1208 frames and a total of 203 collisions while 7 basketballs were played over a 40-s duration. We propose a motion-guided audio-visual correspondence network (MAVNet). Similar to (Tian et al., 2018; Wu et al., 2019), MAVNet performs cross-matching between the audio features and the RGB-F channels. Besides, it predicts an audio-visual segmentation to capture all pixels of the target object, which enables fine-grained audio-visual scene understanding (Zhou et al., 2022). We use the segmentation mask to filter the flows of interest and perform high-resolution estimation in the next steps. MAVNet has two backbones that process the RGB-F channels and the audio clip, respectively. A U-Net-style encoder (Ronneberger et al., 2015) extracts frame features conditioned on optical flow, using a series of convolution layers. The other branch is the audio encoder, which takes the time-domain signal as input; it has a 1D convolution layer to learn an STFT-like representation and a stack of 2D convolution layers with batch normalization to learn semantic audio features. We replicate the audio features, tile them to match the visual feature dimensions, and concatenate the audio and visual feature maps. 
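As a rough illustration of the tile-and-concatenate fusion just described, here is a PyTorch-style sketch; the channel sizes, the 1x1 fusion convolution, and all names are our assumptions rather than the authors' implementation.

```python
import torch
import torch.nn as nn

class AudioVisualFusion(nn.Module):
    """Illustrative sketch: tile a global audio embedding over the spatial
    dimensions of the visual feature map and concatenate along channels."""
    def __init__(self, vis_ch: int = 256, aud_dim: int = 128):
        super().__init__()
        self.fuse = nn.Conv2d(vis_ch + aud_dim, vis_ch, kernel_size=1)

    def forward(self, vis_feat: torch.Tensor, aud_feat: torch.Tensor) -> torch.Tensor:
        # vis_feat: (B, vis_ch, H, W) from the RGB-F encoder
        # aud_feat: (B, aud_dim) global embedding from the audio encoder
        B, _, H, W = vis_feat.shape
        aud_tiled = aud_feat[:, :, None, None].expand(B, -1, H, W)
        return self.fuse(torch.cat([vis_feat, aud_tiled], dim=1))

# Example: fused = AudioVisualFusion()(torch.randn(2, 256, 32, 32), torch.randn(2, 128))
```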
MAVNet has two output heads. The U-Net decoder applies a series of up-convolutions with skip connections from the RGB-F encoder to the fused feature maps to learn the binary segmentation mask M. Meanwhile, the fused feature map is fed into a binary classification head consisting of convolution layers and linear layers to predict the audio-visual event relevance y ∈ {0, 1}. Training. We use a weighted sum of binary cross-entropy (BCE) losses as the training objective for both segmentation and cross-matching, and train all components jointly to optimize the location predictions and energy reconstruction. We minimize the total loss Ltotal = BCE(M, M̂) + λ · BCE(y, ŷ), where λ is a hyperparameter. Inference. We run MAVNet only at a low FPS to avoid dense inference at this stage. Moreover, we do not activate the segmentation head until the audio clip and the frame are highly matched. Finally, MAVNet uses this audio clip to retrieve a sequence of frames that includes the full collision procedure. 4.2 FRAME-LEVEL LOCALIZATION Given a sequence of video frames, our goal is to split them into two sets: the frames before the collision V0 and the frames after the collision V1. This essentially requires determining the last frame Ie in V0 before the collision and the first frame Is in V1 after the collision; the collision is thus located between frames Ie and Is. Based on an analysis of the physical motion, we make an important observation that helps determine Ie and Is: the collision causes a significant acceleration change due to the strong impulse force. Let at = vt − vt−1 and δat = at − at−1 denote the acceleration and acceleration change at frame It, respectively. δa between Ie and Is is large, while δa between adjacent frames before or after the collision is small. If the object stops moving immediately after the collision, we take the static frame Ie+1 as Is. Finally, we select the frames before Ie to form V0 and the frames after Is to form V1. We use the mask retrieved in the previous stage to determine the object positions in the frames and compute the velocity, acceleration, and acceleration change. We find Ie and Is at low FPS and then repeat the procedure for the frames between Ie and Is at high FPS, which locates Ie and Is in the high-FPS mode efficiently. 
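The frame-level split can be sketched in a few lines; this is our illustration only, and the threshold, the use of mask centroids, and the exact index bookkeeping are simplifying assumptions rather than the paper's implementation.

```python
import numpy as np

def locate_collision_frames(centroids: np.ndarray, thresh: float = 5.0):
    """centroids: (N, 2) per-frame object positions derived from the masks.
    Returns (e, s): indices of the last pre-collision and first post-collision
    frames, based on the largest acceleration change |delta a_t|."""
    v = np.diff(centroids, axis=0)                    # per-frame velocity  v_t
    a = np.diff(v, axis=0)                            # acceleration        a_t
    da = np.linalg.norm(np.diff(a, axis=0), axis=1)   # |delta a_t|
    t = int(np.argmax(da))
    if da[t] < thresh:
        return None                                   # no clear collision found
    # da[t] involves frames t..t+3; picking the middle pair is a heuristic.
    return t + 1, t + 2
```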
4.3 MS-LEVEL LOCALIZATION To further locate the exact moment of the collision, we would like to interpolate frames between Ie and Is to recover the skipped instant. Unfortunately, the common assumption of frame-based interpolation breaks down completely here. Motion consistency is fundamental for spatio-temporal video processing: if the motion of the object is temporally stable across several frames (e.g., due to a constant force), its position and pose can be predicted in future frames as well as interpolated between two frames. We denote this as motion first consistency. However, the impact sound is caused by an impulse force, which produces a rapid change of the motion status and breaks motion continuity and consistency: observing only Ie and Is, we cannot determine whether a collision happened or the object simply flew through the air. Luckily, the collision moment retains a new form of motion consistency, which we denote motion second consistency: the motions before and after the collision share the same intersection position, and each separately keeps the motion first consistency. Based on these two properties, we can extrapolate the motions using the motion first consistency and search for the most similar motion extrapolations by leveraging the motion second consistency. Note that our final goal is to find the timestamp of the collision rather than the motion status at the shared position. (Kotera et al., 2019; Rozumnyi et al., 2020) also try to recover sub-frame motions and trajectories, but they require high-FPS ground truth to guide training; in our context, we care more about when the collision happens than what it looks like. Optical flow extrapolation. Optical flow is widely used for frame prediction and interpolation (Baker et al., 2011) by warping the frame with the estimated optical flow, because it captures the motion of every pixel and gives a finer understanding of the object dynamics. The optical flow sequence is usually generated from adjacent video frames. However, this is not efficient for extrapolation: the drift of pixels in the flow requires extra iterative warpings to align corresponding pixels, which accumulates errors. Therefore, we compute the optical flows from an anchor frame Ia to the frame sequence V = {I0, I1, ..., In}, obtaining the flow sequence Fa→V = {fa→0, fa→1, ..., fa→n}. Since fa→n(x, y) represents the movement of pixel Ia(x, y) to In, Fa→V(x, y) describes how the pixel Ia(x, y) moves across the frame sequence V; hence, Fa→V tracks the global motion of each pixel without iterative warpings. With the historical positions of Ia(x, y) from frame I0 to In, we can regress the motion of this pixel and extrapolate the flow to fa→n+δt, the relative pixel position at In+δt for an arbitrary δt. In our context, we pick k consecutive frames before the collision, Vpre = {Ie−k+1, Ie−k+2, ..., Ie}, and after the collision, Vpost = {Is+k−1, Is+k−2, ..., Is}. We select the frame Ie as the anchor frame: it is near the collision moment, so its motion relative to the other frames is not dramatic and is easy to estimate. Hence, we can estimate the optical flow sequences Fe→Vpre and Fe→Vpost. Meanwhile, we apply the predicted segmentation mask of Ie to filter the pixels of the target object. In the last step, we build regressors R for each pixel's motion individually and predict its location at any sub-frame time. Optical flow interpolation. We have constructed pixel-level regressors for Fe→Vpre and Fe→Vpost. They can extrapolate the flows fe→e+δt0 and fe→s+δt1, respectively, where δt0 and δt1 are the extrapolation steps. The optimization goal is min_{e−s ≤ δt1 ≤ 0 ≤ δt0 ≤ s−e} ||fe→e+δt0 − fe→s+δt1||2, s.t. e + δt0 < s + δt1. The collision duration is s + δt1 − (e + δt0), which is always greater than 0, and e + δt0 is the target ms-level localization T̂video. We could instead apply this interpolation methodology to search for the intersection of the object's center trajectory or to maximize the Intersection over Union (IoU) of the object's bounding box; however, both use only a few key points, so they cannot achieve the fine granularity that optical flow attains by exploiting thousands of pixels. 
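A simplified sketch of this procedure follows; it is ours, and the quadratic per-pixel motion model, the grid search over δt0/δt1, and all names are illustrative assumptions (the paper does not specify the regressor form).

```python
import numpy as np

def fit_pixel_regressors(flow_seq: np.ndarray, times: np.ndarray) -> np.ndarray:
    """flow_seq: (T, P, 2) flows from the anchor frame I_e to each of T frames,
    restricted to the P object pixels; times: (T,) frame timestamps.
    Fits an independent quadratic in time to each pixel's x/y displacement."""
    _, P, _ = flow_seq.shape
    A = np.vander(times, 3)                       # columns [t^2, t, 1]
    coeffs = np.empty((P, 2, 3))
    for p in range(P):
        coeffs[p, 0] = np.linalg.lstsq(A, flow_seq[:, p, 0], rcond=None)[0]
        coeffs[p, 1] = np.linalg.lstsq(A, flow_seq[:, p, 1], rcond=None)[0]
    return coeffs

def extrapolate(coeffs: np.ndarray, t: float) -> np.ndarray:
    """Predicted flow (P, 2) at an arbitrary sub-frame time t."""
    return coeffs @ np.array([t * t, t, 1.0])

def estimate_collision_time(pre, post, t_e, t_s, steps=100):
    """Grid-search dt0 in [0, t_s - t_e] and dt1 in [t_e - t_s, 0] to minimize
    the L2 distance between the two extrapolated flows, subject to
    t_e + dt0 < t_s + dt1; returns t_e + dt0 as the ms-level estimate."""
    best, best_t = np.inf, t_e
    for dt0 in np.linspace(0.0, t_s - t_e, steps):
        for dt1 in np.linspace(t_e - t_s, 0.0, steps):
            if t_e + dt0 >= t_s + dt1:
                continue
            dist = np.linalg.norm(extrapolate(pre, t_e + dt0) -
                                  extrapolate(post, t_s + dt1))
            if dist < best:
                best, best_t = dist, t_e + dt0
    return best_t
```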
4.4 DEPTH REGRESSION Based on the estimated T̂video, we directly regress the depth, fitting Taudio and the bias Thardware with the supervision of ground-truth depth. We observe that the sound generation procedure varies widely across objects, materials, shapes, and motions. On the one hand, the diverse waveforms make it impractical to measure the exact Taudio manually; on the other hand, each specific waveform has significant implications for the best Taudio corresponding to T̂video. To combat background noise from other sources, we also feed the RGB-F crop of the target object from frame Ie to the depth predictor. It includes the semantic features of the object as well as its motion status just before the collision; these cues can guide the predictor to find the waveform pattern easily. We select a sequence of audio samples starting from Ie and label anchor samples as 1 at T̂video, which directly informs the audio sequence of the timestamp of the visual collision. We feed the enriched sequence into a 1D convolution layer to extract a 2D representation, followed by two residual blocks to learn high-dimensional features. Meanwhile, we use ResNet-18 (He et al., 2015) to extract the RGB-F features of the target object. We tile and concatenate the RGB-F features with the audio features along the channel dimension and append another two residual blocks to fuse the features. Finally, a pooling layer and a fully connected layer predict the depth; the output is mapped to depth via the 2D projection. We use the mean squared error (MSE) Ldepth = ||d − d̂||2 as the learning objective, where d and d̂ are the target and predicted depth. 5 EXPERIMENTS 5.1 SETUP Dataset platform and collection. We use an iPhone XR with a 240-fps slow-motion mode to collect the video with audio; the audio sampling rate is 48 kHz. We set up a stereo camera and a LiDAR together to collect ground truth. Details of the data collection are given in Appendix B. AVD dataset. We collect 3.6K+ raw audio-visual sequences, each with a single collision event, as the audio-visual depth (AVD) dataset. We randomly sample raw sequences to generate train/val/eval splits of 2600/500/522 sequences. We augment the raw sequences by cropping a moving object from one raw video sequence and inserting it into another raw sequence at a random temporal location. Besides, we perturb the raw depth with at most a 3% random change to diversify the depths and shift the audio samples accordingly relative to the video timestamps. More details are described in Appendix B. Baselines. We include three types of baselines for comparison. We compare to a monocular depth estimation method, NeWCRFs (Yuan et al., 2022), which is state of the art (SOTA) on multiple benchmarks. We also compare to stereo matching methods, including the ZED built-in ultra depth estimation SDK and a SOTA method, LEAStereo (Cheng et al., 2020). We use dense depth maps collected by the LiDAR to finetune NeWCRFs and LEAStereo on images collected by the stereo camera. Besides optical-flow-based interpolation, we compare to interpolation using key points such as the trajectories of centers or bounding boxes. Metrics. We use the mean absolute depth error AbsErr = (1/n) Σi |d − d̂|, the root mean square error RMSE = sqrt((1/n) Σi (d − d̂)2), and the absolute relative error AbsRel = (1/n) Σi |d − d̂|/d as the end-to-end performance metrics. FBDepth produces sparse depth estimates, so we evaluate the depth of each target object. Monocular and stereo baselines, however, produce dense depth estimates for all pixels of the object, so we compare their median estimated depth with the median depth of the ground-truth dense map. We report results over different distance ranges: close (≤ 10 m), mid (10 m–30 m), and far (≥ 30 m). Intuitively, there is an upper bound on the temporal resolution, so AbsRel at close range is worse than at farther distances. 
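For reference, the three metrics written out in NumPy (the function name is ours):

```python
import numpy as np

def depth_metrics(d: np.ndarray, d_hat: np.ndarray):
    """AbsErr, RMSE and AbsRel over the evaluated objects, where d holds the
    ground-truth depths and d_hat the predicted depths."""
    abs_err = np.mean(np.abs(d - d_hat))
    rmse = np.sqrt(np.mean((d - d_hat) ** 2))
    abs_rel = np.mean(np.abs(d - d_hat) / d)
    return abs_err, rmse, abs_rel
```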
5.2 RESULTS Table 1 shows the results on depth estimation. Overall, FBDepth achieves better performance than the baselines on all metrics across different FPS settings. Several important trends can be observed. Stereo matching methods perform extraordinarily well on close objects, where a clearer view difference can be captured; their AbsErr and RMSE increase dramatically as the targets get farther, because the limited baseline cannot easily resolve the view difference. On the other hand, the AbsErr and RMSE of FBDepth grow slowly with increasing distance, while its AbsRel decreases gradually. Intuitively, there is an upper bound on the temporal resolution due to the limited FPS, the lack of an accurate timestamp, and small disturbances in the audio-video software, so FBDepth cannot easily reach centimeter-level accuracy. A farther depth can break the assumptions of stereo matching methods as well as of monocular methods, which have a fixed depth range in the training data, but FBDepth still obeys the physical propagation law in this condition. FBDepth also shows advantages over NeWCRFs. Monocular methods rely on the training set, which should include various scenarios and depths; although we apply camera remapping with the intrinsic matrix and finetuning, NeWCRFs still cannot match the performance it attains on its pretraining dataset. The implicit depth regression has difficulty with domain adaptation. In contrast, stereo methods can be applied directly to the new scenario and achieve excellent estimates, because they are grounded in the explicit spatial view difference between stereo images. FBDepth applies an explicit spatial measurement and does not rely heavily on the camera or the scenario. It requires several learned models, but these models can be applied to common cameras and microphones, and FBDepth can become more general with a more diverse dataset. We show some visual qualitative results in Appendix B.3. Compared to other methods, the delay between audio and video can be recognized visually, similar to object detection; in other words, FBDepth transforms the hard depth estimation problem into a simple, interpretable one. 5.3 ABLATION In the ablation study, we show how each stage contributes to the final results. Event-level localization. We investigate how optical flow helps detect the collision event as well as contour the object mask. We define recall and precision as the percentage of correctly recognized audio-visual events (with an IoU above 0.5) among all audio-visual events and among all recognized events, respectively. With the flow, both recall and precision improve, as the flow can work as a pre-mask to guide the network. The main recall failures come from weak collision sounds or simultaneous collisions; incorrect recognition is mainly due to similar objects in the frame. Frame-level localization. The frame rate is most relevant to the frame-level stage. We observe in Table 1 that increasing the frame rate reduces the numerical error of FBDepth. In particular, going from 30 FPS to 60 FPS yields the largest improvement, and the benefit gradually tapers off with further increases in frame rate. We observe that 30 FPS is too slow to capture sudden movements and fast dynamics, while 60 FPS is around the borderline; this is consistent with the trend of making 60 FPS the default for video recording and playback. The per-frame motion at 120 FPS and 240 FPS is even smaller, so it is more difficult to distinguish the frame Ie, but the frame error is no larger than in the low-FPS mode. Thus, 120 FPS and 240 FPS bring less improvement. 
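To make the role of the frame rate concrete, the depth ambiguity implied by one frame of timing uncertainty can be computed directly (our arithmetic, using a nominal 343 m/s speed of sound):

```python
# Depth ambiguity corresponding to one frame of timing uncertainty at each FPS.
V_SOUND = 343.0  # m/s, nominal speed of sound in air
for fps in (30, 60, 120, 240):
    frame_s = 1.0 / fps
    print(f"{fps:>3} FPS: frame = {frame_s * 1e3:5.2f} ms "
          f"-> up to {V_SOUND * frame_s:5.2f} m of depth ambiguity")
# 30 FPS: ~11.43 m, 60 FPS: ~5.72 m, 120 FPS: ~2.86 m, 240 FPS: ~1.43 m
```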
Ms-level localization. We investigate our interpolation scheme from two perspectives. First, we need to verify whether the method works. However, there is no ground-truth timestamp, so we cannot directly quantify the accuracy. Instead, we take the estimate at 240 FPS as a baseline and compare it with the estimates at lower FPS: if the algorithm produces similar numerical results from independent inputs, it is reliable. The median temporal error for 30, 60, and 120 FPS is 2.3 ms, 0.65 ms, and 0.5 ms, respectively (see the corresponding figure). Relative to the frame duration, we can compute the improvement ratio as frame duration / temporal error; the 60 FPS setting has the largest improvement, about 25x over its frame duration. This is strong evidence that our ms-level localization is reasonable and robust. Second, we compare the depth estimation performance with different interpolation strategies in Table 2. Without interpolation, we use the result from frame-level localization to predict the depth; the error is large since this timestamp is too ambiguous for depth prediction. Interpolation with the traces of centers or bounding boxes does not work well either: a few key points cannot capture the dynamics at a fine granularity. Depth regression. Without the RGB-F channels of the target object in depth regression, the estimation is less robust to ambient sound and background noise, as shown in Table 2. 6 LIMITATION AND FUTURE WORKS We classify audio-visual events into three categories by the quality and quantity of visual cues during sound production. Obvious visual cues during sound production (e.g., collisions). This is the main scenario we address in this paper. It requires both a visible procedure and an audible sound to estimate the depth. We can apply it to sports analytics, human stepping, etc. Moreover, it can collect sparse depth points and accumulate them over time; according to existing work on depth completion (Long et al., 2021b; Xu et al., 2019b), adding some accurate depth points can boost the performance of monocular depth. Indirect visual cues during sound production (e.g., speech, playing the piano). This scenario is challenging but common in everyday life. These sources do not show the vibration visually. Fortunately, there are still many visual cues: existing work on speech synthesis from lip motion (Ephrat & Peleg, 2017) and music generation from pose (Gan et al., 2020) indicates a strong semantic relationship between video and audio, and the spatial correlation still holds here. We propose to apply a high-resolution multi-frame alignment between video and audio to find the accurate propagation delay. No visual cues during sound production (e.g., car engines, mobile phone speakers). We admit that we have no way to estimate the depth when these sound sources are static, because we cannot see them at all. Luckily, we still have a chance when these sound sources move: we propose a Doppler-like formulation to associate visual cues with audio cues. Another urgent problem is that the microphone is quite challenging to synchronize with other sensors; pushing the synchronization latency to the sub-ms level would boost many applications, including FBDepth. 7 CONCLUSION In this paper, we develop a novel depth estimation method based on the "Flash-to-Bang" principle. By aligning the video with the audio and detecting the event in both, we can estimate the depth in the wild without calibration or prior knowledge about the environment or target. 
Our extensive evaluation shows that our approach yields similar errors across varying distances, whereas the errors of several existing methods increase rapidly with distance; our method is therefore particularly attractive at large distances. As part of our future work, we are interested in further enhancing the accuracy of our method, generalizing to more contexts, and using the estimated depth of the collision to estimate the depth of other objects in the scene. A BACKGROUND OF DEPTH SENSORS We provide more details on the performance of various depth sensors across multiple criteria in Table 3 and Table 4. We also list the available depth sensors and corresponding APIs on the iPhone 13 Pro in Table 5 as a typical example of how well depth estimation is already covered at short range. B DATASET DETAILS We describe the details of building the data collection pipeline for this novel task and discuss the trade-offs during data collection. B.1 PLATFORM AND COLLISION OBJECTS Figure 3 shows the data collection platform, which includes three devices. Lidar: We use a Livox Mid-70 LiDAR (LIVOX, 2021) to collect the ground-truth depth. The detection range is 90 m at 10% reflectivity, and the range precision is 2 cm. Although the point rate of the Mid-70 is low, it has a special non-repetitive scan pattern, so the point cloud can become very dense through accumulation; it is therefore best used to collect depth in static scenes. Stereo camera: We use a ZED 2i stereo camera (StereoLab, 2021) with a 12 cm baseline and a focal length of 4 mm. The large focal length is designed to increase the maximum effective range. The image resolution is 1242 by 2208 pixels; Table 3 shows detailed performance. We use the ZED 2i camera as an important depth estimation baseline. Video recorder: A camera paired with a microphone provides the basic functionality of a video recorder; however, it is very challenging to satisfy all the criteria for audio-visual depth estimation. In this experiment, we use an iPhone XR and record video with the default Camera app, which has several promising advantages. First, we can record slow-motion 1080p video at 240 fps. The frame duration is constant, so we can convert the frame number to a timestamp accurately and align it with the audio track, which has a 48 kHz sampling rate. Second, the audio-visual recording delay Thardware is as small as 1 ms and has a variance within 1 ms on the iPhone. Both specifications are critical to audio-visual depth but cannot be satisfied on other platforms such as Android phones. Calibration of the audio-visual recording framework is outside the scope of this work; unexpectedly, this calibration is quite difficult in our experience. To capture a remote scene clearly, a telephoto lens has become indispensable in recent smartphones: the Samsung S22 Ultra supports 10x optical zoom and 100x hybrid zoom, and the Pixel 6 Pro has up to 20x zoom, so their zoom performance is much superior to the iPhone's. The iPhone XR is not equipped with a telephoto lens, so we mount an ARPBEST monocular telescope to enlarge the scene at large distances. As shown in Figure 4, the image quality of our setup is a bit worse than that captured by the Pixel 6 Pro's telephoto lens; our setup therefore does not provide superior image quality compared to existing commercial camera modules on smartphones. The image taken by the Pixel 6 Pro is sharp but noisy, while the one taken by the iPhone XR with the telescope is a bit blurred. 
In this respect, our setup does not take advantage of the external telescope; overall, it resembles the hardware available on commercial mobile phones. Collision objects: As shown in Figure 8, we use 24 objects covering various masses, sizes, and shapes and six common materials: wood, metal, foam, rubber, plastic, and paper. These objects are ubiquitous in everyday life, and they do not break during collisions. B.2 COLLECTION METHODOLOGY Sensor setup: We mount the LiDAR, the stereo camera, and the iPhone on one slide. We perform camera-LiDAR calibration between the left camera of the stereo camera and the LiDAR following (Yuan et al., 2021). We use the left camera to evaluate monocular depth estimation and the stereo camera to evaluate stereo depth estimation. The mobile phone changes its field of view to fit the object at different distances, so its intrinsics are not constant; we use the frames recorded by the iPhone only for FBDepth. Collision setup: Since the point cloud is too sparse to measure the instantaneous collision, we control the collision position to obtain the ground-truth depth. First, we select an anchor position and measure the depth from the slide to the anchor with the LiDAR. Second, we perform the collision at the anchor, for example by throwing an object to collide at the anchor, striking a hammer on the anchor, or stepping on the anchor. Finally, the iPhone records the collision procedure. Besides, the LiDAR and the stereo camera record the object placed at the anchor: they record the static object corresponding to the moving object in the video frames. We set up various anchors from 2 meters to 60 meters in different environments. Data augmentation: After data cleaning and annotation, we obtain 3.6K+ raw audio-visual sequences, including 280K+ frames, as the AVD dataset. Each sequence has about 40 to 120 frames and a corresponding audio clip. We use the stereo camera to capture static images and the LiDAR to capture static depth maps. We augment the raw audio-visual sequences to contain more than a single collision by cropping one moving object from a raw video sequence and inserting it into another raw sequence at a random temporal location; meanwhile, we mix in the corresponding audio with the same time shift as the video. This yields 10K audio-visual sequences. For the event-level localization stage, we segment a 66.7 ms audio clip that includes the impact sound, sample 20 frames containing visible objects from each sequence, and pair them as positive pairs. Negative samples pair the frame with an audio clip without impact sounds or with irrelevant impact sounds. In total, we generate around 400K audio-visual pairs. Besides, we perturb the raw depth with at most a 3% random change to diversify the depths and shift the audio samples accordingly relative to the video timestamps, which mitigates the problem of discrete anchor depths. The change cannot be large, because the impulse response of the sound also depends on depth and would require more transformation than simply shifting audio samples. We also augment images with low light, flips, and rotations, and audio with diverse background noise from WHAM! (Wichern et al., 2019). 
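The depth-perturbation augmentation just described can be sketched as follows; this is our illustration, and the uniform perturbation and the use of np.roll for the sample shift are simplifications.

```python
import numpy as np

V_SOUND = 343.0        # m/s, nominal speed of sound
SAMPLE_RATE = 48_000   # Hz, as in the recording setup

def augment_depth(audio: np.ndarray, depth_m: float, max_change: float = 0.03, rng=None):
    """Perturb the ground-truth depth by up to +/-3% and shift the audio by the
    corresponding change in propagation delay.  Sketch only: the impulse
    response itself also depends on depth, which this ignores."""
    rng = rng or np.random.default_rng()
    new_depth = depth_m * (1.0 + rng.uniform(-max_change, max_change))
    delay_shift_s = (new_depth - depth_m) / V_SOUND
    shift = int(round(delay_shift_s * SAMPLE_RATE))
    return np.roll(audio, shift), new_depth
```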
B.3 SAMPLES AND VISUAL QUALITATIVE RESULTS We provide some samples and visual qualitative results. Since the objects appear small in the normal camera view, we only show the region of interest in the RGB image and depth map. The most intuitive observation is that our approach reduces the difficult depth estimation problem to one that can be judged easily from the visual samples: a human can give a coarse estimate from the given timestamps, frames, and waveforms, whereas it is nearly impossible to judge the depth visually from the RGB or stereo image alone.
1. What is the main contribution of the paper, and how does it improve upon previous methods? 2. What are the strengths and weaknesses of the proposed approach, particularly regarding its ability to handle various environmental factors? 3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? 4. Are there any concerns or limitations regarding the scope and applicability of the method, especially in terms of distance cues and object properties? 5. Would additional resources, such as example videos or a dedicated website, enhance the understanding and potential impact of the work?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper This paper presents a novel framework, called Flash-to-Bang Depth (FBDepth), for passive sound-source depth estimation. The authors use audio-visual correspondence and optical flow manipulation to get decimeter-level depth accuracy. The proposed audio-visual depth estimation system uses video, audio, and optical flow to perform event-level localization to retrieve the collision event. The main idea is based on a well-known method to estimate the distance of a lightning strike. The presented comparisons with previous depth estimation approaches show that FBDepth performs better than previous methods that use purely video, although stereo matching methods seem to be significantly better in the close range.
Strengths And Weaknesses
Strengths
- The integration of information from sound to complement visual information and yield more reliable depth is a novel and much-needed idea.
- The use of everyday cameras and microphones in smartphones makes the results quite general and reproducible.
- The ablation study allows for a good understanding of the incremental benefits of some of the major components of the framework.
Weaknesses
- There has not been any investigation of: the effect of reverberation in different environments, which can smear and sometimes even give different types of distance cues depending on the direct-to-reverberant ratio; and the effect of object size and material, which seems to be significant considering the duration of impact can vary quite a lot based on mass and stiffness, as shown by Traer et al., 2019.
- Many distance cues in the sounds seem to be ignored in the current framework.
- The lack of ground truth or a more reliable source of timestamps for verifying ms-level localization seems to be limiting.
- The method is only particularly attractive for longer distances.
Clarity, Quality, Novelty And Reproducibility The manuscript is clearly written barring a few typos and grammatical errors here and there. The figures and tables are clear and well captioned, but the language could be improved. A webpage or an appendix section with more examples and specific videos would have been a nice addition. In its current form, the manuscript leaves many questions about the used videos and associated failure modes open. The work is original and approaches an important broad idea of audio-visual cue integration for inference in the physical world through the use case of depth estimation. It seems to be reproducible if codebases and example videos are shared, but those are not present in the current form.
ICLR
Title Visual Timing For Sound Source Depth Estimation in the Wild Abstract Depth estimation enables a wide variety of 3D applications, such as robotics and autonomous driving. Despite significant work on various depth sensors, it is challenging to develop an all-in-one method to meet multiple basic criteria. In this paper, we propose a novel audio-visual learning scheme by integrating semantic features with physical spatial cues to boost monocular depth with only one microphone. Inspired by the flash-to-bang theory, we develop FBDepth, the first passive audio-visual depth estimation framework. It is based on the difference between the time-of-flight (ToF) of the light and the sound. We formulate sound source depth estimation as an audio-visual event localization task for collision events. To approach decimeter-level depth accuracy, we design a coarse-to-fine pipeline to push the temporary localization accuracy from event-level to millisecond-level by aligning audio-visual correspondence and manipulating optical flow. FBDepth feeds the estimated visual timestamp together with the audio clip and object visual features to regress the source depth. We use a mobile phone to collect 3.6K+ video clips with 24 different objects at up to 60m. FBDepth shows superior performance especially at a long range compared to monocular and stereo methods. 1 INTRODUCTION Depth estimation is the fundamental functionality to enable 3D perception and manipulation. Although there have been significant efforts on developing depth estimation methods with various sensors, current depth estimation schemes fail to achieve a good balance on multiple basic metrics including accuracy, range, angular resolution, cost, and power consumption. Active depth sensing methods actively emit signals, such as LiDAR (Caesar et al., 2020), structuredlight (Zhang, 2012), mmWave (Barnes et al., 2020), ultrasound (Mao et al., 2016), WiFi (Vasisht et al., 2016). They compare the reflected signal with the reference signal to derive time-of-flight (ToF), phase change, or Doppler shift to estimate the depth. Active methods can achieve high accuracy because of the physical fundamental and well-designed modulated sensing signals. Lidar is the most attractive active senor due to its large sensing range and dense point cloud. However, the density is not sufficient enough to enable a small angular resolution. Therefore, the points are too sparse to be recognized at a long distance. Besides, the prohibitive cost and power consumption limit the availability of Lidar on general sensing devices. Passive depth sensing takes signals from the environment for sensing directly. It commonly uses RGB monocular camera (Bhoi, 2019; Laga et al., 2020), stereo camera (Cheng et al., 2020), thermal camera (Lu & Lu, 2021), or multi-view cameras (Long et al., 2021a). These sensors can achieve pixel-wise angular resolution and consume pretty less energy due to omitting the signal emitting. Among them, stereo matching can effectively estimate the disparity and infer a dense depth map since it transforms the spatial depth to the visual disparity based on the solid physical law. The baseline of the stereo camera determines the effective range and accuracy. Therefore, the dimension of the stereo camera is placed as the critical trade-off with sensing metrics. Thanks to the advance in deep learning, the cheap monocular depth estimation keeps on improving performance with new network structures and high-quality datasets. 
However, the accuracy is still not satisfactory especially at a long range because it can only regress depth based on the implicit visual cues. It is ill-posed without any physical formulation. Besides, it heavily relies on the dataset. It requires domain adaption and camera calibration for various camera intrinsics (Li et al., 2022). In this paper, we propose to add only one microphone to enable explicit physical depth measurement and boost the performance of a single RGB camera. It does not rely on the intrinsic of cameras and implicit visual cues. We develop a novel passive depth estimation scheme with a solid physical formulation, called Flash-to-Bang Depth (FBDepth). Flash-to-Bang is used to estimate the distance to the lightning strike according to the difference between the arrival time of a lightning flash and a thunder crack. This works because light travels a million times faster than sound. When the sound source is several miles away, the delay is large enough to be perceptible. Applying it to our context, FBDepth can estimate the depth of a collision that triggers audio-visual events. The collision event has been explored for navigation and physical search in (Gan et al., 2022), but our work is the first that uses the collision for depth estimation. Collisions are common and can arise when a ball bounces on the ground, a person takes a step, or a musician hits a drum. We identify and exploit several unique properties related to various collisions in the wild. First, the duration of a collision is short and collision events are sparse. Thus, there are few overlapped collisions. Second, though the motion of objects changes dramatically after the collision, they are almost static at the collision moment. Third, the impact sound is loud enough to propagate to a long range. Flash-to-Bang is applied to the range of miles for human perception. Using it for general depth estimation poses several significant challenges: (i) It is inaccessible to ground truth collision time from video and audio. Video only offers up to 240 frames per second(fps), and may not capture the exact instance when the collision occurs. Audio has a high sampling rate but it is hard to detect the start of a collision solely based on the collision sound due to different sound patterns arising from collisions as well as ambient noise. (ii) We need highly accurate collision time. 1 ms error can result in a depth error of 34 cm. (iii) Noise present in both audio and video further exacerbate the problem. To realize our idea, we formulate the sound source depth estimation as the audio-visual localization task. Whereas existing work (Wu et al., 2019; Xia & Zhao, 2022) still focuses on 1-second-segment level localization. FBdepth performs event-level localization by aligning correspondence between the audio and the video. Apart from audio-visual semantic features as input in existing work (Tian et al., 2018; Chen et al., 2021a), we incorporate optical flow to exclude static objects with similar visual appearances. Furthermore, FBDepth applies the impulse change of optical flow to locate collision moments at the frame level. Finally, we formulate the ms-level estimation as an optimization problem of video interpolations. FBDepth succeeds to interpolate the best collision moment by maximizing the intersection between extrapolations of before-collision and after-collision flows. With the estimated timestamp of visual collision, we regress the sound source depth with the audio clip and visual features. 
FBdepth avoids the requirement to know the timestamp of audio collision. Besides, different objects have subtle differences in audio-visual temporal alignment. For example, a rigid body generates the sound peak once it touches another body. But an elastic body produces little sound during the initial collision and takes several ms to produce the peak with the maximum deformation. We feed semantic features to enable the network aware of the material, size, etc. Our main contributions are as follows: 1. To the best of our knowledge, FBDepth is the first passive audio-visual depth estimation. It brings the physical propagation property to audio-visual learning. 2. We introduce the ms-level audio-visual localization task. We propose a novel coarse-to-fine method to improve temporal resolution by leveraging the unique properties of collisions. 3. We collect 3.6K+ audio-visual samples across 24 different objects in the wild. Our extensive evaluation shows that FBDepth achieves 0.64m absolute error(AbsErr) and 2.98% AbsRel across a wide range from 2 m to 60 m. Especially, FBDepth shows more improvement in the longer range. 2 RELATED WORK Multi-modality Depth estimation. Recent work on depth estimation has shown the benefits of fusing cameras and other active sensors. (Qiu et al., 2019; Imran et al., 2021) recover dense depth maps from sparse Lidar point clouds and a single image. (Long et al., 2021b) associates pixels with pretty sparse radar points to achieve superior accuracy. The effective range can be increased as well by Lidar-camera (Zhang et al., 2020) or Radar-camera (Zhang et al., 2021). However, these methods are still expensive in cost and power consumption. (Gao et al., 2020; Parida et al., 2021) emit audio chirps and learn the depth map implicitly with audio reflections and a single image. However, these methods require many nearby acoustic reflectors to produce effective echos so the setup is limited in rooms. Besides, they are evaluated in an audiovisual simulator. FBDepth only uses one extra microphone to perceive natural sounds directly. It keeps the passive design of the audio but applies the physical measurement explicitly. The one-path sound propagation has a longer effective range than echoes. Sound source localization. Previous systems localize sound sources with microphone arrays (Valin et al., 2003; Rascon & Meza, 2017) or one microphone with a camera (Hershey & Movellan, 1999). They intend to estimate the direction of arrival(DOA) or the distance. The DOA is inferred by the subtle difference in arrival time from the sound source to each microphone(Mao et al., 2019; Sun et al., 2022) or by semantic matching with the visual appearance if given images(Tian et al., 2018; Arandjelovic & Zisserman, 2018). The distance can be estimated by triangulation methods with multiple DOAs and room structures(Wang et al., 2021; Shen et al., 2020). Many work study the room acoustic and the distance cues from the reverberation(Singh et al., 2021; Chen et al., 2021b) but (Zahorik, 2002) shows that the reverberation has a coarse coding with the distance. Compared to these methods, FBDepth directly estimates the distance by the ToF and achieves superior accuracy to indirect triangulation methods and implicitly depth learning networks on reverberation. Audio-visual event localization aims to detect and localize events in videos. (Tian et al., 2018) first propose the task and build up the audio-visual event(AVE) dataset. 
They apply an audio-guided visual attention mechanism to learn visual regions with the related sounding object or motions. Recent works develop dual-modality sequence-sequence framework (Lin et al., 2019) and dual attention matching mechanism (Wu et al., 2019) to leverage global features. However, the temporal event boundary is 1s-level in AVE dataset so it is split as 1s-long segments. We study the instant collision event and solve the coarse boundary problem as well. (Gan et al., 2022) has a similar setup to ours. They use an embodied robot agent to navigate to a dropped object in 3D virtual rooms. They integrate asynchronous vision and audition and navigate to the object. The asynchronism comes from the invisibility of the object. Even though their simulator has been pretty vivid enough for semantic tasks, it has a gap in the real-world collision for the mslevel formulation. Falling objects dataset(Kotera et al., 2020), TbD dataset(Kotera et al., 2019) and TbD-3D dataset(Rozumnyi et al., 2020) explore falling motions and fast movements but they do not have audio and depth information. Video frame interpolation aims to synthesize intermediate frames between existing ones of a video. Most state-of-the-art approaches explicitly or implicitly assume a simplistic linear motion. Warping-based methods (Baker et al., 2011; Park et al., 2020) apply optical flow and forward warping to shift pixels to intermediate frames linearly. Phase-based methods (Meyer et al., 2015; 2018) combine the phase information across different scales but the phase is modeled as a linear function of time. Recent methods are developed to approximate non-linear motion, such as kernelbased methods (Niklaus et al., 2017a;b), quadratic interpolation (Xu et al., 2019a), cubic motion modeling (Chi et al., 2020), etc. However, they still fail to complex non-linear motions because precise motion dynamics cannot be captured in the blind time between keyframes. Unfortunately, collisions are super non-linear and instant. Given two keyframes before and after the collision, it is ambiguous to decide whether there is a collision. Hence, these methods are not applicable. We analyze the motions before and after the collision and extrapolate optical flows to find the most potential collision position. 3 PROBLEM FORMULATION We formulate the depth estimation by the physical law of wave propagation. We have: d v − d c = T (1) where the depth of the sound source is d and the difference between the ToF of sound and light is T . c and v denote the propagation speeds of light and sound, respectively. We can estimate d based on d = cvTc−v ≈ vT since c ≫ v. We observe T = Taudio − Tvideo + Thardware, where Taudio and Tvideo denote the event time in the audio and video recordings, respectively, and Thardware denotes the start time difference in the audio and video recordings. It can be small as well as have a small variance with a well-designed media system such as the Apple AVFoundation framework. We regard it as a constant unknown bias to learn. It is impossible to label the precise Tvideo and Taudio manually. Tvideo can be tagged at most frame-level. Even though many commercial cameras can support up to 240 FPS, it results in a 4-ms segment and 1.43m depth variation. Moreover, it is tough to determine the exact frame that is nearest to the collision in high FPS mode by a human being due to the constrained view of the camera. Taudio is challenging to recognize in the wild as well. 
Although the audio sampling rate is high enough, we can recognize the significant early peaks instead of the first sample triggered by the collision. The best effort of segmentation is 10-ms level based on real data. We cannot learn the timestamp with supervision. We propose a 2-stage estimation framework. The goal of the first stage is to estimate the numerical Tvideo. As figure 1 shows, we localize the audiovisual event in the stream and then take advantage of the unique optical flow of the collision to estimate Tvideo at ms-level. In the second stage, we place the Tvideo as an anchor into the audio clip and direct regress the depth with depth supervision. We make the network optimize Taudio automatically with knowledge of the Tvideo, the audio waveform and visual features. 4 APPROACH We demonstrate a novel coarse-to-fine pipeline to localize the collision with a super temporal resolution in the video. This method does not require annotations on ms-level, which is at least two orders of magnitude finer than previous approaches. They rely on the supervision of segment annotations, such as AVE dataset with 1-second segments (Tian et al., 2018), Lip Reading Sentences 2 dataset with word-level segments (Chung & Zisserman, 2016), BOBSL with sentence-level alignments (Bull et al., 2021). 4.1 EVENT-LEVEL LOCALIZATION Audio-visual modeling for collisions. In this step, our goal is to localize the audio-visual event for the region and the period of interest. It is similar to (Tian et al., 2018), but the unique properties of collisions bring new opportunities to learning strategy. Collisions have a significant motion than other sound sources. We can use the optical flow to inform the network of moving pixels. Besides, the impact sound is highly correlated to the rich information of objects (Gan et al., 2022), such as shape, materials, size, mass, etc. It makes audio-visual cross-matching easier than general audiovisual events so that we do not need to apply a complex scheme to learn. Another fact is that collisions are pretty sparse temporally in the wild because the duration of collisions is extremely short. It is rare to come across overlapped collisions based on our empirical study on the basketball court. Only two frames have double collisions among all 1208 frames and a total of 203 collisions when 7 basketballs are played during a 40-s duration. We propose a motion-guided audio-visual correspondence network (MAVNet). Similar to (Tian et al., 2018; Wu et al., 2019), MAVNet performs the cross-matching for the audio features and the RGB-F channels. Besides, it predicts audio-visual segmentation to capture whole pixels of the target object. It can achieve fine-grained audio-visual scene understanding (Zhou et al., 2022). We use the segmentation mask to filter flows of interest and perform high-resolution estimation in the next steps. MAVNet has two backbones to deal with RGB-F channels and audio clips respectively. A UNet (Ronneberger et al., 2015) style encoder is applied to extract the frame features conditioned by optical flows. It uses a series of convolution layers to extract visual features. Another branch is the audio encoder which takes in the time-domain signal. It has a 1D convolution layer to learn an STFT-like representation and a stack of 2D convolution layers with batch normalization to learn the semantic audio features. We replicate the audio feature, tile them to match the visual feature dimension, and concatenate the audio and visual feature maps. 
MAVNet has two output heads as well. The U-Net decoder applies a series of up-convolutions, with skip connections from the RGB-F encoder to the fused feature maps, to learn the binary segmentation mask M. Meanwhile, the fused feature map is fed into a binary classification head, consisting of convolution layers and linear layers, to predict the audio-visual event relevance y ∈ {0, 1}.
Training. We use a weighted sum of Binary Cross Entropy (BCE) losses as the training objective for both the segmentation and the cross-matching, training all components jointly to optimize both predictions. We minimize the total loss Ltotal = BCE(M, M̂) + λ · BCE(y, ŷ), where λ is a hyperparameter.
Inference. We run MAVNet only at low FPS to avoid dense inference at this stage. Moreover, we do not activate the segmentation head until the audio clip and the frame are highly matched. Finally, MAVNet uses this audio clip to retrieve a sequence of frames that includes the full collision procedure.
4.2 FRAME-LEVEL LOCALIZATION
Given a sequence of video frames, our goal is to split them into two sets: the frames before the collision V0 and the frames after the collision V1. This essentially requires us to determine the last frame Ie in V0 before the collision and the first frame Is in V1 after the collision; the collision then lies between Ie and Is. Based on an analysis of the physical motion, we make an important observation that helps determine Ie and Is: the collision causes a significant acceleration change due to the strong impulse force. Let at = vt − vt−1 and δat = at − at−1 denote the acceleration and the acceleration change at frame It, respectively. δa between Ie and Is is large, while δa between adjacent frames entirely before or after the collision is small. If the object stops moving immediately after the collision, we take the static frame Ie+1 as Is. Finally, we select the frames before Ie to form V0 and the frames after Is to form V1. We use the mask retrieved in the previous stage to determine the object positions in the frames and to calculate the velocity, acceleration, and acceleration change. We first find Ie and Is at low FPS and then repeat the procedure on the frames between them at high FPS, efficiently locating Ie and Is in the high-FPS mode; a minimal sketch of this split is given after the next paragraph.
4.3 MS-LEVEL LOCALIZATION
To locate the exact moment of the collision, we would like to interpolate frames between Ie and Is to recover the skipped instant. Unfortunately, the common assumption of frame-based interpolation breaks down completely here. Motion consistency is fundamental for spatio-temporal video processing: if the motion of the object is temporally stable across several frames (e.g., due to a constant force), its position and pose can be predicted in future frames as well as interpolated between two frames. We call this the first motion consistency. However, the impact sound is caused by an impulse force, which produces a rapid change of the motion status and breaks motion continuity and consistency. Observing only Ie and Is, we cannot determine whether a collision happened or the object simply flew through the air. Luckily, the collision moment retains a new form of motion consistency, which we call the second motion consistency: the motions before and after the collision share the same intersection position, and each separately keeps the first motion consistency.
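As referenced above, here is a minimal sketch of the frame-level split in Section 4.2, assuming the per-frame object positions (e.g., mask centroids in pixels) have already been extracted; it finds the adjacent frame pair with the largest acceleration change and returns their indices as (e, s). The example trajectory is synthetic.

```python
# Sketch: split a frame sequence at the collision using the acceleration change.
# `positions` are per-frame object centroids taken from the predicted masks (assumption).
import numpy as np

def locate_collision_frames(positions: np.ndarray) -> tuple[int, int]:
    """positions: (n_frames, 2) object centers; returns (e, s), the last pre- and
    first post-collision frame indices, with the collision lying between them."""
    v = np.diff(positions, axis=0)                      # per-gap velocity, v[i] = p[i+1] - p[i]
    a = np.diff(v, axis=0)                              # acceleration
    da = np.linalg.norm(np.diff(a, axis=0), axis=1)     # acceleration-change magnitude
    e = int(np.argmax(da)) + 2                          # gap with the impulse ends at frame e
    return e, e + 1

if __name__ == "__main__":
    # A ball falling under roughly constant gravity that bounces between frames 3 and 4.
    heights = [20.0, 19.0, 17.0, 14.0, 14.5, 17.5, 19.5]
    pts = np.array([[5.0, h] for h in heights])
    print(locate_collision_frames(pts))                 # -> (3, 4)
```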
Therefore, we can extrapolate the motions based on the first motion consistency and search for the most similar motion extrapolations by leveraging the second motion consistency. Note that our final goal is to find the timestamp of the collision rather than the motion status at the shared position. (Kotera et al., 2019; Rozumnyi et al., 2020) also try to recover sub-frame motions and trajectories, but they require high-FPS ground truth to guide the training. In our context, we care more about when the collision happens than what it looks like.
Optical flow extrapolation. Optical flow is widely used for frame prediction and interpolation (Baker et al., 2011) by warping the frame with the estimated flow, because it captures the motion of every pixel and gives a finer understanding of object dynamics. The optical flow sequence is usually computed between adjacent video frames. However, this is not efficient for extrapolation: the drift of pixels in the flow requires extra iterative warpings to align corresponding pixels, which accumulates errors. Therefore, we compute the optical flows from an anchor frame Ia to the frame sequence V = {I0, I1, ..., In}, giving the flow sequence Fa→V = {fa→0, fa→1, ..., fa→n}. Since fa→n(x, y) represents the movement of the pixel Ia(x, y) to In, Fa→V(x, y) describes how the pixel Ia(x, y) moves across the frame sequence V. Hence, Fa→V tracks the global motion of each pixel without iterative warpings. With the historical positions of Ia(x, y) from frame I0 to In, we can regress the motion of this pixel and extrapolate the flow to fa→n+δt, the relative pixel position at In+δt for an arbitrary δt. In our context, we pick k consecutive frames before the collision, Vpre = {Ie−k+1, Ie−k+2, ..., Ie}, and after the collision, Vpost = {Is, Is+1, ..., Is+k−1}. We select the frame Ie as the anchor frame: it is near the collision moment, so its motion relative to the other frames is not dramatic and is easy to estimate. Hence, we can estimate the optical flow sequences Fe→Vpre and Fe→Vpost. Meanwhile, we apply the predicted segmentation mask of Ie to filter the pixels of the target object. In the last step, we build regressors R for each pixel's motion individually and predict its location at any sub-frame time.
Optical flow interpolation. We have constructed pixel-level regressors for Fe→Vpre and the corresponding Fe→Vpost. They can extrapolate the flows fe→e+δt0 and fe→s+δt1, respectively, where δt0 and δt1 are extrapolation steps. The optimization goal is
min_{e−s ≤ δt1 ≤ 0 ≤ δt0 ≤ s−e} ||fe→e+δt0 − fe→s+δt1||2, s.t. e + δt0 < s + δt1.
The collision duration is s + δt1 − (e + δt0), which is always greater than 0, and e + δt0 is the target ms-level localization T̂video. The same interpolation methodology could instead search for the intersection of the object's center trajectory or maximize the Intersection over Union (IoU) of the object's bounding box. However, both use only a few key points, so they cannot reach the fine granularity of optical flow, which leverages thousands of pixels.
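The following sketch illustrates this search for a single pixel (the full method aggregates thousands of masked pixels and operates on flows rather than raw positions, which differ only by the shared anchor offset): it fits low-order polynomials to the pre- and post-collision trajectories, extrapolates both into the blind interval, and grid-searches (δt0, δt1) for the best agreement. The frame times, the quadratic motion model, and the 0.01-frame search step are illustrative assumptions.

```python
# Sketch: extrapolate pre-/post-collision motion of one pixel and search for the
# most consistent intersection; the estimated T_video is e + dt0.
import numpy as np

def fit_traj(times, xy, deg=2):
    """Fit per-coordinate polynomials to a pixel trajectory; returns t -> (x, y)."""
    px = np.polyfit(times, xy[:, 0], deg)
    py = np.polyfit(times, xy[:, 1], deg)
    return lambda t: np.array([np.polyval(px, t), np.polyval(py, t)])

def ms_level_localization(pre_t, pre_xy, post_t, post_xy, e, s, step=0.01):
    """Grid-search (dt0, dt1) minimizing the distance between the two extrapolations."""
    f_pre, f_post = fit_traj(pre_t, pre_xy), fit_traj(post_t, post_xy)
    best, best_dt0 = np.inf, 0.0
    for dt0 in np.arange(0.0, s - e, step):              # forward from I_e
        for dt1 in np.arange(e - s, 0.0, step):          # backward from I_s
            if e + dt0 >= s + dt1:
                continue                                  # collision duration must be > 0
            err = np.linalg.norm(f_pre(e + dt0) - f_post(s + dt1))
            if err < best:
                best, best_dt0 = err, dt0
    return e + best_dt0                                   # estimated T_video (frame units)

if __name__ == "__main__":
    # Toy pixel: moving along x while bouncing in y between frames 3 and 4 (true impact ~3.4).
    pre_t, post_t = np.array([1.0, 2.0, 3.0]), np.array([4.0, 5.0, 6.0])
    pre_xy = np.array([[1.0, 6.72], [2.0, 4.62], [3.0, 1.52]])    # falling
    post_xy = np.array([[4.0, 2.22], [5.0, 5.12], [6.0, 7.02]])   # rebounding
    print(round(ms_level_localization(pre_t, pre_xy, post_t, post_xy, 3, 4), 2))  # ~3.39
```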
On the other hand, each specific waveform carries significant implications for the best Taudio corresponding to T̂video. To combat background noise from other sources, we also feed the RGB-F crop of the target object from frame Ie to the depth predictor. It includes the semantic features of the object as well as the motion status just before the collision; these cues guide the predictor toward the relevant waveform pattern. We select a sequence of audio samples starting from Ie and label anchor samples at T̂video as 1, which directly informs the network of the visual collision timestamp within the audio sequence. We feed the enriched sequence into a 1D convolution layer to extract a 2D representation, followed by two residual blocks that learn high-dimensional features. Meanwhile, we use ResNet-18 (He et al., 2015) to extract the RGB-F features of the target object. We tile and concatenate the RGB-F features with the audio features along the channel dimension and append another two residual blocks to fuse the features. Finally, a pooling layer and a fully connected layer predict the depth; the output is mapped to depth by a 2D projection. We use the Mean Square Error (MSE) Ldepth = ||d − d̂||^2 as the learning objective, where d and d̂ are the target depth and the predicted depth.
5 EXPERIMENTS
5.1 SETUP
Dataset platform and collection. We use an iPhone XR with a 240-fps slow-motion mode to collect video with audio. The audio sampling rate is 48 kHz. We mount a stereo camera and a Lidar together to collect ground truth. We include details of the data collection in Appendix B.
AVD Dataset. We collect 3.6K+ raw audio-visual sequences, each with a single collision event, as the audio-visual depth (AVD) dataset. We randomly sample raw sequences to generate train/val/eval splits of 2600/500/522 sequences. We augment the raw sequences by cropping one moving object from a raw video sequence and inserting it into another raw sequence at a random temporal location. Besides, we augment the raw depth with a maximum 3% random change to diversify the depths and shift the audio samples accordingly relative to the video timestamp. More details are described in Appendix B.
Baselines. We include three types of baselines for comparison. We compare to a monocular depth estimation method, NeWCRFs (Yuan et al., 2022), a state-of-the-art (SOTA) on multiple benchmarks. We also compare to stereo matching methods, including the ZED built-in ultra depth estimation SDK and a SOTA method, LEAStereo (Cheng et al., 2020). We use dense depth maps collected by the Lidar to finetune NeWCRFs and LEAStereo on images collected by the stereo camera. Besides optical-flow-based interpolation, we compare to interpolation using key points such as the trajectories of centers or bounding boxes.
Metrics. We use the mean absolute depth error AbsErr = (1/n) Σ_{i=1}^{n} |d − d̂|, the root mean square error RMSE = sqrt((1/n) Σ_{i=1}^{n} (d − d̂)^2), and the absolute relative error AbsRel = (1/n) Σ_{i=1}^{n} |d − d̂| / d as the end-to-end performance metrics. FBDepth is a sparse depth estimation: we evaluate the depth of each target object. Monocular and stereo baselines, however, produce dense depth estimates for all pixels of the object, so we compare their median estimated depth against the median depth of the ground-truth dense map. We report results over different distance ranges: close (≤ 10 m), mid (10 m-30 m), and far (≥ 30 m). Intuitively, there is an upper bound on the temporal resolution, so AbsRel at close distances is worse than at farther distances.
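For reference, here is a small sketch of the three end-to-end metrics, assuming matched arrays of ground-truth and predicted per-object depths in meters.

```python
# Sketch of the end-to-end metrics: AbsErr, RMSE, and AbsRel.
import numpy as np

def depth_metrics(d: np.ndarray, d_hat: np.ndarray) -> dict:
    """d: ground-truth depths (m); d_hat: predicted depths (m)."""
    err = d - d_hat
    return {
        "AbsErr": np.mean(np.abs(err)),
        "RMSE": np.sqrt(np.mean(err ** 2)),
        "AbsRel": np.mean(np.abs(err) / d),
    }

print(depth_metrics(np.array([5.0, 20.0, 50.0]), np.array([5.2, 19.0, 48.5])))
```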
5.2 RESULTS
Table 1 shows the results on depth estimation. Overall, FBDepth achieves better performance than the baselines on all metrics across different FPS. Several important trends can be observed. Stereo matching methods perform extremely well on close objects, where a clearer view difference can be captured; their AbsErr and RMSE increase dramatically as the targets become farther because the limited baseline cannot resolve the view difference. On the other side, the AbsErr and RMSE of FBDepth grow slowly with increasing distance while its AbsRel decreases gradually. Intuitively, there is an upper bound on the temporal resolution due to the limited FPS, the lack of an accurate timestamp, and small disturbances from the audio-video software, so FBDepth cannot easily reach centimeter-level accuracy. A farther depth can break the assumptions of stereo matching methods, as well as of monocular methods that have a fixed depth range in the training data, but FBDepth still obeys the physical propagation law in this condition. FBDepth also shows advantages over NeWCRFs. Monocular methods rely on the training set, which covers various scenarios and depths. Although we apply camera remapping with the intrinsic matrix and finetuning, NeWCRFs still cannot match the performance it achieves on its pretraining dataset; implicit depth regression has difficulty with domain adaptation. In contrast, stereo methods can be directly applied to the new scenario and achieve strong estimation because they are fundamentally based on the explicit spatial view difference between stereo images. FBDepth likewise applies an explicit physical measurement and does not rely heavily on the camera or the scenario. It requires several learned models, but these models can be applied to common cameras and microphones, and FBDepth can become more general with a more diverse dataset. We show some visual qualitative results in Appendix B.3. Compared to other methods, the delay between audio and video can be visually recognized, similar to object detection; in other words, FBDepth transforms the tough depth estimation problem into a simple, interpretable one.
5.3 ABLATION
In the ablation study, we show how each stage contributes to the final results.
Event-level localization. We investigate how optical flow helps detect the collision event as well as contour the object mask. We define recall as the fraction of correctly recognized audio-visual events among all audio-visual events, and precision as the fraction among all recognized events, counting a recognition as correct when its IoU exceeds 0.5. With the flow, both recall and precision improve, as the flow works as a pre-mask to guide the network. The main recall failures come from weak collision sounds or simultaneous collisions; incorrect recognitions are mainly due to similar objects in the frame.
Frame-level localization. The frame rate matters most for the frame-level stage. We observe in Table 1 that increasing the frame rate reduces the numerical error of FBDepth. In particular, increasing from 30 FPS to 60 FPS yields the largest improvement, and the benefit gradually tapers off with further increases in frame rate. We observe that 30 FPS is too slow to capture sudden movements and fast dynamics, while 60 FPS is around the borderline; this is consistent with the trend of using 60 FPS as the default for video recording and playback. The inter-frame motion at 120 FPS and 240 FPS is even smaller, so it is more difficult to distinguish the frame Ie, and the frame error is no larger than in the low-FPS mode.
Thus, 120 FPS and 240 FPS bring less improvement.
Ms-level localization. We investigate our interpolation from two perspectives. First, we need to verify that the method works. Since there is no ground-truth timestamp, we cannot quantify the accuracy directly; instead, we take the 240 FPS estimate as a reference and compare it with the estimates obtained at lower FPS. If the lower-FPS estimates are numerically similar despite being computed from independent input, the algorithm is reliable. In the micro-benchmark figure, the median temporal error for 30, 60, and 120 FPS is 2.3 ms, 0.65 ms, and 0.5 ms, respectively. Considering the frame resolution, we can compute the improvement ratio as frame duration / temporal error; 60 FPS has the largest improvement, about 25x over the frame duration (16.7 ms / 0.65 ms ≈ 25). This is strong evidence that our ms-level localization is reasonable and robust. Second, we compare the depth estimation performance of different interpolation strategies in Table 2. When there is no interpolation, we use the result from frame-level localization to predict the depth; the error is large since this timestamp is ambiguous for the depth prediction. Interpolation with the traces of centers or bounding boxes does not work well either, because a few key points cannot capture the dynamics at fine granularity.
Depth regression. Without the RGB-F crop of the target object in the depth regression, the estimation is less robust to ambient sound and background noise, as shown in Table 2.
6 LIMITATION AND FUTURE WORKS
We classify audio-visual events into three categories by the quality and quantity of visual cues during sound production.
Obvious visual cues during sound production (e.g., collisions). This is the main scenario we address in this paper. It requires both a visible procedure and an audible sound to estimate the depth. We can apply it to sports analytics, human stepping, etc. Moreover, it can collect sparse depth points and accumulate them over time; according to existing work on depth completion (Long et al., 2021b; Xu et al., 2019b), adding a few accurate depth points can boost monocular depth performance.
Indirect visual cues during sound production (e.g., speech, playing the piano). This scenario is challenging but common in daily life. These sources do not show the vibration visually, but fortunately there are still many visual cues. Existing work on speech synthesis from lip motion (Ephrat & Peleg, 2017) and music generation from pose (Gan et al., 2020) indicates a strong semantic relationship between video and audio, and the spatial correlation still holds here. We propose to apply a high-resolution multi-frame alignment between the video and the audio to find the accurate propagation delay.
No visual cues during sound production (e.g., car engines, mobile phone speakers). We admit that we have no way to estimate the depth when these sound sources are static, because we cannot see them at all. Luckily, we still have a chance when these sound sources move: we propose a Doppler-like formulation to associate visual and audio cues. Another pressing problem is that the microphone is challenging to synchronize with other sensors; pushing the latency to the sub-ms level would benefit many applications, including FBDepth.
7 CONCLUSION
In this paper, we develop a novel depth estimation method based on "Flash-to-Bang". By aligning the video with the audio and detecting the collision event in both, we can estimate the depth in the wild without calibration or prior knowledge about the environment or target.
Our extensive evaluation shows that our approach yields similar errors across varying distances, whereas the errors of several existing methods increase rapidly with distance. Therefore, our method is particularly attractive at large distances. As part of our future work, we are interested in further enhancing the accuracy of our method, generalizing to more contexts, and using the estimated depth to the collision to estimate the depth to other objects in the scene.
A BACKGROUND OF DEPTH SENSORS
We add more details on the performance of various depth sensors across multiple criteria in Table 3 and Table 4. In Table 5, we show the depth sensors and corresponding APIs available on the iPhone 13 Pro as a typical example of how well depth estimation is studied at short range.
B DATASET DETAILS
We describe the details of the data collection pipeline built for this novel task and discuss the trade-offs made during data collection.
B.1 PLATFORM AND COLLISION OBJECTS
Figure 3 shows the data collection platform. It includes three devices.
Lidar: We use a Livox Mid-70 Lidar (LIVOX, 2021) to collect the ground-truth depth. The detection range is 90 m at 10% reflectivity and the range precision is 2 cm. Although the point rate of the Mid-70 is low, its special non-repetitive scan pattern makes the accumulated point cloud very dense. Thus, it is best used to collect depth in a static scene.
Stereo Camera: We use a ZED 2i stereo camera (StereoLab, 2021) with a 12 cm baseline and a 4 mm focal length; the large focal length is designed to increase the maximum effective range. The image resolution is 1242 by 2208 pixels. Table 3 shows detailed performance. We use the ZED 2i camera as an important depth estimation baseline.
Video Recorder: A camera paired with a microphone provides the basic functionality of a video recorder, but it is very challenging to satisfy all the criteria for audio-visual depth estimation. In this experiment, we use an iPhone XR and record video with the default Camera app, which has several advantages. First, we can record slow-motion 1080P videos at 240 fps. The frame duration is constant, so we can transform the frame number to a timestamp accurately and align it with the audio track, which has a 48 kHz sampling rate. Second, the audio-visual recording delay Thardware is as small as 1 ms and varies by less than 1 ms on the iPhone. Both specifications are critical for audio-visual depth but cannot be satisfied on other platforms such as Android phones. Calibrating the audio-visual recording framework is out of the scope of this work; in our experience, this calibration is unexpectedly difficult. To capture a remote scene clearly, the telephoto lens has become indispensable in recent smartphones: the Samsung S22 Ultra supports 10x optical zoom and 100x hybrid zoom, and the Pixel 6 Pro supports up to 20x zoom, both much superior to the iPhone. The iPhone XR is not equipped with a telephoto lens, so we mount an ARPBEST monocular telescope to enlarge the scene at a large distance. As shown in Figure 4, the image quality of our setup is a bit worse than that captured by the Pixel 6 Pro's telephoto lens, so our setup does not provide superior image quality compared to existing commercial camera modules on smartphones. The image taken by the Pixel 6 Pro is sharp but noisy, while the one taken by the iPhone XR with the telescope is a bit blurred.
Our setup therefore does not gain an advantage from the external telescope in this respect. Overall, our setup resembles the hardware available on commercial mobile phones.
Collision Objects: As shown in Figure 8, we use 24 objects with various masses, sizes, and shapes, covering six common materials: wood, metal, foam, rubber, plastic, and paper. These objects are ubiquitous in daily life, and they do not break during the collision.
B.2 COLLECTION METHODOLOGY
Sensor setup: We mount the Lidar, the stereo camera, and the iPhone on one slide. We perform camera-Lidar calibration between the left camera of the stereo camera and the Lidar following (Yuan et al., 2021). We use the left camera to evaluate monocular depth estimation and the stereo camera to evaluate stereo depth estimation. The mobile phone changes its field of view to fit the object at different distances, so its intrinsics are not constant; we use the frames recorded by the iPhone only for FBDepth.
Collision setup: Since the point cloud is too sparse to measure the instant collision, we control the collision position to obtain the ground-truth depth. First, we select an anchor position and measure the depth from the slide to the anchor with the Lidar. Second, we perform the collision at the anchor, for example by throwing an object to collide with the anchor, striking a hammer at the anchor, or stepping on the anchor. Finally, the iPhone records the collision procedure. In addition, the Lidar and the stereo camera record the object placed at the anchor, i.e., the static counterpart of the moving object in the video frames. We set up various anchors from 2 meters to 60 meters in different environments.
Data Augmentation: After data cleaning and annotation, we obtain 3.6K+ raw audio-visual sequences, including 280K+ frames, as the AVD dataset. Each sequence has about 40 to 120 frames and a corresponding audio clip. We use the stereo camera to capture static images and the Lidar to capture static depth maps. We augment the raw audio-visual sequences to contain more than a single collision by cropping one moving object from a raw video sequence and inserting it into another raw sequence at a random temporal location; meanwhile, we shift the added audio by the same amount as the video, yielding 10K audio-visual sequences. For the event-level localization stage, we segment a 66.7-ms audio clip including the impact sound, sample 20 frames with visible objects from each sequence, and pair them as positive pairs; negative samples pair a frame with an audio clip that has no impact sound or an irrelevant impact sound. In total, we generate around 400K audio-visual pairs. Besides, we augment the raw depth with a maximum 3% random change to diversify the depths and shift the audio samples accordingly relative to the video timestamp, which addresses the problem of discrete anchor depths. The change cannot be large because the impulse response of the sound also depends on depth, which would require more transformation than simply shifting audio samples. We also augment images with low light, flips, and rotations, and audio with diverse background noise from WHAM! (Wichern et al., 2019).
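To keep the augmented samples physically consistent, the audio must be shifted by the extra propagation time implied by the depth perturbation; a minimal sketch of that computation is below (the speed of sound, the sample rate, and the use of np.roll in place of a padded shift are assumptions for illustration).

```python
# Sketch: when the ground-truth depth is perturbed by delta_d, shift the audio by
# the corresponding extra propagation time so the sample stays physically consistent.
import numpy as np

V_SOUND = 343.0        # m/s, assumed speed of sound
SAMPLE_RATE = 48_000   # Hz, matches the 48 kHz recordings

def shift_audio_for_depth_change(audio: np.ndarray, delta_d_m: float) -> np.ndarray:
    shift = int(round(delta_d_m / V_SOUND * SAMPLE_RATE))  # extra samples of delay
    return np.roll(audio, shift)  # np.roll as a simple stand-in for a padded shift

if __name__ == "__main__":
    clip = np.zeros(48_000); clip[1000] = 1.0               # impulse at sample 1000
    shifted = shift_audio_for_depth_change(clip, 0.9)        # +0.9 m -> ~126 samples later
    print(int(np.argmax(shifted)))                           # 1126
```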
The most intuitive observation is that our approach turns the difficult depth estimation problem into one that can be estimated easily from the presented samples: a human can give a coarse estimate from the given timestamps, frames, and waveforms. In contrast, it is nearly impossible to judge the depth visually from the RGB or stereo images alone.
1. What is the focus and contribution of the paper on audio-visual learning for sound source depth estimation?
2. What are the strengths of the proposed approach, particularly in its formulation and technical pipeline?
3. What are the weaknesses of the paper, especially regarding its experimental evaluation and comparison with other methods?
4. Do you have any concerns or questions regarding the collected dataset and its usage in the paper?
5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper
Motivated by the 'flash-to-bang' phenomenon, the authors propose a new audio-visual learning model for sound source depth estimation. In particular, they formulate sound source depth estimation as an audio-visual collision event localization task. To solve the task and increase depth estimation accuracy, a coarse-to-fine pipeline is introduced. To facilitate the research, the authors collected a new video dataset including 3600+ video clips with 24 objects. Experimental results show that the proposed approach can outperform the compared mono and stereo methods.
Strengths And Weaknesses
Pros:
The idea of depth estimation using audio and visual information is very interesting. Based on the difference between the time-of-flight of light and sound, the authors formulate sound source depth estimation as an audio-visual collision event localization task.
The coarse-to-fine pipeline is technically sound. From the event level to the frame level, the authors progressively increase the video event localization precision in the proposed method.
Compared to mono and stereo approaches, the proposed sound source depth estimation method achieves competitive performance.
Cons:
My biggest concern is about the evaluation. The current experimental results cannot fully validate the effectiveness of the proposed method. The authors provided numerical comparison results, but visual results are totally missing in the paper.
Some details about the collected dataset were not provided. What are the object categories in the dataset? The video lengths? The number of sound sources in the videos? The quality of the ground-truth depth? In addition, it would be better to provide some video samples from the dataset.
Were the compared monocular and stereo depth estimation methods re-trained using the new dataset? If not, the comparison is not fair.
Clarity, Quality, Novelty And Reproducibility
The authors address a very interesting problem in the paper, but the evaluation is not sufficient and some details are missing. The authors did not promise that they would release their dataset; without the dataset, it is not possible to reproduce the results in the paper.
ICLR
Title Visual Timing For Sound Source Depth Estimation in the Wild
Abstract Depth estimation enables a wide variety of 3D applications, such as robotics and autonomous driving. Despite significant work on various depth sensors, it is challenging to develop an all-in-one method that meets multiple basic criteria. In this paper, we propose a novel audio-visual learning scheme that integrates semantic features with physical spatial cues to boost monocular depth with only one microphone. Inspired by the flash-to-bang theory, we develop FBDepth, the first passive audio-visual depth estimation framework. It is based on the difference between the time-of-flight (ToF) of light and sound. We formulate sound source depth estimation as an audio-visual event localization task for collision events. To approach decimeter-level depth accuracy, we design a coarse-to-fine pipeline that pushes the temporal localization accuracy from event level to millisecond level by aligning audio-visual correspondence and manipulating optical flow. FBDepth feeds the estimated visual timestamp together with the audio clip and object visual features to regress the source depth. We use a mobile phone to collect 3.6K+ video clips with 24 different objects at up to 60 m. FBDepth shows superior performance, especially at long range, compared to monocular and stereo methods.
1 INTRODUCTION
Depth estimation is the fundamental functionality enabling 3D perception and manipulation. Although there have been significant efforts to develop depth estimation methods with various sensors, current schemes fail to achieve a good balance across multiple basic metrics, including accuracy, range, angular resolution, cost, and power consumption. Active depth sensing methods actively emit signals, such as LiDAR (Caesar et al., 2020), structured light (Zhang, 2012), mmWave (Barnes et al., 2020), ultrasound (Mao et al., 2016), and WiFi (Vasisht et al., 2016). They compare the reflected signal with a reference signal to derive the time-of-flight (ToF), phase change, or Doppler shift and thus estimate the depth. Active methods can achieve high accuracy because of their physical foundations and well-designed modulated sensing signals. Lidar is the most attractive active sensor due to its large sensing range and dense point cloud. However, the point density is not sufficient to provide a fine angular resolution, so the points are too sparse to recognize objects at a long distance. Besides, the prohibitive cost and power consumption limit the availability of Lidar on general sensing devices. Passive depth sensing takes signals from the environment directly. It commonly uses an RGB monocular camera (Bhoi, 2019; Laga et al., 2020), a stereo camera (Cheng et al., 2020), a thermal camera (Lu & Lu, 2021), or multi-view cameras (Long et al., 2021a). These sensors can achieve pixel-wise angular resolution and consume much less energy because they emit no signals. Among them, stereo matching can effectively estimate the disparity and infer a dense depth map, since it transforms spatial depth into visual disparity based on a solid physical law. The baseline of the stereo camera determines the effective range and accuracy; the physical dimension of the stereo camera therefore becomes the critical trade-off against sensing metrics. Thanks to advances in deep learning, cheap monocular depth estimation keeps improving with new network structures and high-quality datasets.
However, its accuracy is still not satisfactory, especially at long range, because it can only regress depth from implicit visual cues; it is ill-posed without any physical formulation. Besides, it relies heavily on the dataset and requires domain adaptation and camera calibration for various camera intrinsics (Li et al., 2022). In this paper, we propose to add only one microphone to enable explicit physical depth measurement and boost the performance of a single RGB camera, without relying on camera intrinsics or implicit visual cues. We develop a novel passive depth estimation scheme with a solid physical formulation, called Flash-to-Bang Depth (FBDepth). Flash-to-Bang is used to estimate the distance to a lightning strike from the difference between the arrival times of the lightning flash and the thunder crack. This works because light travels roughly a million times faster than sound: when the sound source is several miles away, the delay is large enough to be perceptible. Applying it to our context, FBDepth can estimate the depth of a collision that triggers audio-visual events. The collision event has been explored for navigation and physical search in (Gan et al., 2022), but our work is the first to use collisions for depth estimation. Collisions are common and arise when a ball bounces on the ground, a person takes a step, or a musician hits a drum. We identify and exploit several unique properties of collisions in the wild. First, the duration of a collision is short and collision events are sparse, so there are few overlapping collisions. Second, though the motion of objects changes dramatically after the collision, they are almost static at the collision moment. Third, the impact sound is loud enough to propagate over a long range. Flash-to-Bang is applied at the range of miles for human perception; using it for general depth estimation poses several significant challenges: (i) Ground-truth collision times are inaccessible from video and audio. Video offers at most 240 frames per second (fps) and may not capture the exact instant when the collision occurs. Audio has a high sampling rate, but it is hard to detect the start of a collision solely from the collision sound due to the varied sound patterns of collisions as well as ambient noise. (ii) We need a highly accurate collision time: a 1 ms error can result in a depth error of 34 cm. (iii) Noise present in both audio and video further exacerbates the problem. To realize our idea, we formulate sound source depth estimation as an audio-visual localization task, whereas existing work (Wu et al., 2019; Xia & Zhao, 2022) still focuses on 1-second-segment-level localization. FBDepth performs event-level localization by aligning the correspondence between the audio and the video. Apart from the audio-visual semantic features used as input in existing work (Tian et al., 2018; Chen et al., 2021a), we incorporate optical flow to exclude static objects with similar visual appearances. Furthermore, FBDepth uses the impulse change of optical flow to locate collision moments at the frame level. Finally, we formulate the ms-level estimation as an optimization problem over video interpolations: FBDepth interpolates the best collision moment by maximizing the agreement between extrapolations of the before-collision and after-collision flows. With the estimated timestamp of the visual collision, we regress the sound source depth from the audio clip and visual features.
FBDepth avoids the requirement of knowing the timestamp of the audio collision. Besides, different objects have subtle differences in audio-visual temporal alignment: a rigid body generates the sound peak as soon as it touches another body, while an elastic body produces little sound during the initial contact and takes several ms to produce the peak at maximum deformation. We feed semantic features to make the network aware of the material, size, etc. Our main contributions are as follows:
1. To the best of our knowledge, FBDepth is the first passive audio-visual depth estimation method. It brings the physical propagation property into audio-visual learning.
2. We introduce the ms-level audio-visual localization task and propose a novel coarse-to-fine method to improve temporal resolution by leveraging the unique properties of collisions.
3. We collect 3.6K+ audio-visual samples across 24 different objects in the wild. Our extensive evaluation shows that FBDepth achieves 0.64 m absolute error (AbsErr) and 2.98% AbsRel across a wide range from 2 m to 60 m; notably, FBDepth shows more improvement at longer range.
2 RELATED WORK
Multi-modality depth estimation. Recent work on depth estimation has shown the benefits of fusing cameras with other active sensors. (Qiu et al., 2019; Imran et al., 2021) recover dense depth maps from sparse Lidar point clouds and a single image. (Long et al., 2021b) associates pixels with very sparse radar points to achieve superior accuracy. The effective range can be increased as well by Lidar-camera (Zhang et al., 2020) or radar-camera (Zhang et al., 2021) fusion. However, these methods remain expensive in cost and power consumption. (Gao et al., 2020; Parida et al., 2021) emit audio chirps and learn the depth map implicitly from audio reflections and a single image. However, these methods require many nearby acoustic reflectors to produce effective echoes, so the setup is limited to rooms; besides, they are evaluated in an audio-visual simulator. FBDepth uses only one extra microphone to perceive natural sounds directly. It keeps the passive design of the audio but applies the physical measurement explicitly, and one-path sound propagation has a longer effective range than echoes.
Sound source localization. Previous systems localize sound sources with microphone arrays (Valin et al., 2003; Rascon & Meza, 2017) or one microphone with a camera (Hershey & Movellan, 1999). They aim to estimate the direction of arrival (DOA) or the distance. The DOA is inferred from the subtle differences in arrival time from the sound source to each microphone (Mao et al., 2019; Sun et al., 2022) or by semantic matching with the visual appearance if images are given (Tian et al., 2018; Arandjelovic & Zisserman, 2018). The distance can be estimated by triangulation methods with multiple DOAs and room structures (Wang et al., 2021; Shen et al., 2020). Many works study room acoustics and the distance cues in reverberation (Singh et al., 2021; Chen et al., 2021b), but (Zahorik, 2002) shows that reverberation encodes distance only coarsely. Compared to these methods, FBDepth directly estimates the distance from the ToF and achieves accuracy superior to indirect triangulation methods and to networks that implicitly learn depth from reverberation.
Audio-visual event localization aims to detect and localize events in videos. (Tian et al., 2018) first propose the task and build up the audio-visual event (AVE) dataset.
They apply an audio-guided visual attention mechanism to learn visual regions with the related sounding object or motions. Recent works develop dual-modality sequence-sequence framework (Lin et al., 2019) and dual attention matching mechanism (Wu et al., 2019) to leverage global features. However, the temporal event boundary is 1s-level in AVE dataset so it is split as 1s-long segments. We study the instant collision event and solve the coarse boundary problem as well. (Gan et al., 2022) has a similar setup to ours. They use an embodied robot agent to navigate to a dropped object in 3D virtual rooms. They integrate asynchronous vision and audition and navigate to the object. The asynchronism comes from the invisibility of the object. Even though their simulator has been pretty vivid enough for semantic tasks, it has a gap in the real-world collision for the mslevel formulation. Falling objects dataset(Kotera et al., 2020), TbD dataset(Kotera et al., 2019) and TbD-3D dataset(Rozumnyi et al., 2020) explore falling motions and fast movements but they do not have audio and depth information. Video frame interpolation aims to synthesize intermediate frames between existing ones of a video. Most state-of-the-art approaches explicitly or implicitly assume a simplistic linear motion. Warping-based methods (Baker et al., 2011; Park et al., 2020) apply optical flow and forward warping to shift pixels to intermediate frames linearly. Phase-based methods (Meyer et al., 2015; 2018) combine the phase information across different scales but the phase is modeled as a linear function of time. Recent methods are developed to approximate non-linear motion, such as kernelbased methods (Niklaus et al., 2017a;b), quadratic interpolation (Xu et al., 2019a), cubic motion modeling (Chi et al., 2020), etc. However, they still fail to complex non-linear motions because precise motion dynamics cannot be captured in the blind time between keyframes. Unfortunately, collisions are super non-linear and instant. Given two keyframes before and after the collision, it is ambiguous to decide whether there is a collision. Hence, these methods are not applicable. We analyze the motions before and after the collision and extrapolate optical flows to find the most potential collision position. 3 PROBLEM FORMULATION We formulate the depth estimation by the physical law of wave propagation. We have: d v − d c = T (1) where the depth of the sound source is d and the difference between the ToF of sound and light is T . c and v denote the propagation speeds of light and sound, respectively. We can estimate d based on d = cvTc−v ≈ vT since c ≫ v. We observe T = Taudio − Tvideo + Thardware, where Taudio and Tvideo denote the event time in the audio and video recordings, respectively, and Thardware denotes the start time difference in the audio and video recordings. It can be small as well as have a small variance with a well-designed media system such as the Apple AVFoundation framework. We regard it as a constant unknown bias to learn. It is impossible to label the precise Tvideo and Taudio manually. Tvideo can be tagged at most frame-level. Even though many commercial cameras can support up to 240 FPS, it results in a 4-ms segment and 1.43m depth variation. Moreover, it is tough to determine the exact frame that is nearest to the collision in high FPS mode by a human being due to the constrained view of the camera. Taudio is challenging to recognize in the wild as well. 
Although the audio sampling rate is high enough, we can recognize the significant early peaks instead of the first sample triggered by the collision. The best effort of segmentation is 10-ms level based on real data. We cannot learn the timestamp with supervision. We propose a 2-stage estimation framework. The goal of the first stage is to estimate the numerical Tvideo. As figure 1 shows, we localize the audiovisual event in the stream and then take advantage of the unique optical flow of the collision to estimate Tvideo at ms-level. In the second stage, we place the Tvideo as an anchor into the audio clip and direct regress the depth with depth supervision. We make the network optimize Taudio automatically with knowledge of the Tvideo, the audio waveform and visual features. 4 APPROACH We demonstrate a novel coarse-to-fine pipeline to localize the collision with a super temporal resolution in the video. This method does not require annotations on ms-level, which is at least two orders of magnitude finer than previous approaches. They rely on the supervision of segment annotations, such as AVE dataset with 1-second segments (Tian et al., 2018), Lip Reading Sentences 2 dataset with word-level segments (Chung & Zisserman, 2016), BOBSL with sentence-level alignments (Bull et al., 2021). 4.1 EVENT-LEVEL LOCALIZATION Audio-visual modeling for collisions. In this step, our goal is to localize the audio-visual event for the region and the period of interest. It is similar to (Tian et al., 2018), but the unique properties of collisions bring new opportunities to learning strategy. Collisions have a significant motion than other sound sources. We can use the optical flow to inform the network of moving pixels. Besides, the impact sound is highly correlated to the rich information of objects (Gan et al., 2022), such as shape, materials, size, mass, etc. It makes audio-visual cross-matching easier than general audiovisual events so that we do not need to apply a complex scheme to learn. Another fact is that collisions are pretty sparse temporally in the wild because the duration of collisions is extremely short. It is rare to come across overlapped collisions based on our empirical study on the basketball court. Only two frames have double collisions among all 1208 frames and a total of 203 collisions when 7 basketballs are played during a 40-s duration. We propose a motion-guided audio-visual correspondence network (MAVNet). Similar to (Tian et al., 2018; Wu et al., 2019), MAVNet performs the cross-matching for the audio features and the RGB-F channels. Besides, it predicts audio-visual segmentation to capture whole pixels of the target object. It can achieve fine-grained audio-visual scene understanding (Zhou et al., 2022). We use the segmentation mask to filter flows of interest and perform high-resolution estimation in the next steps. MAVNet has two backbones to deal with RGB-F channels and audio clips respectively. A UNet (Ronneberger et al., 2015) style encoder is applied to extract the frame features conditioned by optical flows. It uses a series of convolution layers to extract visual features. Another branch is the audio encoder which takes in the time-domain signal. It has a 1D convolution layer to learn an STFT-like representation and a stack of 2D convolution layers with batch normalization to learn the semantic audio features. We replicate the audio feature, tile them to match the visual feature dimension, and concatenate the audio and visual feature maps. 
MAVNet has two output heads as well. the U-Net decoder applies a series of up-convolutions and skip-connections from the RGB-F encoder to fused feature maps to learn the binary segmentation mask M . Meanwhile, the fused feature map is fed into a binary classification head consisting of convolution layers and linear layers to predict the audio-visual event relevance y ∈ {0, 1}. Training We use the weighted sum Binary Cross Entropy (BCE) loss as the training objective for both segmentation and the cross matching, We train all components to jointly optimize the location predictions and energy reconstruction. We minimize the total loss Ltotal = BCE(M,M̂) + λ ∗ BCE(y, ŷ) where λ is the hypermeter to set. Inference We only use low FPS to perform MAVNet to avoid dense inference at this stage. Moreover, we do not need to activate the segmentation head until the audio clip and the frame are highly matched. Finally, MAVNet uses this audio clip to retrieve a sequence of frames including the full collision procedure. 4.2 FRAME-LEVEL LOCALIZATION Given a sequence of video frames, our goal is to split them into two sets: the frames before the collision V0 and the frames after the collision V1. This essentially requires us to determine the last frame Ie in V0 before the collision and the first frame Is in V1 after the collision. Thus, we locate the collision between the frame Ie and Is. Based on the analysis of the physical motion, we make an important observation that can help determine Ie and Is. The collision results in a significant acceleration change due to the strong impulse force. Let at = vt − vt−1 and δat = at − at−1 denote the acceleration and acceleration change of frame It, respectively. δa between Ie and Is is large, while δa between adjacent frames before or after the collision is small. If the object stops moving immediately after the collision, we take the static frame Ie+1 as Is. Finally, we select the frames before Ie to generate V0, and select the frames after Is to generate V1. We use the retrieved mask in the last stage to determine the object positions in the frames and calculate the velocity, acceleration, and acceleration change. We find the Ie and Is at the low FPS and then replicate the procedure for frames between Ie and Is at high FPS. Finally, we locate Ie and Is in the high FPS mode efficiently. 4.3 MS-LEVEL LOCALIZATION To further locate the exact moment of the collision, we try to interpolate frames between Ie and Is to recover the skipped frame. Unfortunately, the common assumption of frame-based interpolation is fully broken down. Motion consistency is fundamental for spatio-temporal video processing. If the motion of the object is temporally stable across several frames (e.g., due to a constant force), the position and pose can be predicted in the future frames as well as be interpolated between two frames. We denote it as motion first consistency. However, the impact sound is caused by an impulse force, which results in a rapid change of the motion status. It breaks the motion continuity and consistency. When we observe Ie and Is, we cannot determine whether a collision happens or the object just flies in the air. Luckily, the collision moment retains a new form of motion consistency. We denote it as motion second consistency. It reveals that the motions before and after the collision share the same intersection position. Besides, they keep the motion first consistency separately. 
Therefore, we can extrapolate the motions based on the motion first consistency and search for the most similar motion extrapolations by leveraging motion second consistency. Note that our final goal is to find the timestamp of the collision instead of the motion status at the shared position. (Kotera et al., 2019; Rozumnyi et al., 2020) try to recover the sub-frame motions and trajectories as well but they require the high FPS ground truth to guide the training. In our context, we care more about when the collision happens than what it looks like. Optical flow extrapolation Optical flow is widely used for frame prediction () and interpolation (Baker et al., 2011) by warping the frame with the estimated optical flow. Because it can capture all motions of pixels and get a finer understanding of the object dynamics. The optical flow sequence is usually generated by adjacent video frames. However, it is not efficient for extrapolation. The drift of pixels in the flow requires extra iterative wrappings to align the corresponding pixels, which results in accumulation errors. Therefore, we compute the optical flows from an anchor frame Ia to the frame sequence V as {I0, I1, ...In}. We can estimate the flow sequence Fa→V as {fa→0, fa→1, ...fa→n}. As fa→n(x, y) represents the movement of the pixel Ia(x, y) to In, Fa→V(x, y) describes how the pixel in Ia(x, y) moves across the frame sequence V . Hence, Fa→V tracks the global motion of each pixel without iterative warpings. With the historical positions of Ia(x, y) from frame I0 to In, we can regress the motion of this pixel and extrapolate the flow to fa→n+δt, which is the relative pixel position to In+δt with an arbitrary δt. In our context, We pick k consecutive frames before the collision Vpre as {Ie−k+1, Ie−k+2, ..., Ie} and after the collision Vpost as {Is+k−1, Is+k−2, ..., Is}. We select the frame Ie as the anchor frame. It is near the collision moment, so its motion to other frames is not dramatic and easy to be estimated. Hence, we can estimate the optical flow sequences Fe→Vpre and Fe→Vpost Meanwhile, we apply the predicted segmentation mask of Ie to filter the pixels of the target object. In the last step, we build up regressors R for each pixel’s motion individually and predict future locations in any sub-frame. Optical flow interpolation We have construct pixel level regressors for Fe→Vpre and corresponding Fe→Vpost . They can extrapolate the flow fe→e+δt0 and fa→s+δt1 , respectively. δt0, δt1 are extrapolation steps. The optimization goal is to min e−s≤δt1≤0≤δt0≤s−e ||fe→e+δt0 , fa→s+δt1 ||2, s.t. e+ δt0 < s+ δt1 The collision duration is s+ δt1 − (e+ δt0), which is always more than 0. e+ δt0 is the target mslevel localization T̂video. We can apply this interpolation methodology to search the intersection of the object’s center trajectory or maximize the Intersection over Union (IoU) of the object’s bounding box. However, both only use several key points so they cannot achieve a fine granularity since the optical flow takes advantage of thousands of pixels. 4.4 DEPTH REGRESSION Based on the estimation T̂video, we directly regress the depth to fit the Taudio and the bias THardware with the supervision of ground truth depth. We observe that the sound generation procedure varies a lot across different objects, materials, shapes, and motions. On one hand, the diverse waveforms make it impractical to measure the exact Taudio manually. 
On the other hand, each specific waveform has significant implications on what is the best Taudio corresponding to T̂video. To combat the background noise from other sources, we also feed the RGB-F crop of the target object from frame Ie to the depth predictor. It includes the semantic features of the object as well as the motion status just before the collision. These cues can guide the predictor to find the waveform pattern easily. We select a sequence of audio samples starting from Ie and label some anchor samples as 1 at T̂video. It informed the audio sequence about the timestamp of the visual collision directly. We feed the enriched sequence into the 1D convolution layer to extract a 2D representation. It is followed by two residual blocks to learn high-dimension features. Meanwhile, we use ResNet-18 (He et al., 2015) to extract the RGB-F features of the target object. We tile and concatenate the RGB-F features to the audio features along the channel dimension and append another two residual blocks to fuse the features. Finally, it is followed by a pooling layer and a fully connected layer to predict the depth. The output maps to depth by the 2D projection. We use Mean Square Error (MSE) Ldepth = ||d, d̂||2 as the learning objective where d and d̂ are the target depth and the predicted depth. 5 EXPERIMENTS 5.1 SETUP Dataset platform and collection We use an iPhone XR with a 240-fps slow-motion mode to collect the video with audio. The audio sampling rate is 48Khz. We set a stereo camera and a Lidar together to collect ground truth. We include details of data collection in the Appendix B. AVD Dataset We collect 3.6K+ raw audio-visual sequences with a single collision event as the audio-visual depth(AVD) dataset. We randomly sample raw sequences to generate train/val/eval splits, which have 2600/500/522 sequences. We augment the raw sequences by cropping one moving object from a raw video sequence and inserting it into another raw sequence with a random temporal location. Besides, we augment the raw depth with a maximum 3% random change to diversify the depth and shift audio samples accordingly to the video timestamp. More details are described in Appendix B. Baselines We include three types of baseline for comparison. We compare to a monocular depth estimation method NeWCRFs (Yuan et al., 2022), a state-of-the-art(SOTA) on multiple benchmarks. We also compare to stereo matching methods including the ZED built-in ultra depth estimation SDK and a SOTA method LEAStereo (Cheng et al., 2020). We use dense depth maps collected by the Lidar to finetune the NeWCRFs and LEAStereo on images collected by the stereo camera. Despite optical flow based interpolation, we compare to interpolation using key points such as the trajectories of center or bounding boxes. Metrics We use the mean absolute depth errors as AbsErr = 1n ∑n i=1 |d − d̂|, root mean square absolute relative errors RMSE = √ 1 n ∑n i=1(d− d̂)2, AbsRel = 1 n ∑n i=1 |d−d̂| d as the end-to-end performance metrics. FBDepth is a sparse depth estimation. We evaluate the depth of each target object. However, monocular and stereo baselines have dense depth estimations for all pixels of the object. We evaluate the median estimation depth with the median depth of the ground truth dense map. We provide the results over different distance ranges as close(≤ 10m), mid(10m-30m), and far(≥ 30m). Intuitively, there is an upper bound for the temporal resolution so AbsRel at close depths performs worse than at further distances. 
5.2 RESULTS Table 1 shows the results on the depth estimation. In all, FBDepth can achieve better performance on all metrics than baselines across different FPS. Several important trends can be observed. Stereo matching methods perform extraordinarily on close objects, where more clear view difference can be captured. The AbsErr and RMSE increase dramatically as the targets become further because the limited baseline cannot resolve the view difference easily. In the other side, the AbsErr and RMSE of FBDepth grows slowly with the increasing distance while its AbsRel decreases gradually. Intuitively, there is a upper bound for the temporal resolution due to the limited FPS, the lack of the accurate timestamp and the small disturbance of audio-video software. Thus FBDepth may not achieve the centimeter level easily. A Further depth can break the assumption of stereo matching methods as well as monocular methods which has a fixed depth range of training data, but FBDepth still holds the physical propagation law in this condition. FBdepth also shows advantages on NeWCRFs. The monocular methods rely on the training set, which includes various scenarios and depths. Although we apply camera remapping with intrinsic matrix and finetuning, NeWCRFs still cannot achieve the best performance as the one in the pretrained dataset. The implicit depth regression has difficulty in domain adaption. In the contrast, stereo methods can be directly applied to the new scenario and achieve awesome estimation because its fundamental is the explicit spatial view difference on stereo images. FBDepth applies the explicit spatial measurement and does not reply on the camera and scenarios heavily. It requires several learning models but these model can be applied to common cameras and microphones. FBDepth can be more general with a more diverse dataset. We show some visual qualitative results in the Appendix B.3. Compared to other methods, the delay between audio and video can be visually recognized, which is similar to object detection. In another word, FBDepth transforms the tough depth estimation problem to a simple interpretable problem. 5.3 ABLATION In the ablation study, we show how each stage contributes to the final results. Event-level localization We invest how the optical flow can help detect the collision event as well as contour the object mask. We define recall and precision as the percentage of correct recognized audio-visual events in all audio-visual events and all recognized events with an IoU more than 0.5, respectively. With the flow, both recall and precision improve as the flow can work as a pre-mask to guide the network. The main failures in recall come from weak collision sounds or simultaneous collisions. The incorrect recognition is mainly due to similar objects in the frame. Frame-level localization Frame rate is most related to the frame-level stage. We observe that increasing the frame rate reduces the numerical error of FBDepth in Table 1. Especially, increasing 30 FPS to 60 FPS yields the largest improvement, and the benefit gradually tapers off with a further increase in the frame rate. We observe that 30 FPS is too slow to capture sudden movements and fast dynamics while 60 FPS is around the borderline. It is consistent with the trend to set 60 FPS as the default video recording and playing. The motion in 120FPS and 240 FPS is even slower so it is more difficult to distinguish the frame Ie. The frame error is no more than the one in the low FPS mode. 
Thus, 120 FPS and 240 FPS bring less improvement. Ms-level localization We investigate our interpolation from two perspectives. First, we need to verify whether the method works. However, there is no ground-truth timestamp, so we cannot directly quantify the accuracy. We instead take the estimate at 240 FPS as a reference and compare it with the estimates at lower FPS; if the algorithm produces similar numerical results from independent inputs, it is reliable. As shown in Figure 2, the median temporal error for 30, 60, and 120 FPS is 2.3 ms, 0.65 ms, and 0.5 ms, respectively. Considering the frame resolution, we can compute the improvement ratio as frame duration / temporal error. The 60 FPS setting has the largest improvement, 25x over the frame duration. This is strong evidence that our ms-level localization is reasonable and robust. Second, we compare the depth estimation performance with different interpolation strategies in Table 2. When there is no interpolation, we use the result from frame-level localization to predict the depth. The error is large since this timestamp is ambiguous for the depth prediction. Interpolation with the traces of centers or bounding boxes does not work well: a few key points cannot capture the dynamics at a fine granularity. Depth regression Without the RGB-F channels of the target object in the depth regression, the estimation is less robust to ambient sound and background noise, as shown in Table 2. 6 LIMITATIONS AND FUTURE WORK We classify audio-visual events into three categories by the quality and quantity of visual cues during sound production. Obvious visual cues during sound production (e.g., collision) This is the main scenario we address in this paper. It requires both a visible procedure and an audible sound to estimate the depth. We can apply it to sports analytics, human stepping, etc. Moreover, it can collect sparse depth points and accumulate them over time. According to existing work on depth completion (Long et al., 2021b; Xu et al., 2019b), adding a few accurate depth points can boost the performance of monocular depth estimation. Indirect visual cues during sound production (e.g., speech, playing the piano) This scenario is challenging but common in everyday life. These sources do not show the vibration visually. Fortunately, there are still many visual cues. Existing work on speech synthesis from lip motion (Ephrat & Peleg, 2017) and music generation from pose (Gan et al., 2020) indicates a strong semantic relationship between video and audio, and the spatial correlation still holds here. We propose to apply a high-resolution multi-frame alignment between the video and audio to find the accurate propagation delay. No visual cues during sound production (e.g., car engines, mobile phone speakers) We admit that we cannot estimate the depth when these sound sources are static because we cannot see them at all. Luckily, we still have a chance when these sound sources move, and we propose a Doppler-like formulation to associate visual cues with audio cues. Another pressing problem is that the microphone is quite challenging to synchronize with other sensors; pushing this latency to the sub-ms level would benefit many applications, including FBDepth. 7 CONCLUSION In this paper, we develop a novel depth estimation method based on the "Flash-to-Bang" phenomenon. By aligning the video with the audio and detecting the events in both, we can estimate the depth in the wild without calibration or prior knowledge about the environment or target.
Our extensive evaluation shows that our approach yields similar errors across varying distances. In comparison, the errors of several existing methods increase rapidly with distance. Therefore, our method is particularly attractive for large distances. As part of our future work, we are interested in further enhancing the accuracy of our method, generalizing to more contexts, and using the estimated depth to the collision to estimate the depth to other objects in the scene. A BACKGROUND OF DEPTH SENSORS We add more details on the performance of various depth sensors on multiple criteria in Table 3 and Table 4. We especially demo the available depth sensors and corresponding APIs on iPhone Pro 13 in Table 5 as a typical example that depth estimation is well studied at the short range. B DATASET DETAILS We describe the details to build up the data collection pipeline for this novel task and discuss the trade-off during the data collection. B.1 PLATFORM AND COLLISION OBJECTS Figure 3 shows the data collection platform. It includes three devices. Lidar: We use a Livox Mid-70 Lidar(LIVOX, 2021) to collect the ground truth depth. The detection range is 90 m @ 10% reflectivity. The range precision is 2cm. Although the point rate of Mid-70 is low, it has a special non-repetitive scan pattern so that the point cloud can be very dense by accumulation. Thus, it is best to be used to collect the depth in the static scene. Stereo Camera: We use a ZED 2i stereo camera(StereoLab, 2021) with a 12 cm baseline and a focal length of 4mm. The large focal length is designed to increase the maximum effective range. The image resolution is 1242 by 2208 pixels. Table 3 shows detailed performance. We use the ZED 2i camera as an important depth estimation baseline. Video Recorder: A pair of a camera and a microphone can play the basic functionalities of the video recorder. However, it is very challenging to satisfy all the criteria for the audio-visual depth estimation. In this experiment, we use an iPhone XR and record the video by the default Camera app. It has several promising advantages. First, we can record slow-motion 1080P videos with 240 fps. The frame duration is constant so that we can transform the frame number to the timestamp accurately and align it with the audio track which has a 48kHz sampling rate. Second, the audiovisual recording delay Thardware is small as 1 ms and has a small variance within 1 ms on the iPhone. Both specifications above are critical to the audio-visual depth but cannot be satisfied on other platforms such as Android phones. The calibration of the audio-visual recording framework is out of the scope of this work. It is unexpected that the calibration is pretty difficult based on our experience. To capture the remote scene clearly, the telephoto lens has become indispensable in recent smartphones. Samsung Ultra 22 can support 10x optical zoom and 100x hybrid zoom, and Pixel 6 pro has 20x zoom in all. Their zoom performance is much superior to iPhone. The iPhone XR is not equipped with a telephoto lens, so we mount an ARPBEST monocular telescope to enlarge the scene at a large distance. As shown in Figure 4, the image quality of our setup is a bit worse than the one captured by Pixel 6 Pro’s telephoto lens. Thus, our setup does not provide superior image quality compared to existing commercial camera modules on smartphones. The image taken by Pixel Pro 6 is sharp but noisy while the one taken by iPhone XR with the telescope is a bit blurred. 
Our setup does not take advantage of the external telescope from this perspective. Overall, our setup resembles the hardware available on commercial mobile phones. Collision Objects: In Figure 8, We use 24 objects including various masses, sizes, shapes, and six common materials: wood, metal, foam, rubber, plastic, and paper. These objects are ubiquitous every day. Besides, they do not break down during the collision. B.2 COLLECTION METHODOLOGY Sensor setup: We mount the Lidar, the stereo camera, and the iPhone on one slide. We perform camera Lidar calibration between the left camera of the stereo camera and the Lidar according to (Yuan et al., 2021). We use the left camera to evaluate the monocular depth estimation and use the stereo camera to evaluate the stereo depth estimation. The mobile phone changes the field of view to fit the object at different distances. Hence, its intrinsic is not constant. We use the frames recorded by iPhone only for FBDepth. Collision setup: Since the point cloud is too sparse to measure the instant collision, we control the collision position to get the ground truth depth. First, we select an anchor position and measure the depth from the slide to the anchor by the Lidar. Second, we perform the collision at the anchor. For example, we throw an object to collide with the anchor or strike a hammer into the anchor or step the shoes on the anchor. Finally, the iPhone records the collision procedure. Besides, the Lidar and the stereo camera record the object placed at the anchor. They record the static object corresponding to the moving object in the video frames. We set up various anchors from 2 meters to 60 meters in different environments. Data Augmentation: After data cleaning and annotation, we get 3.6K+ raw audio-visual sequences, including 280K+ frames as te AVD dataset. Each sequence has about 40 to 120 frames and a corresponding audio clip corresponding. We use the stereo camera to capture static images and use the lidar to capture static depth maps. We augment the raw audio-visual sequences to have more than a single collision by cropping one moving object from a raw video sequence and augmenting it to another raw sequence with a random temporal location. Meanwhile, we add up the audio sequence with the same time shift as the video. We have 10K audio-visual sequences. For the event-level localization stage, we segment an audio clip of 66.7ms including the impact sound and sample 20 frames including visible objects from each sequence and pair them as positive pairs. Negative samples pair the frame with the audio clip without impact sounds or with irrelevant impact sounds. Finally, we generate around 400K audiovisual pairs. Besides, we augment the raw depth with a maximum 3% random change to diversify the depth and shift audio samples accordingly to the video timestamp. It can solve the problem of discrete anchor depths. The change cannot be significant because the impulse response of sound is also related to depth. It requires more transformation than just shifting audio samples. We also augment images with low light, flip and rotation, and audio with diverse background noise from WHAM!Wichern et al. (2019). B.3 SAMPLES AND VISUAL QUALITATIVE RESULTS We provide some samples and visual qualitative results. Considering the objects are small in the normal camera, we only show the region of interest in the RGB image and depth map. 
The most intuitive observation is that our approach turns the difficult depth estimation problem into one that can be estimated easily from the visualized samples: humans can give a coarse estimate from the given timestamps, frames, and waveforms. In contrast, it is nearly impossible to judge the depth visually from the RGB image or the stereo image alone.
1. What is the main contribution of the paper in terms of depth estimation? 2. What are the strengths of the proposed approach, particularly in utilizing both image and audio signals? 3. What are the weaknesses of the paper regarding its assumptions, experimental results, and clarity? 4. How does the reviewer assess the novelty and reproducibility of the paper's content?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper This paper presents a deep-neural-network-based depth estimation method, FBDepth, which makes use of both video and audio signals to estimate the depth of an object involved in a collision event. FBDepth is a two-stage method: in the first stage it estimates the visual timing, and in the second stage it regresses the object depth using the image, audio, visual timing, and optical flow as inputs. To train the network, the authors collect a new dataset named the audio-visual depth (AVD) dataset. The evaluation results show that FBDepth achieves superior results compared to monocular and stereo image-based depth estimation methods. Strengths And Weaknesses Strengths S1. This is the first work that tackles the depth estimation problem using both image and audio. The motivation from the flash-to-bang phenomenon is quite inspiring. S2. Besides FBDepth, the method itself, the authors also collect a new dataset that could benefit the community working in this direction. S3. The comparison with monocular and stereo-based methods shows the effectiveness of the proposed method. Weaknesses W1. Although the flash-to-bang phenomenon is well motivated, the depth estimation in the end does not really make use of it directly: the network still regresses the depth directly from audio, video, and optical flow inputs without using Eq. (1). W2. The authors provide no qualitative results at all, which makes it hard to assess how well the method works in practice. Visual results for both the object segmentation mask prediction and the object depth prediction are quite important to show. Moreover, the authors build a very interesting new dataset (the AVD dataset), but no qualitative samples are provided with the submission. W3. The method assumes a static camera, if I understood correctly. It is unclear how well this method works when the camera is moving. W4. It is unclear how the authors make the train/val/test splits and on which split the numbers are reported. W5. Overall, the writing of the paper needs considerable improvement; it is generally quite difficult to follow the technical details. Clarity, Quality, Novelty And Reproducibility The originality of the paper deserves praise, as it approaches depth estimation with a novel multi-modality method (audio + video). However, the quality and clarity of the paper still require quite some improvement to meet the bar of a top conference like ICLR.
ICLR
Title Visual Timing For Sound Source Depth Estimation in the Wild Abstract Depth estimation enables a wide variety of 3D applications, such as robotics and autonomous driving. Despite significant work on various depth sensors, it is challenging to develop an all-in-one method that meets multiple basic criteria. In this paper, we propose a novel audio-visual learning scheme that integrates semantic features with physical spatial cues to boost monocular depth estimation with only one microphone. Inspired by the flash-to-bang theory, we develop FBDepth, the first passive audio-visual depth estimation framework. It is based on the difference between the time-of-flight (ToF) of light and that of sound. We formulate sound source depth estimation as an audio-visual event localization task for collision events. To approach decimeter-level depth accuracy, we design a coarse-to-fine pipeline that pushes the temporal localization accuracy from event level to millisecond level by aligning audio-visual correspondence and manipulating optical flow. FBDepth feeds the estimated visual timestamp together with the audio clip and object visual features to regress the source depth. We use a mobile phone to collect 3.6K+ video clips with 24 different objects at up to 60 m. FBDepth shows superior performance, especially at long range, compared to monocular and stereo methods. 1 INTRODUCTION Depth estimation is the fundamental functionality that enables 3D perception and manipulation. Although there have been significant efforts in developing depth estimation methods with various sensors, current depth estimation schemes fail to achieve a good balance among multiple basic metrics, including accuracy, range, angular resolution, cost, and power consumption. Active depth sensing methods actively emit signals, such as LiDAR (Caesar et al., 2020), structured light (Zhang, 2012), mmWave (Barnes et al., 2020), ultrasound (Mao et al., 2016), and WiFi (Vasisht et al., 2016). They compare the reflected signal with a reference signal to derive time-of-flight (ToF), phase change, or Doppler shift to estimate the depth. Active methods can achieve high accuracy because of their physical foundations and well-designed modulated sensing signals. Lidar is the most attractive active sensor due to its large sensing range and dense point cloud. However, the density is not sufficient to enable a small angular resolution, so the points are too sparse to recognize objects at a long distance. Besides, the prohibitive cost and power consumption limit the availability of Lidar on general sensing devices. Passive depth sensing directly takes signals from the environment for sensing. It commonly uses an RGB monocular camera (Bhoi, 2019; Laga et al., 2020), a stereo camera (Cheng et al., 2020), a thermal camera (Lu & Lu, 2021), or multi-view cameras (Long et al., 2021a). These sensors can achieve pixel-wise angular resolution and consume much less energy since they do not emit signals. Among them, stereo matching can effectively estimate the disparity and infer a dense depth map since it transforms spatial depth into visual disparity based on a solid physical law. The baseline of the stereo camera determines the effective range and accuracy; therefore, the dimension of the stereo camera is the critical trade-off against sensing metrics. Thanks to advances in deep learning, cheap monocular depth estimation keeps improving with new network structures and high-quality datasets.
However, the accuracy is still not satisfactory especially at a long range because it can only regress depth based on the implicit visual cues. It is ill-posed without any physical formulation. Besides, it heavily relies on the dataset. It requires domain adaption and camera calibration for various camera intrinsics (Li et al., 2022). In this paper, we propose to add only one microphone to enable explicit physical depth measurement and boost the performance of a single RGB camera. It does not rely on the intrinsic of cameras and implicit visual cues. We develop a novel passive depth estimation scheme with a solid physical formulation, called Flash-to-Bang Depth (FBDepth). Flash-to-Bang is used to estimate the distance to the lightning strike according to the difference between the arrival time of a lightning flash and a thunder crack. This works because light travels a million times faster than sound. When the sound source is several miles away, the delay is large enough to be perceptible. Applying it to our context, FBDepth can estimate the depth of a collision that triggers audio-visual events. The collision event has been explored for navigation and physical search in (Gan et al., 2022), but our work is the first that uses the collision for depth estimation. Collisions are common and can arise when a ball bounces on the ground, a person takes a step, or a musician hits a drum. We identify and exploit several unique properties related to various collisions in the wild. First, the duration of a collision is short and collision events are sparse. Thus, there are few overlapped collisions. Second, though the motion of objects changes dramatically after the collision, they are almost static at the collision moment. Third, the impact sound is loud enough to propagate to a long range. Flash-to-Bang is applied to the range of miles for human perception. Using it for general depth estimation poses several significant challenges: (i) It is inaccessible to ground truth collision time from video and audio. Video only offers up to 240 frames per second(fps), and may not capture the exact instance when the collision occurs. Audio has a high sampling rate but it is hard to detect the start of a collision solely based on the collision sound due to different sound patterns arising from collisions as well as ambient noise. (ii) We need highly accurate collision time. 1 ms error can result in a depth error of 34 cm. (iii) Noise present in both audio and video further exacerbate the problem. To realize our idea, we formulate the sound source depth estimation as the audio-visual localization task. Whereas existing work (Wu et al., 2019; Xia & Zhao, 2022) still focuses on 1-second-segment level localization. FBdepth performs event-level localization by aligning correspondence between the audio and the video. Apart from audio-visual semantic features as input in existing work (Tian et al., 2018; Chen et al., 2021a), we incorporate optical flow to exclude static objects with similar visual appearances. Furthermore, FBDepth applies the impulse change of optical flow to locate collision moments at the frame level. Finally, we formulate the ms-level estimation as an optimization problem of video interpolations. FBDepth succeeds to interpolate the best collision moment by maximizing the intersection between extrapolations of before-collision and after-collision flows. With the estimated timestamp of visual collision, we regress the sound source depth with the audio clip and visual features. 
FBdepth avoids the requirement to know the timestamp of audio collision. Besides, different objects have subtle differences in audio-visual temporal alignment. For example, a rigid body generates the sound peak once it touches another body. But an elastic body produces little sound during the initial collision and takes several ms to produce the peak with the maximum deformation. We feed semantic features to enable the network aware of the material, size, etc. Our main contributions are as follows: 1. To the best of our knowledge, FBDepth is the first passive audio-visual depth estimation. It brings the physical propagation property to audio-visual learning. 2. We introduce the ms-level audio-visual localization task. We propose a novel coarse-to-fine method to improve temporal resolution by leveraging the unique properties of collisions. 3. We collect 3.6K+ audio-visual samples across 24 different objects in the wild. Our extensive evaluation shows that FBDepth achieves 0.64m absolute error(AbsErr) and 2.98% AbsRel across a wide range from 2 m to 60 m. Especially, FBDepth shows more improvement in the longer range. 2 RELATED WORK Multi-modality Depth estimation. Recent work on depth estimation has shown the benefits of fusing cameras and other active sensors. (Qiu et al., 2019; Imran et al., 2021) recover dense depth maps from sparse Lidar point clouds and a single image. (Long et al., 2021b) associates pixels with pretty sparse radar points to achieve superior accuracy. The effective range can be increased as well by Lidar-camera (Zhang et al., 2020) or Radar-camera (Zhang et al., 2021). However, these methods are still expensive in cost and power consumption. (Gao et al., 2020; Parida et al., 2021) emit audio chirps and learn the depth map implicitly with audio reflections and a single image. However, these methods require many nearby acoustic reflectors to produce effective echos so the setup is limited in rooms. Besides, they are evaluated in an audiovisual simulator. FBDepth only uses one extra microphone to perceive natural sounds directly. It keeps the passive design of the audio but applies the physical measurement explicitly. The one-path sound propagation has a longer effective range than echoes. Sound source localization. Previous systems localize sound sources with microphone arrays (Valin et al., 2003; Rascon & Meza, 2017) or one microphone with a camera (Hershey & Movellan, 1999). They intend to estimate the direction of arrival(DOA) or the distance. The DOA is inferred by the subtle difference in arrival time from the sound source to each microphone(Mao et al., 2019; Sun et al., 2022) or by semantic matching with the visual appearance if given images(Tian et al., 2018; Arandjelovic & Zisserman, 2018). The distance can be estimated by triangulation methods with multiple DOAs and room structures(Wang et al., 2021; Shen et al., 2020). Many work study the room acoustic and the distance cues from the reverberation(Singh et al., 2021; Chen et al., 2021b) but (Zahorik, 2002) shows that the reverberation has a coarse coding with the distance. Compared to these methods, FBDepth directly estimates the distance by the ToF and achieves superior accuracy to indirect triangulation methods and implicitly depth learning networks on reverberation. Audio-visual event localization aims to detect and localize events in videos. (Tian et al., 2018) first propose the task and build up the audio-visual event(AVE) dataset. 
They apply an audio-guided visual attention mechanism to learn visual regions related to the sounding object or motion. Recent works develop a dual-modality sequence-to-sequence framework (Lin et al., 2019) and a dual attention matching mechanism (Wu et al., 2019) to leverage global features. However, the temporal event boundary in the AVE dataset is at the 1-second level, so it is split into 1-second-long segments. We study the instant collision event and solve the coarse boundary problem as well. (Gan et al., 2022) has a similar setup to ours: they use an embodied robot agent that integrates asynchronous vision and audition to navigate to a dropped object in 3D virtual rooms. The asynchrony comes from the invisibility of the object. Even though their simulator is vivid enough for semantic tasks, it has a gap with respect to real-world collisions for our ms-level formulation. The falling objects dataset (Kotera et al., 2020), the TbD dataset (Kotera et al., 2019), and the TbD-3D dataset (Rozumnyi et al., 2020) explore falling motions and fast movements, but they do not have audio or depth information. Video frame interpolation aims to synthesize intermediate frames between existing ones in a video. Most state-of-the-art approaches explicitly or implicitly assume simplistic linear motion. Warping-based methods (Baker et al., 2011; Park et al., 2020) apply optical flow and forward warping to shift pixels to intermediate frames linearly. Phase-based methods (Meyer et al., 2015; 2018) combine phase information across different scales, but the phase is modeled as a linear function of time. Recent methods approximate non-linear motion, such as kernel-based methods (Niklaus et al., 2017a;b), quadratic interpolation (Xu et al., 2019a), cubic motion modeling (Chi et al., 2020), etc. However, they still fail on complex non-linear motion because precise motion dynamics cannot be captured in the blind time between keyframes. Unfortunately, collisions are highly non-linear and instantaneous: given two keyframes before and after the collision, it is ambiguous whether there is a collision at all. Hence, these methods are not applicable. We instead analyze the motions before and after the collision and extrapolate optical flows to find the most likely collision position. 3 PROBLEM FORMULATION We formulate the depth estimation by the physical law of wave propagation. We have: d/v − d/c = T, (1) where d is the depth of the sound source and T is the difference between the ToF of sound and light. c and v denote the propagation speeds of light and sound, respectively. We can estimate d as d = cvT/(c − v) ≈ vT since c ≫ v. We observe T = Taudio − Tvideo + Thardware, where Taudio and Tvideo denote the event time in the audio and video recordings, respectively, and Thardware denotes the difference in start times between the audio and video recordings. Thardware can be small and have a small variance on a well-designed media system such as the Apple AVFoundation framework; we regard it as a constant unknown bias to learn. It is impossible to label the precise Tvideo and Taudio manually. Tvideo can be tagged at most at the frame level: even though many commercial cameras support up to 240 FPS, this still corresponds to a 4-ms segment and a 1.43 m depth variation. Moreover, it is hard for a human to determine the exact frame nearest to the collision in high-FPS mode due to the constrained view of the camera. Taudio is challenging to recognize in the wild as well.
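(As a quick aside before discussing the audio side, the following minimal sketch makes Eq. (1) concrete; the speed constants and example numbers are illustrative assumptions, not values fixed by the paper.)

V_SOUND = 343.0    # speed of sound in air, m/s (assumed)
C_LIGHT = 3.0e8    # speed of light, m/s (assumed)

def depth_from_delay(t_audio, t_video, t_hardware=0.0):
    """Depth from the measured arrival-time difference T (all times in seconds)."""
    T = t_audio - t_video + t_hardware
    d_exact = C_LIGHT * V_SOUND * T / (C_LIGHT - V_SOUND)   # d = cvT / (c - v)
    d_approx = V_SOUND * T                                    # d ~= vT, since c >> v
    return d_exact, d_approx

# A 1 ms error in T already shifts the estimate by roughly 0.34 m:
print(depth_from_delay(t_audio=0.105, t_video=0.017))  # T = 88 ms -> about 30.2 m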
Although the audio sampling rate is high enough, we can recognize the significant early peaks instead of the first sample triggered by the collision. The best effort of segmentation is 10-ms level based on real data. We cannot learn the timestamp with supervision. We propose a 2-stage estimation framework. The goal of the first stage is to estimate the numerical Tvideo. As figure 1 shows, we localize the audiovisual event in the stream and then take advantage of the unique optical flow of the collision to estimate Tvideo at ms-level. In the second stage, we place the Tvideo as an anchor into the audio clip and direct regress the depth with depth supervision. We make the network optimize Taudio automatically with knowledge of the Tvideo, the audio waveform and visual features. 4 APPROACH We demonstrate a novel coarse-to-fine pipeline to localize the collision with a super temporal resolution in the video. This method does not require annotations on ms-level, which is at least two orders of magnitude finer than previous approaches. They rely on the supervision of segment annotations, such as AVE dataset with 1-second segments (Tian et al., 2018), Lip Reading Sentences 2 dataset with word-level segments (Chung & Zisserman, 2016), BOBSL with sentence-level alignments (Bull et al., 2021). 4.1 EVENT-LEVEL LOCALIZATION Audio-visual modeling for collisions. In this step, our goal is to localize the audio-visual event for the region and the period of interest. It is similar to (Tian et al., 2018), but the unique properties of collisions bring new opportunities to learning strategy. Collisions have a significant motion than other sound sources. We can use the optical flow to inform the network of moving pixels. Besides, the impact sound is highly correlated to the rich information of objects (Gan et al., 2022), such as shape, materials, size, mass, etc. It makes audio-visual cross-matching easier than general audiovisual events so that we do not need to apply a complex scheme to learn. Another fact is that collisions are pretty sparse temporally in the wild because the duration of collisions is extremely short. It is rare to come across overlapped collisions based on our empirical study on the basketball court. Only two frames have double collisions among all 1208 frames and a total of 203 collisions when 7 basketballs are played during a 40-s duration. We propose a motion-guided audio-visual correspondence network (MAVNet). Similar to (Tian et al., 2018; Wu et al., 2019), MAVNet performs the cross-matching for the audio features and the RGB-F channels. Besides, it predicts audio-visual segmentation to capture whole pixels of the target object. It can achieve fine-grained audio-visual scene understanding (Zhou et al., 2022). We use the segmentation mask to filter flows of interest and perform high-resolution estimation in the next steps. MAVNet has two backbones to deal with RGB-F channels and audio clips respectively. A UNet (Ronneberger et al., 2015) style encoder is applied to extract the frame features conditioned by optical flows. It uses a series of convolution layers to extract visual features. Another branch is the audio encoder which takes in the time-domain signal. It has a 1D convolution layer to learn an STFT-like representation and a stack of 2D convolution layers with batch normalization to learn the semantic audio features. We replicate the audio feature, tile them to match the visual feature dimension, and concatenate the audio and visual feature maps. 
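(The two output heads are described next; first, here is a minimal PyTorch sketch of the two-branch encoder and fusion just described. It reflects our own reading of the text: the channel widths, kernel sizes, and the pooling of the audio map before tiling are assumptions for illustration, not the authors' implementation.)

import torch
import torch.nn as nn

class MAVNetEncoder(nn.Module):
    def __init__(self, audio_ch=64, vis_ch=64):
        super().__init__()
        # Visual branch: U-Net-style encoder over the RGB-F input (3 RGB + 2 flow channels).
        self.visual = nn.Sequential(
            nn.Conv2d(5, vis_ch, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(vis_ch, vis_ch, 3, stride=2, padding=1), nn.ReLU(inplace=True))
        # Audio branch: a 1D conv learns an STFT-like representation from the waveform,
        # then 2D convs with batch norm learn semantic audio features.
        self.audio_front = nn.Conv1d(1, audio_ch, kernel_size=512, stride=256)
        self.audio_body = nn.Sequential(
            nn.Conv2d(1, audio_ch, 3, padding=1), nn.BatchNorm2d(audio_ch), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1))                      # -> (B, audio_ch, 1, 1)

    def forward(self, rgbf, waveform):
        v = self.visual(rgbf)                             # (B, vis_ch, H/4, W/4)
        a = self.audio_body(self.audio_front(waveform).unsqueeze(1))
        a = a.expand(-1, -1, v.shape[2], v.shape[3])      # replicate / tile over space
        return torch.cat([v, a], dim=1)                   # fused feature map

fused = MAVNetEncoder()(torch.randn(2, 5, 128, 128), torch.randn(2, 1, 48000))
print(fused.shape)   # torch.Size([2, 128, 32, 32])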
MAVNet has two output heads as well. The U-Net decoder applies a series of up-convolutions, with skip connections from the RGB-F encoder to the fused feature maps, to learn the binary segmentation mask M. Meanwhile, the fused feature map is fed into a binary classification head consisting of convolution layers and linear layers to predict the audio-visual event relevance y ∈ {0, 1}. Training We use a weighted sum of binary cross-entropy (BCE) losses as the training objective for both the segmentation and the cross-matching; we train all components to jointly optimize the location predictions and the energy reconstruction. We minimize the total loss Ltotal = BCE(M, M̂) + λ · BCE(y, ŷ), where λ is a hyperparameter. Inference We run MAVNet only at low FPS to avoid dense inference at this stage. Moreover, we do not need to activate the segmentation head until the audio clip and the frame are highly matched. Finally, MAVNet uses this audio clip to retrieve a sequence of frames containing the full collision procedure. 4.2 FRAME-LEVEL LOCALIZATION Given a sequence of video frames, our goal is to split them into two sets: the frames before the collision V0 and the frames after the collision V1. This essentially requires us to determine the last frame Ie in V0 before the collision and the first frame Is in V1 after the collision. Thus, we locate the collision between the frames Ie and Is. Based on an analysis of the physical motion, we make an important observation that helps determine Ie and Is. The collision results in a significant acceleration change due to the strong impulse force. Let a_t = v_t − v_{t−1} and δa_t = a_t − a_{t−1} denote the acceleration and the acceleration change at frame I_t, respectively. δa between Ie and Is is large, while δa between adjacent frames before or after the collision is small. If the object stops moving immediately after the collision, we take the static frame I_{e+1} as Is. Finally, we select the frames before Ie to generate V0 and the frames after Is to generate V1. We use the mask retrieved in the last stage to determine the object positions in the frames and calculate the velocity, acceleration, and acceleration change. We first find Ie and Is at low FPS and then repeat the procedure for the frames between them at high FPS, locating Ie and Is in the high-FPS mode efficiently. 4.3 MS-LEVEL LOCALIZATION To further locate the exact moment of the collision, we try to interpolate frames between Ie and Is to recover the skipped moment. Unfortunately, the common assumption of frame-based interpolation breaks down completely. Motion consistency is fundamental for spatio-temporal video processing: if the motion of the object is temporally stable across several frames (e.g., due to a constant force), its position and pose can be predicted in future frames as well as interpolated between two frames. We call this motion first consistency. However, the impact sound is caused by an impulse force, which results in a rapid change of the motion status and breaks this continuity and consistency. Observing only Ie and Is, we cannot determine whether a collision happens or the object simply flies through the air. Luckily, the collision moment retains a new form of motion consistency, which we call motion second consistency: the motions before and after the collision share the same intersection position, and each separately keeps motion first consistency.
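(Stepping back to Section 4.2 for a moment before the extrapolation step: a minimal sketch, under our own assumptions, of the frame-level split. It takes object centroids computed from the predicted masks; the toy trajectory below is purely illustrative.)

import numpy as np

def locate_collision_frames(centroids):
    """centroids: (n, 2) object positions, one per frame; returns indices of I_e and I_s."""
    pos = np.asarray(centroids, dtype=float)
    vel = np.diff(pos, axis=0)                              # v_t = p_t - p_{t-1}
    acc = np.diff(vel, axis=0)                              # a_t = v_t - v_{t-1}
    d_acc = np.linalg.norm(np.diff(acc, axis=0), axis=1)    # |delta a_t| = |a_t - a_{t-1}|
    i_s = int(np.argmax(d_acc)) + 3                         # frame index with the largest change
    return i_s - 1, i_s                                     # (I_e, I_s)

# Toy trajectory: the object moves at constant speed, then bounces back
# between frames 4 and 5.
traj = [(0, 0), (0, 4), (0, 8), (0, 12), (0, 16), (0, 12), (0, 8), (0, 4)]
print(locate_collision_frames(traj))   # -> (4, 5)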
Therefore, we can extrapolate the motions based on motion first consistency and search for the most similar motion extrapolations by leveraging motion second consistency. Note that our final goal is to find the timestamp of the collision rather than the motion status at the shared position. (Kotera et al., 2019; Rozumnyi et al., 2020) also try to recover sub-frame motions and trajectories, but they require high-FPS ground truth to guide the training. In our context, we care more about when the collision happens than what it looks like. Optical flow extrapolation Optical flow is widely used for frame prediction and interpolation (Baker et al., 2011) by warping the frame with the estimated optical flow, because it captures the motion of all pixels and gives a finer understanding of the object dynamics. The optical flow sequence is usually generated from adjacent video frames. However, this is not efficient for extrapolation: the drift of pixels in the flow requires extra iterative warpings to align the corresponding pixels, which accumulates errors. Therefore, we compute the optical flows from an anchor frame I_a to the frame sequence V = {I_0, I_1, ..., I_n}, which gives the flow sequence F_{a→V} = {f_{a→0}, f_{a→1}, ..., f_{a→n}}. As f_{a→n}(x, y) represents the movement of the pixel I_a(x, y) to I_n, F_{a→V}(x, y) describes how the pixel I_a(x, y) moves across the frame sequence V. Hence, F_{a→V} tracks the global motion of each pixel without iterative warpings. With the historical positions of I_a(x, y) from frame I_0 to I_n, we can regress the motion of this pixel and extrapolate the flow to f_{a→n+δt}, the relative pixel position at I_{n+δt} for an arbitrary δt. In our context, we pick k consecutive frames before the collision, V_pre = {I_{e−k+1}, I_{e−k+2}, ..., I_e}, and after the collision, V_post = {I_{s+k−1}, I_{s+k−2}, ..., I_s}. We select the frame I_e as the anchor frame: it is near the collision moment, so its motion relative to the other frames is not dramatic and is easy to estimate. Hence, we can estimate the optical flow sequences F_{e→V_pre} and F_{e→V_post}. Meanwhile, we apply the predicted segmentation mask of I_e to filter the pixels of the target object. In the last step, we build regressors R for each pixel's motion individually and predict its location at any sub-frame time. Optical flow interpolation We have constructed pixel-level regressors for F_{e→V_pre} and the corresponding F_{e→V_post}. They can extrapolate the flows f_{e→e+δt0} and f_{e→s+δt1}, respectively, where δt0 and δt1 are the extrapolation steps. The optimization goal is min_{e−s ≤ δt1 ≤ 0 ≤ δt0 ≤ s−e} ||f_{e→e+δt0} − f_{e→s+δt1}||_2, s.t. e + δt0 < s + δt1. The collision duration is s + δt1 − (e + δt0), which is always greater than 0, and e + δt0 is the target ms-level localization T̂video. We could instead apply this interpolation methodology to search for the intersection of the object's center trajectory or to maximize the Intersection over Union (IoU) of the object's bounding box. However, both use only a few key points, so they cannot achieve as fine a granularity as the optical flow, which takes advantage of thousands of pixels. 4.4 DEPTH REGRESSION Based on the estimate T̂video, we directly regress the depth, implicitly fitting Taudio and the bias Thardware, with the supervision of the ground-truth depth. We observe that the sound generation procedure varies greatly across different objects, materials, shapes, and motions. On one hand, the diverse waveforms make it impractical to measure the exact Taudio manually.
On the other hand, each specific waveform carries significant cues about which Taudio best corresponds to T̂video. To combat background noise from other sources, we also feed the RGB-F crop of the target object from frame Ie to the depth predictor. It includes the semantic features of the object as well as the motion status just before the collision. These cues can guide the predictor to find the waveform pattern more easily. We select a sequence of audio samples starting from Ie and label anchor samples with 1 at T̂video, which directly informs the audio sequence of the timestamp of the visual collision. We feed the enriched sequence into a 1D convolution layer to extract a 2D representation, followed by two residual blocks to learn high-dimensional features. Meanwhile, we use ResNet-18 (He et al., 2015) to extract the RGB-F features of the target object. We tile and concatenate the RGB-F features with the audio features along the channel dimension and append another two residual blocks to fuse the features. Finally, a pooling layer and a fully connected layer predict the depth; the output is mapped to the depth by a 2D projection. We use the mean squared error (MSE) Ldepth = ||d − d̂||² as the learning objective, where d and d̂ are the target depth and the predicted depth. 5 EXPERIMENTS 5.1 SETUP Dataset platform and collection We use an iPhone XR with a 240-fps slow-motion mode to collect video with audio. The audio sampling rate is 48 kHz. We set up a stereo camera and a Lidar together to collect the ground truth. We include details of the data collection in Appendix B. AVD Dataset We collect 3.6K+ raw audio-visual sequences, each with a single collision event, as the audio-visual depth (AVD) dataset. We randomly sample the raw sequences to generate train/val/eval splits with 2600/500/522 sequences, respectively. We augment the raw sequences by cropping one moving object from a raw video sequence and inserting it into another raw sequence at a random temporal location. Besides, we augment the raw depth with a maximum 3% random change to diversify the depths and shift the audio samples relative to the video timestamps accordingly. More details are described in Appendix B. Baselines We include three types of baselines for comparison. We compare to a monocular depth estimation method, NeWCRFs (Yuan et al., 2022), which is state-of-the-art (SOTA) on multiple benchmarks. We also compare to stereo matching methods, including the ZED built-in ultra depth estimation SDK and a SOTA method, LEAStereo (Cheng et al., 2020). We use dense depth maps collected by the Lidar to finetune NeWCRFs and LEAStereo on images collected by the stereo camera. Besides optical-flow-based interpolation, we compare to interpolation using key points, such as the trajectories of object centers or bounding boxes. Metrics We use the mean absolute depth error AbsErr = (1/n) Σ_{i=1..n} |d_i − d̂_i|, the root mean square error RMSE = sqrt((1/n) Σ_{i=1..n} (d_i − d̂_i)²), and the mean absolute relative error AbsRel = (1/n) Σ_{i=1..n} |d_i − d̂_i| / d_i as the end-to-end performance metrics. FBDepth is a sparse depth estimation method: we evaluate the depth of each target object. The monocular and stereo baselines, however, produce dense depth estimates for all pixels of the object, so we compare their median estimated depth against the median depth of the ground-truth dense map. We report results over different distance ranges: close (≤ 10 m), mid (10 m-30 m), and far (≥ 30 m). Intuitively, there is an upper bound on the temporal resolution, so AbsRel at close depths is worse than at farther distances.
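Before turning to the results, here is a minimal PyTorch sketch of the depth regressor described in Section 4.4 above. It reflects our own reading of the text: the exact channel widths, kernel sizes, and the form of the "anchor" channel are assumptions for illustration, not the authors' code.

import torch
import torch.nn as nn
import torchvision

class ResBlock(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.BatchNorm2d(ch), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1), nn.BatchNorm2d(ch))

    def forward(self, x):
        return torch.relu(x + self.body(x))

class DepthRegressor(nn.Module):
    def __init__(self):
        super().__init__()
        # Audio branch: waveform plus a binary "anchor" channel marking \hat{T}_video.
        self.audio_frontend = nn.Conv1d(2, 64, kernel_size=512, stride=128)
        self.audio_head = nn.Sequential(nn.Conv2d(1, 32, 3, padding=1), ResBlock(32), ResBlock(32))
        # Visual branch: ResNet-18 over the RGB-F crop (3 RGB + 2 flow channels).
        rgbf = torchvision.models.resnet18(weights=None)
        rgbf.conv1 = nn.Conv2d(5, 64, 7, stride=2, padding=3, bias=False)
        rgbf.fc = nn.Identity()                       # -> 512-d object feature
        self.rgbf_encoder = rgbf
        # Fusion: tile the object feature over the audio map, fuse, pool, and regress depth.
        self.fuse = nn.Sequential(
            nn.Conv2d(32 + 512, 64, 1), ResBlock(64), ResBlock(64),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 1))

    def forward(self, audio, rgbf_crop):
        a = self.audio_head(self.audio_frontend(audio).unsqueeze(1))   # (B, 32, 64, T')
        v = self.rgbf_encoder(rgbf_crop)[:, :, None, None]             # (B, 512, 1, 1)
        v = v.expand(-1, -1, a.shape[2], a.shape[3])                   # tile over the audio map
        return self.fuse(torch.cat([a, v], dim=1)).squeeze(1)          # predicted depth (m)

# Training objective, as in the text:
# loss = torch.nn.functional.mse_loss(model(audio, crop), depth_gt)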
5.2 RESULTS Table 1 shows the results on depth estimation. Overall, FBDepth achieves better performance than the baselines on all metrics across different FPS settings. Several important trends can be observed. Stereo matching methods perform extraordinarily well on close objects, where a clearer view difference can be captured. Their AbsErr and RMSE increase dramatically as the targets become farther away because the limited baseline cannot easily resolve the view difference. On the other hand, the AbsErr and RMSE of FBDepth grow slowly with increasing distance while its AbsRel decreases gradually. Intuitively, there is an upper bound on the temporal resolution due to the limited FPS, the lack of an accurate timestamp, and small disturbances in the audio-video software, so FBDepth may not easily achieve centimeter-level accuracy. A farther depth can break the assumptions of stereo matching methods as well as monocular methods, which have a fixed depth range in their training data, but FBDepth still obeys the physical propagation law in this condition. FBDepth also shows advantages over NeWCRFs. Monocular methods rely on the training set, which includes various scenarios and depths. Although we apply camera remapping with the intrinsic matrix and finetuning, NeWCRFs still cannot match the performance it achieves on its pretraining dataset; the implicit depth regression has difficulty with domain adaptation. In contrast, stereo methods can be directly applied to the new scenario and achieve strong estimation because their foundation is the explicit spatial view difference between stereo images. FBDepth applies an explicit spatial measurement and does not rely heavily on the camera or the scenario. It requires several learned models, but these models can be applied to common cameras and microphones, and FBDepth can become more general with a more diverse dataset. We show some visual qualitative results in Appendix B.3. Compared to other methods, the delay between audio and video can be visually recognized, similar to object detection. In other words, FBDepth transforms the tough depth estimation problem into a simple, interpretable one. 5.3 ABLATION In the ablation study, we show how each stage contributes to the final results. Event-level localization We investigate how the optical flow can help detect the collision event as well as contour the object mask. We define recall and precision as the percentage of correctly recognized audio-visual events (with an IoU of more than 0.5) among all audio-visual events and among all recognized events, respectively. With the flow, both recall and precision improve, as the flow can work as a pre-mask to guide the network. The main recall failures come from weak collision sounds or simultaneous collisions; incorrect recognition is mainly due to similar objects in the frame. Frame-level localization The frame rate most affects the frame-level stage. We observe in Table 1 that increasing the frame rate reduces the numerical error of FBDepth. In particular, increasing from 30 FPS to 60 FPS yields the largest improvement, and the benefit gradually tapers off with further increases in the frame rate. We observe that 30 FPS is too slow to capture sudden movements and fast dynamics while 60 FPS is around the borderline; this is consistent with the trend of setting 60 FPS as the default for video recording and playback. The inter-frame motion at 120 FPS and 240 FPS is even smaller, so it is more difficult to distinguish the frame Ie, and the frame-selection error does not decrease compared with the lower FPS modes.
Thus, 120 FPS and 240 FPS bring less improvement. Ms-level localization We investigate our interpolation from two perspectives. First, we need to verify whether the method works. However, there is no ground-truth timestamp, so we cannot directly quantify the accuracy. We instead take the estimate at 240 FPS as a reference and compare it with the estimates at lower FPS; if the algorithm produces similar numerical results from independent inputs, it is reliable. As shown in Figure 2, the median temporal error for 30, 60, and 120 FPS is 2.3 ms, 0.65 ms, and 0.5 ms, respectively. Considering the frame resolution, we can compute the improvement ratio as frame duration / temporal error. The 60 FPS setting has the largest improvement, 25x over the frame duration. This is strong evidence that our ms-level localization is reasonable and robust. Second, we compare the depth estimation performance with different interpolation strategies in Table 2. When there is no interpolation, we use the result from frame-level localization to predict the depth. The error is large since this timestamp is ambiguous for the depth prediction. Interpolation with the traces of centers or bounding boxes does not work well: a few key points cannot capture the dynamics at a fine granularity. Depth regression Without the RGB-F channels of the target object in the depth regression, the estimation is less robust to ambient sound and background noise, as shown in Table 2. 6 LIMITATIONS AND FUTURE WORK We classify audio-visual events into three categories by the quality and quantity of visual cues during sound production. Obvious visual cues during sound production (e.g., collision) This is the main scenario we address in this paper. It requires both a visible procedure and an audible sound to estimate the depth. We can apply it to sports analytics, human stepping, etc. Moreover, it can collect sparse depth points and accumulate them over time. According to existing work on depth completion (Long et al., 2021b; Xu et al., 2019b), adding a few accurate depth points can boost the performance of monocular depth estimation. Indirect visual cues during sound production (e.g., speech, playing the piano) This scenario is challenging but common in everyday life. These sources do not show the vibration visually. Fortunately, there are still many visual cues. Existing work on speech synthesis from lip motion (Ephrat & Peleg, 2017) and music generation from pose (Gan et al., 2020) indicates a strong semantic relationship between video and audio, and the spatial correlation still holds here. We propose to apply a high-resolution multi-frame alignment between the video and audio to find the accurate propagation delay. No visual cues during sound production (e.g., car engines, mobile phone speakers) We admit that we cannot estimate the depth when these sound sources are static because we cannot see them at all. Luckily, we still have a chance when these sound sources move, and we propose a Doppler-like formulation to associate visual cues with audio cues. Another pressing problem is that the microphone is quite challenging to synchronize with other sensors; pushing this latency to the sub-ms level would benefit many applications, including FBDepth. 7 CONCLUSION In this paper, we develop a novel depth estimation method based on the "Flash-to-Bang" phenomenon. By aligning the video with the audio and detecting the events in both, we can estimate the depth in the wild without calibration or prior knowledge about the environment or target.
Our extensive evaluation shows that our approach yields similar errors across varying distances. In comparison, the errors of several existing methods increase rapidly with distance. Therefore, our method is particularly attractive for large distances. As part of our future work, we are interested in further enhancing the accuracy of our method, generalizing to more contexts, and using the estimated depth to the collision to estimate the depth to other objects in the scene. A BACKGROUND OF DEPTH SENSORS We add more details on the performance of various depth sensors on multiple criteria in Table 3 and Table 4. We especially demo the available depth sensors and corresponding APIs on iPhone Pro 13 in Table 5 as a typical example that depth estimation is well studied at the short range. B DATASET DETAILS We describe the details to build up the data collection pipeline for this novel task and discuss the trade-off during the data collection. B.1 PLATFORM AND COLLISION OBJECTS Figure 3 shows the data collection platform. It includes three devices. Lidar: We use a Livox Mid-70 Lidar(LIVOX, 2021) to collect the ground truth depth. The detection range is 90 m @ 10% reflectivity. The range precision is 2cm. Although the point rate of Mid-70 is low, it has a special non-repetitive scan pattern so that the point cloud can be very dense by accumulation. Thus, it is best to be used to collect the depth in the static scene. Stereo Camera: We use a ZED 2i stereo camera(StereoLab, 2021) with a 12 cm baseline and a focal length of 4mm. The large focal length is designed to increase the maximum effective range. The image resolution is 1242 by 2208 pixels. Table 3 shows detailed performance. We use the ZED 2i camera as an important depth estimation baseline. Video Recorder: A pair of a camera and a microphone can play the basic functionalities of the video recorder. However, it is very challenging to satisfy all the criteria for the audio-visual depth estimation. In this experiment, we use an iPhone XR and record the video by the default Camera app. It has several promising advantages. First, we can record slow-motion 1080P videos with 240 fps. The frame duration is constant so that we can transform the frame number to the timestamp accurately and align it with the audio track which has a 48kHz sampling rate. Second, the audiovisual recording delay Thardware is small as 1 ms and has a small variance within 1 ms on the iPhone. Both specifications above are critical to the audio-visual depth but cannot be satisfied on other platforms such as Android phones. The calibration of the audio-visual recording framework is out of the scope of this work. It is unexpected that the calibration is pretty difficult based on our experience. To capture the remote scene clearly, the telephoto lens has become indispensable in recent smartphones. Samsung Ultra 22 can support 10x optical zoom and 100x hybrid zoom, and Pixel 6 pro has 20x zoom in all. Their zoom performance is much superior to iPhone. The iPhone XR is not equipped with a telephoto lens, so we mount an ARPBEST monocular telescope to enlarge the scene at a large distance. As shown in Figure 4, the image quality of our setup is a bit worse than the one captured by Pixel 6 Pro’s telephoto lens. Thus, our setup does not provide superior image quality compared to existing commercial camera modules on smartphones. The image taken by Pixel Pro 6 is sharp but noisy while the one taken by iPhone XR with the telescope is a bit blurred. 
Our setup does not take advantage of the external telescope from this perspective. Overall, our setup resembles the hardware available on commercial mobile phones. Collision Objects: In Figure 8, We use 24 objects including various masses, sizes, shapes, and six common materials: wood, metal, foam, rubber, plastic, and paper. These objects are ubiquitous every day. Besides, they do not break down during the collision. B.2 COLLECTION METHODOLOGY Sensor setup: We mount the Lidar, the stereo camera, and the iPhone on one slide. We perform camera Lidar calibration between the left camera of the stereo camera and the Lidar according to (Yuan et al., 2021). We use the left camera to evaluate the monocular depth estimation and use the stereo camera to evaluate the stereo depth estimation. The mobile phone changes the field of view to fit the object at different distances. Hence, its intrinsic is not constant. We use the frames recorded by iPhone only for FBDepth. Collision setup: Since the point cloud is too sparse to measure the instant collision, we control the collision position to get the ground truth depth. First, we select an anchor position and measure the depth from the slide to the anchor by the Lidar. Second, we perform the collision at the anchor. For example, we throw an object to collide with the anchor or strike a hammer into the anchor or step the shoes on the anchor. Finally, the iPhone records the collision procedure. Besides, the Lidar and the stereo camera record the object placed at the anchor. They record the static object corresponding to the moving object in the video frames. We set up various anchors from 2 meters to 60 meters in different environments. Data Augmentation: After data cleaning and annotation, we get 3.6K+ raw audio-visual sequences, including 280K+ frames as te AVD dataset. Each sequence has about 40 to 120 frames and a corresponding audio clip corresponding. We use the stereo camera to capture static images and use the lidar to capture static depth maps. We augment the raw audio-visual sequences to have more than a single collision by cropping one moving object from a raw video sequence and augmenting it to another raw sequence with a random temporal location. Meanwhile, we add up the audio sequence with the same time shift as the video. We have 10K audio-visual sequences. For the event-level localization stage, we segment an audio clip of 66.7ms including the impact sound and sample 20 frames including visible objects from each sequence and pair them as positive pairs. Negative samples pair the frame with the audio clip without impact sounds or with irrelevant impact sounds. Finally, we generate around 400K audiovisual pairs. Besides, we augment the raw depth with a maximum 3% random change to diversify the depth and shift audio samples accordingly to the video timestamp. It can solve the problem of discrete anchor depths. The change cannot be significant because the impulse response of sound is also related to depth. It requires more transformation than just shifting audio samples. We also augment images with low light, flip and rotation, and audio with diverse background noise from WHAM!Wichern et al. (2019). B.3 SAMPLES AND VISUAL QUALITATIVE RESULTS We provide some samples and visual qualitative results. Considering the objects are small in the normal camera, we only show the region of interest in the RGB image and depth map. 
The most intuitive observation is that our approach turns the difficult depth estimation problem into one that can be estimated easily from the visualized samples: humans can give a coarse estimate from the given timestamps, frames, and waveforms. In contrast, it is nearly impossible to judge the depth visually from the RGB image or the stereo image alone.
1. What is the focus and contribution of the paper on object depth estimation? 2. What are the strengths and weaknesses of the proposed approach, particularly in its limitation in detecting the depth of a single object with a clear collision/impact? 3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? 4. Are there any suggestions or requests made by the reviewer regarding additional information or materials that could enhance the paper's impact?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper This paper presents a new approach for object depth estimation using the difference between audio and video propagation times. Strengths And Weaknesses Strengths I really like this paper. Although the idea of using time-of-arrival differences is simple, it hasn't been explored this thoroughly before. The execution is nice. One main advantage is that this method works better at larger distances, while other depth estimation methods typically work worse at larger distances. The authors collect a clearly defined dataset of in-the-wild sounds and run experiments with it, showing promising results. Weaknesses The clear weakness is that it is not nearly as generalizable as existing methods. It almost isn't fair to compare this method to lidar, stereo, etc. because of how limited it is: it only detects the depth of a single object with a clear collision/impact, and the collision must be both audible and visible. This means that many auditory events would not be applicable because they don't have a visual onset time (e.g., car engines, speech). This makes the method inapplicable for many applications of RGB->Depth. I would like to see a clearer discussion of the limitations here as well as the target application scenarios the authors have in mind. Clarity, Quality, Novelty And Reproducibility Clarity: Generally pretty easy to follow. Some of the details of Section 4 could probably be moved to the supplementary materials; I think the high-level idea needs to come through a bit more. Novelty: Gan et al. have used a similar idea, but that paper was more focused on navigation. Overall this work is quite novel. Quality/Reproducibility: Both are good. I would like to see some example videos in the supplementary materials: what does the video at 240 fps look like, what does the audio sound like from 60 m away, etc.
ICLR
Title Exploring the Hidden Dimension in Accelerating Convolutional Neural Networks Abstract DeePa is a deep learning framework that explores parallelism in all parallelizable dimensions to accelerate the training process of convolutional neural networks. DeePa optimizes parallelism at the granularity of each individual layer in the network. We present an elimination-based algorithm that finds an optimal parallelism configuration for every layer. Our evaluation shows that DeePa achieves up to 6.5× speedup compared to state-of-the-art deep learning frameworks and reduces data transfers by up to 23×. 1 INTRODUCTION Training convolutional neural networks (CNNs) is increasingly compute-intensive and time-consuming. It takes days or even weeks to train deep CNNs from scratch (Szegedy et al., 2014; Zeiler & Fergus, 2014; Simonyan & Zisserman, 2014; Szegedy et al., 2016). Existing deep learning frameworks such as TensorFlow, PyTorch, and Caffe2 parallelize the training process onto multiple processors (usually GPUs) using image parallelism (some papers use the term data parallelism to refer to parallelism across images; since this paper involves parallelizing the training dataset in other data dimensions, we use image parallelism to distinguish this from other parallelization strategies), dividing the entire image dataset into batches with the same number of images and assigning each batch to a dedicated processor. The standard parallelization of CNN training only exploits image parallelism. However, other dimensions can also parallelize the training process. For example, in CNNs for 2D images, data is commonly organized as 4-dimensional tensors (i.e., image, height, width, channel). The image dimension includes an index for each image in the input dataset. The height and width dimensions specify a position in an image. For a particular position, the channel dimension (some papers use the term depth to refer to different neurons for a position; in this paper, depth refers to the number of layers for an entire neural network and we use channel for the neurons for a position) indexes different neurons for that position. Exploring these other parallelizable dimensions can potentially reduce the compute time and data transfer cost when training CNNs (see Section 2). Moreover, different layers in a CNN may prefer different parallelism configurations for achieving optimal performance. We propose DeePa, a deep learning framework that explores parallelism in all parallelizable dimensions to accelerate the training of CNNs. To the best of our knowledge, DeePa is the first system that models and exploits the parallelism of neural networks at the granularity of each individual layer. To generate a parallelism configuration for each layer, DeePa uses an elimination-based algorithm that automatically finds the configuration with the best estimated performance. The main contributions of this paper are: • We present DeePa, a deep learning framework that explores parallelism in all parallelizable dimensions to accelerate the training of CNNs. • The parallelization strategy is selected at the granularity of each individual layer. • We present an elimination-based algorithm for finding the parallelism configuration with optimal estimated performance for each layer. • Our evaluation shows that, compared to state-of-the-art deep learning frameworks (e.g., TensorFlow and PyTorch), DeePa achieves 6.5×, 1.9×, and 1.5× speedup for AlexNet, VGG-16, and Inception-v3, respectively. 2 MOTIVATION This work is motivated by the following observations.
2.1 ACCELERATING COMPUTATION THROUGHPUT Convolutional layers generally consume the bulk of the training time in CNNs, and parallelizing training in different data dimensions results in significantly different performance. Figure 1 shows the relative speed of training six different convolutional layers from AlexNet, VGG-16, and Inception-v3. The properties of the convolutional layers are shown in Table 1. For each convolutional layer, we tried parallelizing the computation in each individual parallelizable dimension as well as combinations of different parallelizable dimensions, and we report the performance of the standard parallelization over images along with the worst and best parallelization strategies we discovered. Figure 1 shows that different parallelism configurations result in very different performance, and image parallelism generally achieves suboptimal performance. Therefore, exploring parallelism in other dimensions can potentially accelerate the training of convolutional layers. 2.2 REDUCING DATA TRANSFER COST Different parallelization strategies can also result in significantly different amounts of data movement. Figure 3 shows an example of parallelizing the first fully-connected layer of VGG-16 on two GPUs in different dimensions. In image parallelism (Figure 3a), each GPU processes a batch of images and computes the gradient for the entire fully-connected layer. This requires each GPU to synchronize the gradients for the entire fully-connected layer (shown as the shadow rectangles) after each step. An alternative approach (Figure 3b) parallelizes in the channel dimension by assigning a subset of the output channels to each GPU. As a result, different GPUs compute the gradients for disjoint subsets of the fully-connected layer, which eliminates transferring the fully-connected layer but introduces additional data transfers for input tensors (shown as the shadow rectangles). For this particular case, using parallelism in the channel dimension reduces data transfer costs by 12×. 2.3 OPTIMIZING PER-LAYER PERFORMANCE When processing a batch of images, increasing the number of workers does not always improve overall execution time, due to the data transfer overhead to synchronize gradients across different workers. Figure 2 shows the per-step training time for three different layers in Inception-v3 for a batch size of 512 images on up to 16 GPUs. The training time includes forward processing, backward propagation, and gradient aggregation. The figure shows that different layers in a neural network may prefer different hardware configurations, and there is no single configuration that is optimal for all layers. For example, the third layer performs best on 16 GPUs while the last layer performs best on 4 GPUs. Thus, a parallelism configuration includes both selecting the data dimensions to be parallelized and the number of parallel workers (or, equivalently, the number of subsets into which the data is partitioned). 3 DEEPA Similar to TensorFlow and PyTorch, DeePa uses computation graphs to describe dependencies between operations. In a computation graph G = (V,E), each node n ∈ V is an operation (e.g., a convolution or matrix-multiply), and each directed edge (u, v) ∈ E is a tensor that is an output of u and an input of v. One key difference between DeePa and TensorFlow or PyTorch is that each node in the DeePa computation graph also includes a configuration that describes how the corresponding operation is parallelized across different workers. 
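Returning to the fully-connected example of Section 2.2, a back-of-the-envelope sketch of the two data-movement patterns in Figure 3 is given below. The byte counts are illustrative only; the exact ratio depends on the batch size and on which transfers are counted, so this does not attempt to reproduce the 12× figure exactly.

```python
BYTES = 4  # fp32

def fc_transfer_costs(in_features, out_features, batch_size, num_gpus):
    """Per-step data movement (bytes) for two ways to split one FC layer."""
    params = in_features * out_features

    # Image parallelism: every GPU holds the whole layer and must
    # synchronize the full gradient after each step.
    image_parallel = params * BYTES

    # Channel parallelism: each GPU owns a disjoint slice of the output
    # channels, so no gradient sync, but it must receive the input
    # activations of the images processed on the other GPUs.
    remote_images = batch_size - batch_size // num_gpus
    channel_parallel = remote_images * in_features * BYTES

    return image_parallel, channel_parallel

img, chan = fc_transfer_costs(in_features=7 * 7 * 512, out_features=4096,
                              batch_size=512, num_gpus=2)
print(f"image parallelism : {img / 2**20:.1f} MiB of gradient sync per GPU")
print(f"channel parallelism: {chan / 2**20:.1f} MiB of activation transfer per GPU")
```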
For each parallelizable dimension (i.e., image, height, width, and channel), the configuration includes an integer that describes the degree of parallelism in that dimension. For a configuration, the product of the integers over all dimensions is the number of workers needed to process the operation in that configuration. Figure 4 demonstrates some example configurations that explore parallelism in a single dimension as well as combinations of different dimensions. DeePa assumes equal partitioning in each dimension. As a result, each worker receives the same size input, which provides well-balanced workload distribution in our experiments. For each node in the computation graph, its configuration describes how the output tensor is divided onto multiple workers. Each worker computes a disjoint subset of the output tensor, and thus each worker can process the operation in parallel without data dependencies. Given a node’s configuration, DeePa calculates the input sets for each worker and automatically schedules proper data transfers between operations. DeePa also provides three additional functions: • For each node v and configuration c, v.compute(c) estimates the time to process the corresponding operation under the parallelism configuration c. This includes both the forward processing and back propagation time and is estimated by running the operation in that configuration multiple times on the device and measuring the average execution time. • For each edge e = (u, v), e.xfer(cu, cv) estimates the time to transfer the input tensor e to each worker, using the size of the data to be moved and the known communication bandwidth. Note that e.xfer(cu, cv) is zero if u and v have the same configuration (i.e., cu = cv), in which case no data is transferred. As with compute(), we precompute the xfer() function for each edge in the graph by calculating the overall data transfer size for all possible source and destination configurations. • For each node v and configuration c, v.update(c) estimates the time to update parameters for the corresponding operation. We use the data transfer time to approximate the update time, since the data transfer time is much longer than the compute time for updating parameters. Note that different configurations can have significantly different update time, as described in Section 2.2. A global configuration g includes a parallelism configuration for each node in a computation graph: g(v) describes the parallelism configuration for node v. Using the functions defined above, we can model the per-step execution time for a computation graph: Cost(g, (V,E)) = Σ_{v∈V} {v.compute(g(v)) + v.update(g(v))} + Σ_{e=(u,v)∈E} e.xfer(g(u), g(v)) (1) Cost(g, (V,E)) estimates the per-step execution time if the computation graph (V,E) is parallelized using global configuration g. This execution time includes forward processing, backward propagation, and gradient aggregation. Equation 1 expresses the problem of finding the configuration for each individual node as a global optimization problem. 4 FINDING OPTIMAL GLOBAL CONFIGURATIONS We now describe our algorithm for finding a global configuration that minimizes Equation 1. In DeePa, each node can select any of a fixed (but large) set of parallelism configurations. Therefore the number of potential global configurations is exponential in the number of nodes in a computation graph, which makes it impractical to enumerate all global configurations for deep CNNs such as VGG-16 and Inception-v3.
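Before turning to how DeePa avoids exhaustive enumeration, here is a minimal sketch of the per-node configuration and of the cost model in Equation 1. The cost tables stand in for the profiled compute(), update(), and xfer() estimates described above, and configurations are represented as hashable tuples for simplicity.

```python
def num_workers(config):
    """A configuration is the degree of parallelism in each dimension, here a
    tuple (image, height, width, channel); the worker count is their product."""
    out = 1
    for degree in config:
        out *= degree
    return out

def cost(global_config, nodes, edges, compute, update, xfer):
    """Equation 1: estimated per-step time of a computation graph under a
    global configuration (a mapping from each node to its configuration)."""
    g = global_config
    node_time = sum(compute[v][g[v]] + update[v][g[v]] for v in nodes)
    edge_time = sum(xfer[(u, v)][(g[u], g[v])] for (u, v) in edges)
    return node_time + edge_time

# num_workers((4, 1, 1, 2)) == 8: parallelism over 4 image slices x 2 channel slices
```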
However, the CNNs we have seen in practice exhibit strong locality: each node is only connected to a few nodes with similar depth in a computation graph. Based on this observation, we use the following two elimination strategies to iteratively simplify the computation graph while preserving the globally optimal configuration. Node elimination. For each node w with a single in-edge e1 = (u,w) and a single out-edge e2 = (w, v), we remove node w and the two edges e1 and e2 from the graph and insert a new edge e′ = (u, v) (shown in Figure 5a). The xfer() function for node e′ is e′.xfer(cu, cv) = min cw {e1.xfer(cu, cw) + w.compute(cw) + w.update(cw) + e2.xfer(cw, cv)} (2) Note that because we have precomputed the xfer() function for edges in the original graph, we can similarly compute the xfer() function for the transitive edge added by a node elimination; i.e., we use dynamic programming to compute the optimal configuration for node w for every possible choice of configurations for nodes u and v. For CNNs with a linear computation graph (e.g., AlexNet and VGG-16), node elimination is sufficient to reduce the original graph to a graph with only 2 nodes. Edge elimination. For two edges with the same source and destination node (i.e., e1 = (u, v) and e2 = (u, v)), we can remove e1 and e2 from the graph and insert a new edge e′ = (u, v) (shown in Figure 5b). The xfer() function for node e′ is e′.xfer(cu, cv) = e1.xfer(cu, cv) + e2.xfer(cu, cv) (3) As with node elimination, we compute the xfer() function for e′ using the already computed xfer() functions for e1 and e2. Figure 6 shows how DeePa iteratively eliminates nodes and edges for an Inception-v3 module. The full Inception-v3 computation graph has 120 nodes, which DeePa reduces to a 2-node graph. DeePa iteratively uses node and edge eliminations to simplify a computation graph until neither elimination can be applied. DeePa then enumerates all global configurations for the final graph and chooses the one that minimizes the Cost function in Equation 1. After deciding the configuration for each node in the final graph, DeePa then decides the configuration for the eliminated nodes by undoing the node and edge eliminations in reverse order. When undoing a node elimination for node w, DeePa selects the configuration that minimizes Equation 2 for node w. After undoing all eliminations, DeePa has a configuration for every node in the original graph. In Appendix A.1, we prove that our algorithm finds an optimal global configuration. In our experiments, DeePa finds an optimal configuration for parallelizing the largest CNN we have worked with, Inception-v3, on 16 GPUs in about 100ms. 5 IMPLEMENTATION We found that it is non-trivial to parallelize the training of CNNs in the height, width, and channel dimensions in existing frameworks (e.g., TensorFlow, PyTorch, and Caffe2), and none provides an interface for controlling per-operation parallelism. We implemented DeePa in Legion (Bauer et al., 2012), a high-performance parallel runtime for distributed heterogeneous architectures, and use cuDNN (Chetlur et al., 2014) and cuBLAS (cub, 2016) as the underlying libraries for processing neural network operations. The following Legion features significantly simplify our implementation for DeePa. First, Legion supports high-dimensional partitioning that allows us to parallelize any operation in any combination of the dimensions. Second, Legion allows DeePa to control parallelism at the granularity of each operation. 
Third, Legion allows fine-grain control over the placement of data in memory. Fourth, Legion’s asynchronous tasking model makes it easy to exploit task as well as image parallelism. We also include two critical optimizations that help achieve good performance. Overlapping computation with data transfers. DeePa manages the gradients of each operation separately and transfers an operation’s gradients as soon as its back propagation is completed. We have found that this can effectively hide the data transfer overhead for gradient synchronization. As a result, the synchronous training performance matches asynchronous training in DeePa, which allows users to use synchronous training with its better algorithmic efficiency. Distributing parameter servers. Existing frameworks use parameter servers to store and update variables for a CNN model. Parameter servers are located in CPU memory in TensorFlow and PyTorch. Because DeePa manages the parameters for each operation separately, DeePa can opportunistically distribute the parameter server onto the GPU memories whenever possible. This eliminates data transfers for operations whose gradients and parameter server are located on the same GPU and transforms all GPU-to-CPU copies into faster GPU-to-GPU copies. 6 RELATED WORK To the best of our knowledge, DeePa is the first deep learning framework that controls and optimizes the parallelism of neural networks in all dimensions at the granularity of each operation. Existing frameworks such as TensorFlow (Abadi et al., 2016), Caffe2 (Caf, 2016), and PyTorch (Pyt, 2017) use image parallelism to distribute the training of CNNs and only explore parallelism in the image dimension. The standard image parallelism configuration keeps a replica of the entire network on each worker, which results in large data transfers for synchronizing the gradients in each step. Mirhoseini et al. (2017) uses model parallelism that assigns each operation to a dedicated processor for training Inception-v3. It uses a reinforcement learning algorithm to optimize the placement of each operation on a GPU device. The learned device placement on 4 GPUs achieves 19% speedup compared to single GPU performance. However, parallelism in each operation is not explored. Krizhevsky (2014) introduces “one weird trick” (OWT) that combines image parallelism with model parallelism to accelerate the distributed training of AlexNet, which efficiently reduces the data transfer cost compared to the standard image parallelism configuration. In Section 7.1.2, we show that DeePa further reduces the overall data transfers for AlexNet by 3× and the per-step training time by 2.3× compared to OWT. Goyal et al. (2017) empirically shows no loss of accuracy for training ResNet-50 on the ImageNet dataset with a large minibatch size of 8192 images (in SGD, the parameters are updated after processing a minibatch of training examples). It uses the standard image parallelism configuration to distribute the training onto 256 GPUs and includes a number of optimizations for reducing communication overhead. As communication is a bottleneck in distributed deep learning, we believe our techniques for reducing data transfers can substantially benefit training on large numbers of GPUs. 7 EVALUATION We use AlexNet (Krizhevsky, 2014), VGG-16 (Simonyan & Zisserman, 2014), and Inception-v3 (Szegedy et al., 2016) as benchmark CNNs and use the ImageNet dataset (Russakovsky et al., 2015) as the input. For each CNN, we compare the performance of DeePa against TensorFlow, PyTorch, and OWT.
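Returning to the overlap optimization from Section 5: it is implemented on top of Legion's tasking model, but purely as a framework-agnostic illustration, the sketch below ships each operation's gradients on a background thread as soon as that operation's backward pass finishes, rather than waiting for the whole backward pass. The layer interface and the transfer callable are hypothetical.

```python
from concurrent.futures import ThreadPoolExecutor

def backward_with_overlap(layers, transfer_gradients):
    """Start each layer's gradient transfer as soon as its backprop is done.

    layers:             layer objects in the order their backward passes run
    transfer_gradients: callable that ships one layer's gradients (blocking)
    """
    pending = []
    with ThreadPoolExecutor(max_workers=4) as pool:
        for layer in layers:            # backward pass, last layer first
            layer.backward()
            # Hand the finished gradients to a background worker so the
            # transfer overlaps with the remaining backward computation.
            pending.append(pool.submit(transfer_gradients, layer))
        for fut in pending:             # wait for all transfers before the update
            fut.result()
```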
We implement OWT in DeePa by restricting all convolutional and pooling layers to use image parallelism and all fully-connected layers to use model parallelism. 7.1 CASE STUDY ON A 16-GPU MACHINE We conduct a detailed case study for training the three CNNs on a 16-GPU machine, with two Intel 10-core E5-2680 Xeon processors, 256 GB main memory, and 16 NVIDIA Tesla K80 GPUs (the machine is equipped with 8 GPU cards, each of which has 2 Tesla K80 GPUs). We use all 16 GPUs for training each CNN model with a minibatch size of 512 images. As a result, each GPU processes a batch of 32 images in the image parallelism configuration. DeePa uses the search algorithm in Section 4 to find the optimal parallelism configurations, which requires 0.7, 1.1, and 4.8 seconds for AlexNet, VGG-16, and Inception-v3, respectively. Figure 7 shows the synchronous training throughput for a minibatch size of 512 images on 16 GPUs. When DeePa uses image parallelism for all operations, DeePa achieves competitive performance compared to the best of TensorFlow and PyTorch. The OWT approach that uses model parallelism for fully-connected layers speeds up the training throughput by 1.4×, 1.2×, and 1.07× compared to image parallelism using DeePa. The best configurations found by DeePa achieve 6.5×, 1.9×, and 1.5× speedup compared to TensorFlow and PyTorch. Three main optimizations in DeePa achieve most of the performance benefit over the other frameworks. First, DeePa significantly reduces data transfers in each step, as shown in Figure 8. Compared to image parallelism, the OWT approach reduces data transfers by 1.05-8.4×. However, the best configuration used by DeePa further reduces data transfers by 1.2-2.7× compared to OWT. Second, the optimization for overlapping computation with data transfers (described in Section 5) effectively hides data transfer latency and achieves better GPU utilization. The grey bars in Figure 7 illustrate DeePa’s performance when the overlap optimization is disabled, which shows that overlapping computation with data transfers can improve the training throughput by 10%-30%. Third, DeePa also improves performance by exploring parallelism in the height and width dimensions (see Section 7.1.3). 7.1.1 THE BEST CONFIGURATIONS We describe the best configurations discovered for AlexNet, VGG-16, and Inception-v3 in Sections 7.1.2 to 7.1.4. The best configurations have several similarities. First, for the beginning layers with large height/width dimensions and small channel dimensions, DeePa uses image parallelism on all available GPUs, since the data transfers for synchronizing gradients are much smaller than the data transfers for moving tensors between operations. Second, deeper layers in CNNs tend to have smaller height/width dimensions and larger channel dimensions. As a result, the cost for moving tensors between different operations decreases, while the cost for synchronizing gradients increases. DeePa adaptively reduces the number of GPU workers for these layers to reduce the expensive data transfers for synchronizing gradients at the cost of introducing cheaper data transfers for moving tensors. Third, DeePa uses model parallelism on a small number of GPU workers for fully-connected layers, because synchronizing gradients and moving tensors are both much more expensive than the compute time for fully-connected layers. DeePa reduces the data transfers for synchronizing gradients and moving tensors at the cost of using fewer GPUs.
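The three similarities above can be read as a rule of thumb; a toy sketch of such a rule follows. This only illustrates the pattern (the thresholds and worker counts are assumptions), not DeePa's actual search, which selects configurations from profiled costs.

```python
def deepa_like_configuration(layers, num_gpus=16):
    """Heuristic mirroring the similarities described above: early conv/pooling
    layers use image parallelism on all GPUs, deeper layers use fewer workers,
    and fully-connected layers use model parallelism on a few workers."""
    config = {}
    for layer in layers:  # layer: dict with "name", "kind", "spatial" (height * width)
        if layer["kind"] == "fc":
            # model parallelism on 2 workers (assumed, as in the first FC layer example)
            config[layer["name"]] = {"image": 1, "height": 1, "width": 1, "channel": 2}
        elif layer["spatial"] <= 8 * 8:
            # deep layer with a small feature map: fewer replicas to cut gradient sync
            config[layer["name"]] = {"image": num_gpus // 4, "height": 1, "width": 1, "channel": 1}
        else:
            # early layer with a large feature map: image parallelism on all GPUs
            config[layer["name"]] = {"image": num_gpus, "height": 1, "width": 1, "channel": 1}
    return config

example = [{"name": "conv1", "kind": "conv", "spatial": 224 * 224},
           {"name": "conv5", "kind": "conv", "spatial": 7 * 7},
           {"name": "fc6", "kind": "fc", "spatial": 1}]
print(deepa_like_configuration(example))
```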
7.1.2 ALEXNET (Figure 9 layer sequence: Conv11x11, Pooling, Conv5x5, Pooling, Conv3x3 ×3, Pooling, Linear ×3, Softmax.) Figure 9 shows the global configuration for AlexNet on 16 GPU workers. Note that DeePa selects the parallelism configuration that optimizes the performance for each layer. Table 2 lists the cost for different configurations of the first fully-connected layer. The standard image parallelism configuration eliminates the cost for transferring the input tensors but introduces additional data transfers for synchronizing gradients. The OWT approach completely eliminates gradient synchronization at the cost of replicating the input tensors on every GPU worker. The configuration chosen by DeePa only uses 2 GPU workers for training the first fully-connected layer, which prolongs the compute time but significantly reduces the cost for both transferring input tensors and synchronizing gradients. As a result, DeePa reduces the total cost by 5× compared to other approaches. DeePa uses image parallelism for all convolutional and pooling layers, because the additional data transfer cost introduced by transforming configurations outweighs any performance benefits. 7.1.3 VGG-16 (Figure 10 layer sequence: 2×Conv3x3+Pooling, 2×Conv3x3+Pooling, three blocks of 3×Conv3x3+Pooling, Linear ×3, Softmax.) DeePa uses similar configurations for parallelizing the fully-connected layers in VGG-16 (Figure 10). In addition, DeePa also uses a different configuration to cooperatively accelerate the last three convolutional layers (the yellow node in Figure 10). Table 3 lists the cost for different parallelism configurations for the last three convolutional layers. The configuration with optimal total cost uses only four GPU workers for the last three convolutional layers to reduce data transfers for synchronizing gradients. DeePa also exploits parallelism in the height and width dimensions to further reduce the compute time. 7.1.4 INCEPTION-V3 The Inception-v3 model has multiple Inception modules (Szegedy et al., 2016). Each module has several branches of convolutional and pooling layers, which are then concatenated as the output tensor of the module. Figure 11 shows the global configuration for Inception-v3. DeePa uses different configurations to parallelize different branches for the InceptionE1 module, as shown in Figure 12. We found that this configuration reduces data transfers by 30% in InceptionE1 and InceptionE2 and reduces overall data transfers by 20%. 7.2 MINIBATCH SIZE The minibatch size plays an important role in the performance of CNNs. Figure 13 compares DeePa, PyTorch, and TensorFlow with different minibatch sizes. All three networks were trained on 16 Tesla K80 GPUs on a single node, as described in Section 7.1. We were not able to train VGG-16 and Inception-v3 with a minibatch size of 2048 images, because the required metadata size exceeds the aggregate memory capacity of the 16 GPUs. Figure 13 shows that DeePa achieves constant speedups compared to PyTorch and TensorFlow for various minibatch sizes. In particular, DeePa achieves 4.6-6.5×, 1.6-1.9×, and 1.2-1.5× speedup for AlexNet, VGG-16, and Inception-v3, respectively. 7.3 MULTI-NODE RESULTS We evaluate the scalability of different frameworks by comparing their training throughput with different numbers of GPUs and compute nodes.
The experiments were performed on a GPU cluster with 4 nodes, each of which is equipped with two Intel 10-core E5-2600 Xeon processors, 256 GB main memory, and four NVIDIA Tesla P100 GPUs. GPUs on the same node are connected by NVLink, and nodes are connected over 100Gb/s EDR Infiniband. Figure 14 shows the performance comparison among DeePa, PyTorch, and TensorFlow for weak scaling. DeePa achieves competitive performance compared to PyTorch and TensorFlow for training on a single GPU, in which all three frameworks place all operations on a single GPU. For training on 4 GPUs on a single node, DeePa achieves 3.1×, 1.6×, and 1.3× speedup for AlexNet, VGG-16, and Inception-v3, respectively. DeePa achieves even better speedups for training on multiple nodes, where the data transfer time becomes a larger component of the per-iteration training time. For training on 4 nodes, DeePa achieves 8.1×, 3.2×, and 1.8× speedup for AlexNet, VGG-16, and Inception-v3, respectively. 8 CONCLUSION We have presented DeePa, a deep learning framework that explores parallelism in all parallelizable dimensions to accelerate the training of CNNs. DeePa optimizes the parallelism configuration chosen at the granularity of individual layers. DeePa achieves up to 6.5× speedup for training CNNs and reduces overall data transfers by up to 23× compared to state-of-the-art deep learning frameworks. A APPENDIX A.1 NODE AND EDGE ELIMINATION We prove the correctness of the node and edge eliminations in Section 4. In particular, we prove that after applying node and edge eliminations, the modified graph has the same optimal configuration as the original graph. A.1.1 NODE ELIMINATION For a given computation graph G = (V,E), applying a node elimination on w requires w having a single in-edge e1 = (u,w) and a single out-edge e2 = (w, v). The node elimination results in a modified graph G′ = (V′, E′), where V′ = V − {w}, E′ = E − e1 − e2 + e′, and e′ = (u, v). Theorem 1. Consider graphs (V,E) and the result of a single node elimination (V′, E′). Then an optimal configuration of (V,E) is also an optimal configuration of (V′, E′), and an optimal configuration of (V′, E′) is extensible to an optimal configuration of (V,E). Proof. The Cost function is defined in Equation 1. Let g be any configuration. We first compute the difference between Cost(g, (V,E)) and Cost(g, (V′, E′)): Cost(g, (V,E)) − Cost(g, (V′, E′)) = [Σ_{v∈V} {v.compute(g(v)) + v.update(g(v))} + Σ_{e=(u,v)∈E} e.xfer(g(u), g(v))] − [Σ_{v∈V′} {v.compute(g(v)) + v.update(g(v))} + Σ_{e=(u,v)∈E′} e.xfer(g(u), g(v))] = w.compute(g(w)) + w.update(g(w)) + e1.xfer(g(u), g(w)) + e2.xfer(g(w), g(v)) − e′.xfer(g(u), g(v)) (4) Now assume g is an optimal configuration for (V,E). Then we have w.compute(g(w)) + w.update(g(w)) + e1.xfer(g(u), g(w)) + e2.xfer(g(w), g(v)) = min_{cw} {w.compute(cw) + w.update(cw) + e1.xfer(g(u), cw) + e2.xfer(cw, g(v))} (5) Therefore, g is an optimal configuration of (V′, E′). For the other direction, note that if g is an optimal configuration of (V′, E′), then it can be extended to an optimal configuration of (V,E) by adding the node w with the same minimal assignment. A.1.2 EDGE ELIMINATION For a computation graph G = (V,E), applying an edge elimination on e1 = (u, v) and e2 = (u, v) results in a modified graph G′ = (V,E′), where E′ = E − e1 − e2 + e′ and e′ = (u, v). We prove that Cost(g, (V,E)) = Cost(g, (V,E′)) for any global configuration g of (V,E). Theorem 2.
For any global configuration g of graph G = (V,E), Cost(g, (V,E)) = Cost(g, (V,E′)), where (V,E′) is the modified graph of (V,E) after an edge elimination. Proof. We compute the difference between Cost(g, (V,E)) and Cost(g, (V,E′)): Cost(g, (V,E)) − Cost(g, (V,E′)) = e1.xfer(g(u), g(v)) + e2.xfer(g(u), g(v)) − e′.xfer(g(u), g(v)) = 0 (6) The last equation uses Equation 3. A.2 RELATED WORK ON OVERLAPPING COMMUNICATION WITH DATA TRANSFER The overlap optimization in Section 5 is motivated by Goyal et al. (2017), which performs gradient aggregation in parallel with back propagation to scale synchronous training to large numbers of GPUs. We extend their design and implementation by also enabling the optimization for asynchronous training in DeePa. A.3 PROFILING RESULTS We show profiling results for visualizing the performance bottlenecks in different parallelism approaches. The experiment was performed on a single node with four Tesla P100 GPUs (as described in Section 7.3). We enable overlapping computation with data transfers (described in Section 5) in this experiment. Figure 15 shows the profiling results for training VGG-16 on 4 GPUs with different parallelism configurations. Note that DeePa with image parallelism achieves 10% higher training throughput compared to PyTorch and TensorFlow, as shown in Figure 14. Figure 15a shows that all GPUs are highly utilized during forward and backward passes, as indicated by the tight packing of tasks in the timeline. However, the image parallelism approach requires moving 4GB of metadata in every iteration, which cannot be fully overlapped with back propagation; therefore the image parallelism approach has a performance gap between iterations (shown as the white space on the GPU timelines). Figure 15b shows the profiling of the optimal parallelism configuration chosen by DeePa, which uses image parallelism on 4 GPUs for all convolutional layers and pooling layers and uses model parallelism on 2 GPUs for the fully-connected layers. Therefore, the training with the optimal configuration includes data transfers for each fully-connected layer, which adds small performance gaps at the end of the forward pass and the beginning of the backward pass (shown as the small white space on the GPU timelines). However, the optimal configuration reduces the per-iteration data transfers from 4GB to 490MB, which effectively hides data transfer overhead and achieves better GPU utilization. As a result, the optimal configuration reduces the per-iteration training time from 0.34 seconds to 0.24 seconds. A.4 IMAGENET-22K We compare the performance of DeePa, PyTorch, and TensorFlow on the ImageNet-22K dataset (Russakovsky et al., 2015) that contains 21,841 different categories (the ImageNet dataset used in Section 7 contains 1,000 categories). The last fully-connected layer in AlexNet, VGG-16, and Inception-v3 originally has 1,000 neurons followed by a 1,000-way softmax layer. To train the three networks on the ImageNet-22K dataset, we change the last fully-connected layer to have 21,841 neurons and use a 21,841-way softmax layer at the end. The modified networks were trained on 16 Tesla K80 GPUs on a single node with a minibatch size of 512 images. Figure 16 compares the training throughput and per-iteration data transfers among DeePa, PyTorch, and TensorFlow on the ImageNet and ImageNet-22K datasets.
Figure 16a shows that, on the ImageNet-22K dataset, the training throughput of PyTorch and TensorFlow is reduced by 20%-45%, while DeePa’s throughput falls off by 3%, compared to training on the original ImageNet dataset. Figure 16b compares the per-iteration data transfers between image parallelism and the global configurations used by DeePa. Using image parallelism increases the data transfers in each iteration by 5-10GB, while DeePa only increases the per-iteration data transfers by 40MB. As a result, for training on the ImageNet-22K dataset, DeePa reduces the per-iteration data transfers by 3.7-44.5× compared to image parallelism.
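As a small self-contained sanity check of the node-elimination argument in Appendix A.1, the snippet below builds a random three-node chain, folds the middle node into a transitive edge as in Equation 2, and confirms that the optimum of the reduced graph matches a brute-force search over the original graph. The toy cost tables are random stand-ins for profiled estimates.

```python
import itertools, random

random.seed(0)
CONFIGS = ["A", "B", "C"]                      # toy per-node configuration set
nodes, edges = ["u", "w", "v"], [("u", "w"), ("w", "v")]
node_cost = {n: {c: random.random() for c in CONFIGS} for n in nodes}   # compute + update
edge_cost = {e: {(a, b): random.random() for a in CONFIGS for b in CONFIGS} for e in edges}

def total(g):
    return (sum(node_cost[n][g[n]] for n in nodes)
            + sum(edge_cost[e][(g[e[0]], g[e[1]])] for e in edges))

# Brute force over the original graph (Equation 1).
best_full = min(total(dict(zip(nodes, cs))) for cs in itertools.product(CONFIGS, repeat=3))

# Node elimination (Equation 2): fold w into a transitive (u, v) edge.
folded = {(cu, cv): min(edge_cost[("u", "w")][(cu, cw)] + node_cost["w"][cw]
                        + edge_cost[("w", "v")][(cw, cv)] for cw in CONFIGS)
          for cu in CONFIGS for cv in CONFIGS}
best_reduced = min(node_cost["u"][cu] + node_cost["v"][cv] + folded[(cu, cv)]
                   for cu in CONFIGS for cv in CONFIGS)

assert abs(best_full - best_reduced) < 1e-12   # Theorem 1 on a toy instance
print(best_full, best_reduced)
```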
1. What is the focus of the paper regarding convolutional neural networks? 2. What are the strengths and weaknesses of the proposed framework for parallelization? 3. How does the reviewer assess the presentation and comparisons made in the paper? 4. What are the limitations of the proposed approach, particularly in practical scenarios? 5. How does the reviewer suggest improving the paper's content and experiments?
Review
Review This paper develops a framework for the parallelization of convolutional neural nets. In the framework, parallelism along different dimensions is explored for convolutional layers to accelerate the computation. An algorithm is developed to find the best global configuration. The presentation needs to be more organized; it is not very easy to follow. 1. Computation throughput is not defined. 2. Although the author mentions DeePa with Tensorflow or Pytorch several times, I think it is not proper to make this comparison. The main idea of this paper is to optimize the parallelization scheme of CNNs, which is independent of the framework used. It would be more useful if the configuration search could be developed on top of tensorflow / pytorch. 3. The per-layer comparison is not very informative for practice because the data transfer costs of convolution layers could be completely hidden in data parallelization. In data parallelism, the GPU devices are often fully occupied during the forward pass and backward pass. Gaps appear only between forward and backward, and between iterations. Model parallelism would add gaps everywhere in each layer. This could be more detrimental when the communication is over ethernet. To be more convincing, it would be better to show the profile graph of each run to show which gaps are eliminated, rather than just numbers. 4. The batch size is also a crucial factor; different batch sizes would favor different methods. More comparisons are necessary.
ICLR
Title Exploring the Hidden Dimension in Accelerating Convolutional Neural Networks Abstract DeePa is a deep learning framework that explores parallelism in all parallelizable dimensions to accelerate the training process of convolutional neural networks. DeePa optimizes parallelism at the granularity of each individual layer in the network. We present an elimination-based algorithm that finds an optimal parallelism configuration for every layer. Our evaluation shows that DeePa achieves up to 6.5× speedup compared to state-of-the-art deep learning frameworks and reduces data transfers by up to 23×. 1 INTRODUCTION Training convolutional neural networks (CNNs) is increasingly compute-intensive and timeconsuming. It takes days or even weeks to train deep CNNs from scratch (Szegedy et al., 2014; Zeiler & Fergus, 2014; Simonyan & Zisserman, 2014; Szegedy et al., 2016). Existing deep learning frameworks such as TensorFlow, PyTorch, and Caffe2 parallelize the training process onto multiple processors (usually GPUs) using image parallelism1 dividing the entire image dataset into batches with the same number of images and assigning each batch to a dedicated processor. The standard parallelization of CNN training only exploits image parallelism. However, other dimensions can also parallelize the training process. For example, in CNNs for 2D images, data is commonly organized as 4-dimensional tensors (i.e., image, height, width, channel). The image dimension includes an index for each image in the input dataset. The height and width dimensions specify a position in an image. For a particular position, the channel dimension2 indexes different neurons for that position. Exploring these other parallelizable dimensions can potentially reduce the compute time and data transfer cost when training CNNs (see Section 2). Moreover, different layers in a CNN may prefer different parallelism configurations for achieving optimal performance. We propose DeePa, a deep learning framework that explores parallelism in all parallelizable dimensions to accelerate the training of CNNs. To the best of our knowledge, DeePa is the first system that models and exploits the parallelism of neural networks at the granularity of each individual layer. To generate a parallelism configuration for each layer, DeePa uses an elimination-based algorithm that automatically finds the configuration with the best estimated performance. The main contributions of this paper are: • We present DeePa, a deep learning framework that explores parallelism in all parallelizable dimensions to accelerate the training of CNNs. • The parallelization strategy is selected at the granularity of each individual layer. • We present an elimination-based algorithm for finding the parallelism configuration with optimal estimated performance for each layer. • Our evaluation shows that, compared to state-of-the-art deep learning frameworks (e.g., TensorFlow and PyTorch), DeePa achieves 6.5×, 1.9×, and 1.5× speedup for AlexNet, 1Some papers use the term data parallelism to refer to parallelism across images. Since this paper involves parallelizing the training dataset in other data dimensions, we use image parallelism to distinguish this from other parallelization strategies. 2Some papers use the term depth to refer to different neurons for a position. In this paper, depth refers to the number of layers for an entire neural network and we use channel for the neurons for a position. 2 MOTIVATION This work is motivated by the following observations. 
2.1 ACCELERATING COMPUTATION THROUGHPUT Convolutional layers generally consume the bulk of the training time in CNNs, and parallelizing training in different data dimensions results in significantly different performance. Figure 1 shows the relative speed of training six different convolutional layers from AlexNet, VGG-16, and Inception-v3. The properties of the convolutional layers are shown in Table 1. For each convolutional layer, we tried parallelizing the computation in each individual parallelizable dimension as well as combinations of different parallelizable dimensions, and we report the performance of the standard parallelization over images along with the worst and best parallelization strategies we discovered. Figure 1 shows that different parallelism configurations result in very different performance, and image parallelism generally achieves suboptimal performance. Therefore, exploring parallelism in other dimensions can potentially accelerate the training of convolutional layers. 2.2 REDUCING DATA TRANSFER COST Different parallelization strategies can also result in significantly different amounts of data movement. Figure 3 shows an example of parallelizing the first fully-connected layer of VGG-16 on two GPUs in different dimensions. In image parallelism (Figure 3a), each GPU processes a batch of images and computes the gradient for the entire fully-connected layer. This requires each GPU to synchronize the gradients for the entire fully-connected layer (shown as the shadow rectangles) after each step. An alternative approach (Figure 3b) parallelizes in the channel dimension by assigning a subset of the output channels to each GPU. As a result, different GPUs compute the gradients for disjoint subsets of the fully-connected layer, which eliminates transferring the fully-connected layer but introduces additional data transfers for input tensors (shown as the shadow rectangles). For this particular case, using parallelism in the channel dimension reduces data transfer costs by 12×. 2.3 OPTIMIZING PER-LAYER PERFORMANCE When processing a batch of images, increasing the number of workers does not always improve overall execution time, due to the data transfer overhead to synchronize gradients across different workers. Figure 2 shows the per-step training time for three different layers in Inception-v3 for a batch size of 512 images on up to 16 GPUs. The training time includes forward processing, backward propagation, and gradient aggregation. The figure shows that different layers in a neural network may prefer different hardware configurations, and there is no single configuration that is optimal for all layers. For example, the third layer performs best on 16 GPUs while the last layer performs best on 4 GPUs. Thus, a parallelism configuration includes both selecting the data dimensions to be parallelized and the number of parallel workers (or, equivalently, the number of subsets into which the data is partitioned). 3 DEEPA Similar to TensorFlow and PyTorch, DeePa uses computation graphs to describe dependencies between operations. In a computation graph G = (V,E), each node n ∈ V is an operation (e.g., a convolution or matrix-multiply), and each directed edge (u, v) ∈ E is a tensor that is an output of u and an input of v. One key difference between DeePa and TensorFlow or PyTorch is that each node in the DeePa computation graph also includes a configuration that describes how the corresponding operation is parallelized across different workers. 
For each parallelizable dimension (i.e., image, height, width, and channel), the configuration includes an integer that describes the degree of parallelism in that dimension. For a configuration, the product of the integers over all dimensions is the number of workers needed to process the operation in that configuration. Figure 4 demonstrates some example configurations that explore parallelism in a single dimension as well as combinations of different dimensions. DeePa assumes equal partitioning in each dimension. As a result, each worker receives the same size input, which provides well-balanced workload distribution in our experiments. For each node in the computation graph, its configuration describes how the output tensor is divided onto multiple workers. Each worker computes a disjoint subset of the output tensor, and thus each worker can process the operation in parallel without data dependencies. Given a node’s configuration, DeePa calculates the input sets for each worker and automatically schedules proper data transfers between operations. DeePa also provides three additional functions: • For each node v and configuration c, v.compute(c) estimates the time to process the corresponding operation under the parallelism configuration c. This includes both the forward processing and back propagation time and is estimated by running the operation in that configuration multiple times on the device and measuring the average execution time. • For each edge e = (u, v), e.xfer(cu, cv) estimates the time to transfer the input tensor e to each worker, using the size of the data to be moved and the known communication bandwidth. Note that e.xfer(cu, cv) is zero if u and v have the same configuration (i.e., cu = cv), in which case no data is transferred. As with compute(), we precompute the xfer() function for each edge in the graph by calculating the overall data transfer size for all possible source and destination configurations. • For each node v and configuration c, v.update(c) estimates the time to update parameters for the corresponding operation. We use the data transfer time to approximate the update time, since the data transfer time is much longer than the compute time for updating parameters. Note that different configurations can have significantly different update time, as described in Section 2.2. A global configuration g includes a parallelism configuration for each node in a computation graph: g(v) describes the parallelism configuration for node v. Using the functions defined above, we can model the per-step execution time for a computation graph: Cost(g, (V,E)) = ∑ v∈V {v.compute(g(v)) + v.update(g(v))}+ ∑ e=(u,v)∈E e.xfer(g(u), g(v)) (1) Cost(g, (V,E)) estimates the per-step execution time if the computation graph (V,E) is parallelized using global configuration g. This execution time includes forwarding processing, backward propagation, and gradient aggregation. Equation 1 expresses the problem of finding the configuration for each individual node as a global optimization problem. 4 FINDING OPTIMAL GLOBAL CONFIGURATIONS We now describe our algorithm for finding a global configuration that minimizes Equation 1. In DeePa, each node can select any of a fixed (but large) set of parallelism configurations. Therefore the number of potential global configurations is exponential in the number of nodes in a computation graph, which makes it impractical to enumerate all global configurations for deep CNNs such as VGG-16 and Inception-v3. 
However, the CNNs we have seen in practice exhibit strong locality: each node is only connected to a few nodes with similar depth in a computation graph. Based on this observation, we use the following two elimination strategies to iteratively simplify the computation graph while preserving the globally optimal configuration. Node elimination. For each node w with a single in-edge e1 = (u,w) and a single out-edge e2 = (w, v), we remove node w and the two edges e1 and e2 from the graph and insert a new edge e′ = (u, v) (shown in Figure 5a). The xfer() function for node e′ is e′.xfer(cu, cv) = min cw {e1.xfer(cu, cw) + w.compute(cw) + w.update(cw) + e2.xfer(cw, cv)} (2) Note that because we have precomputed the xfer() function for edges in the original graph, we can similarly compute the xfer() function for the transitive edge added by a node elimination; i.e., we use dynamic programming to compute the optimal configuration for node w for every possible choice of configurations for nodes u and v. For CNNs with a linear computation graph (e.g., AlexNet and VGG-16), node elimination is sufficient to reduce the original graph to a graph with only 2 nodes. Edge elimination. For two edges with the same source and destination node (i.e., e1 = (u, v) and e2 = (u, v)), we can remove e1 and e2 from the graph and insert a new edge e′ = (u, v) (shown in Figure 5b). The xfer() function for node e′ is e′.xfer(cu, cv) = e1.xfer(cu, cv) + e2.xfer(cu, cv) (3) As with node elimination, we compute the xfer() function for e′ using the already computed xfer() functions for e1 and e2. Figure 6 shows how DeePa iteratively eliminates nodes and edges for an Inception-v3 module. The full Inception-v3 computation graph has 120 nodes, which DeePa reduces to a 2-node graph. DeePa iteratively uses node and edge eliminations to simplify a computation graph until neither elimination can be applied. DeePa then enumerates all global configurations for the final graph and chooses the one that minimizes the Cost function in Equation 1. After deciding the configuration for each node in the final graph, DeePa then decides the configuration for the eliminated nodes by undoing the node and edge eliminations in reverse order. When undoing a node elimination for node w, DeePa selects the configuration that minimizes Equation 2 for node w. After undoing all eliminations, DeePa has a configuration for every node in the original graph. In Appendix A.1, we prove that our algorithm finds an optimal global configuration. In our experiments, DeePa finds an optimal configuration for parallelizing the largest CNN we have worked with, Inception-v3, on 16 GPUs in about 100ms. 5 IMPLEMENTATION We found that it is non-trivial to parallelize the training of CNNs in the height, width, and channel dimensions in existing frameworks (e.g., TensorFlow, PyTorch, and Caffe2), and none provides an interface for controlling per-operation parallelism. We implemented DeePa in Legion (Bauer et al., 2012), a high-performance parallel runtime for distributed heterogeneous architectures, and use cuDNN (Chetlur et al., 2014) and cuBLAS (cub, 2016) as the underlying libraries for processing neural network operations. The following Legion features significantly simplify our implementation for DeePa. First, Legion supports high-dimensional partitioning that allows us to parallelize any operation in any combination of the dimensions. Second, Legion allows DeePa to control parallelism at the granularity of each operation. 
Third, Legion allows fine-grain control over the placement of data in memory. Fourth, Legion’s asynchronous tasking model makes it easy to exploit task as well as image parallelism. We also include two critical optimizations that help achieve good performance. Overlapping computation with data transfers. DeePa manages the gradients of each operation separately and transfers an operation’s gradients as long as its back propagation is completed. We have found that this can effectively hide the data transfer overhead for gradient synchronization. As a result, the synchronous training performance matches asynchronous training in DeePa, which allows users to use synchronous training with its better algorithmic efficiency. Distributing parameter servers. Existing frameworks use parameter servers to store and update variables for a CNN model. Parameter servers are located in CPU memory in TensorFlow and PyTorch. Because DeePa manages the parameters for each operation separately, DeePa can opportunistically distribute the parameter server onto the GPU memories whenever possible. This eliminates data transfers for operations whose gradients and parameter server are located on the same GPU and transforms all GPU to CPU copies into faster GPU to GPU copies. 6 RELATED WORK To the best of our knowledge, DeePa is the first deep learning framework that controls and optimizes the parallelism of neural networks in all dimensions at the granularity of each operation. Existing frameworks such as TensorFlow (Abadi et al., 2016), Caffe2 (Caf, 2016), and PyTorch (Pyt, 2017) use image parallelism to distribute the training of CNNs and only explore parallelism in the image dimension. The standard image parallelism configuration keeps a replica of the entire network on each worker, which results in large data transfers for synchronizing the gradients in each step. Mirhoseini et al. (2017) uses model parallelism that assigns each operation to a dedicated processor for training Inception-v3. It uses a reinforcement learning algorithm to optimize the placement of each operation on a GPU device. The learned device placement on 4 GPUs achieves 19% speedup compared to single GPU performance. However, parallelism in each operation is not explored. Krizhevsky (2014) introduces “one weird trick” (OWT) that combines image parallelism with model parallelism to accelerate the distributed training of AlexNet, which efficiently reduces the data transfer cost compared to the standard image parallelism configuration. In Section 7.1.2, we show that DeePa further reduces the overall data transfers for AlexNet by 3× and the per-step training time by 2.3× compared to OWT. Goyal et al. (2017) empirically shows no loss of accuracy for training ResNet-50 on the ImageNet dataset with a large minibatch size of 8192 images3. It uses the standard image parallelism configuration to distribute the training onto 256 GPUs and includes a number of optimizations for reducing communication overhead. As communication is a bottleneck in distributed deep learning, we believe our techniques for reducing data transfers can substantially benefit training on large numbers of GPUs. 7 EVALUATION We use AlexNet (Krizhevsky, 2014), VGG-16 (Simonyan & Zisserman, 2014), and Inceptionv3 (Szegedy et al., 2016) as benchmark CNNs and use the ImageNet dataset (Russakovsky et al., 2015) as the input. For each CNN, we compare the performance of DeePa against TensorFlow, PyTorch, and OWT. 
We implement OWT in DeePa by restricting all convolutional and pooling layers to use image parallelism and all fully-connected layers to use model parallelism. 7.1 CASE STUDY ON A 16-GPU MACHINE We conduct a detailed case study for training the three CNNs on a 16-GPU machine, with two Intel 10-core E5-2680 Xeon processors, 256 GB main memory, and 16 NVIDIA Tesla K80 GPUs4. We use all 16 GPUs for training each CNN model with a minibatch size of 512 images. As a result, each GPU processes a batch of 32 images in the image parallelism configuration. DeePa uses the search algorithm in Section 4 to find the optimal parallelism configurations, which requires 0.7, 1.1, and 4.8 seconds for AlexNet, VGG-16, and Inception-v3, respectively. Figure 7 shows the synchronous training throughput for a minibatch size of 512 images on 16 GPUs. When DeePa uses image parallelism for all operations, DeePa achieves competitive performance compared to the best of TensorFlow and PyTorch. The OWT approach that uses model parallelism for fully-connected layers speeds up the training throughput by 1.4×, 1.2×, and 1.07× compared to image parallelism using DeePa. The best configurations found by DeePa achieve 6.5×, 1.9×, and 1.5× speedup compared to TensorFlow and PyTorch. Three main optimizations in DeePa achieve most of the performance benefit over the other frameworks. First, DeePa significantly reduces data transfers in each step, as shown in Figure 8. Compared to image parallelism, the OWT approach reduces data transfers by 1.05-8.4×. However, the best configuration used by DeePa further reduces data transfers by 1.2-2.7× compared to OWT. Second, the optimization for overlapping computation with data transfers (described in Section 5) effectively hides data transfer latency and achieves better GPU utilization. The grey bars in Figure 7 3In SGD, the parameters are updated after processing a minibatch of training examples. 4The machine is equipped with 8 GPU cards, each of which has 2 Tesla K80 GPUs. illustrate DeePa’s performance when the overlap optimization is disabled, which shows that overlapping computation with data transfers can improve the training throughput by 10%-30%. Third, DeePa also improves performance by exploring parallelism in the height and width dimensions (see Section 7.1.3). 7.1.1 THE BEST CONFIGURATIONS We describe the best configurations discovered for AlexNet, VGG-16, and Inception-v3 in Sections 7.1.2 to 7.1.4. The best configurations have several similarities. First, for the beginning layers with large height/width dimensions and small channel dimensions, DeePa uses image parallelism on all available GPUs, since the data transfers for synchronizing gradients are much smaller than the data transfers for moving tensors between operations. Second, deeper layers in CNNs tend to have smaller height/width dimensions and larger channel dimensions. As a result, the cost for moving tensors between different operations decreases, while the cost for synchronizing gradients increases. DeePa adaptively reduces the number of GPU workers for these layers to reduce the expensive data transfers for synchronizing gradients at the cost of introducing cheaper data transfers for moving tensors. Third, DeePa uses model parallelism on a small number of GPU workers for fully-connected layers, because synchronizing gradients and moving tensors are both much more expensive than the compute time for fully-connected layers. 
DeePa reduces the data transfers for synchronizing gradients and moving tensors at the cost of using fewer GPUs. Conv11x11 Pooling Conv5x5 Pooling Conv3x3 Conv3x3 Conv3x3 Pooling Linear Linear Linear Softmax Figure 9 shows the global configuration for AlexNet on 16 GPU workers. Note that DeePa selects the parallelism configuration that optimizes the performance for each layer. Table 2 lists the cost for different configurations of the first fully-connected layer. The standard image parallelism configuration eliminates the cost for transferring the input tensors but introduces additional data transfers for synchronizing gradients. The OWT approach completely eliminates gradient synchronization at the cost of replicating the input tensors on every GPU worker. The configuration chosen by DeePa only uses 2 GPU workers for training the first fully-connected layer, which prolongs the compute time but significantly reduces the cost for both transferring input tensors and synchronizing gradients. As a result, DeePa reduces the total cost by 5× compared to other approaches. DeePa uses image parallelism for all convolutional and pooling layers, because the additional data transfer cost introduced by transforming configurations outweighs any performance benefits. 2 x Conv3x3 + Pooling 2 x Conv3x3 + Pooling 3 x Conv3x3 + Pooling 3 x Conv3x3 + Pooling 3 x Conv3x3 + Pooling Linear Linear Linear Softmax DeePa uses similar configurations for parallelizing the fully-connected layers in VGG-16 (Figure 10). In addition, DeePa also uses a different configuration to cooperatively accelerate the last three convolutional layers (the yellow node in Figure 10). Table 3 lists the cost for different parallelism configurations for the last three convolutional layers. The configuration with optimal total cost uses only four GPU workers for the last three convolutional layers to reduce data transfers for synchronizing gradients. DeePa also exploits parallelism in the height and width dimensions to further reduce the compute time. 7.1.4 INCEPTION-V3 Conv1x1 Conv3x3 Conv1x1 The Inception-v3 model has multiple Inception modules (Szegedy et al., 2016). Each module has several branches of convolutional and pooling layers, which are then concatenated as the output tensor of the module. Figure 11 shows the global configuration for Inception-v3. DeePa uses different configurations to parallelize different branches for the InceptionE1 module, as shown in Figure 12. We found that this configuration reduces data transfers by 30% in InceptionE1 and InceptionE2 and reduces overall data transfers by 20%. 7.2 MINIBATCH SIZE The minibatch size plays an important rule on the performance of CNNs. Figure 13 compares DeePa, PyTorch, and TensorFlow with different minibatch sizes. All three networks were trained on 16 Tesla K80 GPUs on a single node, as described in Section 7.1. We were not able to train VGG-16 and Inception-v3 with a minibatch size of 2048 images, because the required metadata size exceeds the aggregate memory capacity of the 16 GPUs. Figure 13 shows that, DeePa achieves constant speedups compared to PyTorch and TensorFlow for various minibatch sizes. In particular, DeePa achieves 4.6-6.5×, 1.6-1.9×, and 1.2-1.5× speedup for AlexNet, VGG-16, and Inception-v3, respectively. 7.3 MULTI-NODE RESULTS We evaluate the scalability of different frameworks by comparing their training throughput with different number of GPUs and compute nodes. 
The experiments were performed on a GPU cluster with 4 nodes, each of which is equipped with two Intel 10-core E5-2600 Xeon processors, 256 GB main memory, and four NVIDIA Tesla P100 GPUs. GPUs on the same node are connected by NVLink, and nodes are connected over 100Gb/s EDR Infiniband. Figure 14 shows the performance comparison among DeePa, PyTorch, and TensorFlow for weak scaling. DeePa achieves competitive performance compared to PyTorch and TensorFlow for training on a single GPU, in which case all three frameworks place all operations on a single GPU. For training on 4 GPUs on a single node, DeePa achieves 3.1×, 1.6×, and 1.3× speedup for AlexNet, VGG-16, and Inception-v3, respectively. DeePa achieves even better speedups for training on multiple nodes, where the data transfer time becomes a larger component of the per-iteration training time. For training on 4 nodes, DeePa achieves 8.1×, 3.2×, and 1.8× speedup for AlexNet, VGG-16, and Inception-v3, respectively. 8 CONCLUSION We have presented DeePa, a deep learning framework that explores parallelism in all parallelizable dimensions to accelerate the training of CNNs. DeePa optimizes the parallelism configuration chosen at the granularity of individual layers. DeePa achieves up to 6.5× speedup for training CNNs and reduces overall data transfers by up to 23× compared to state-of-the-art deep learning frameworks. A APPENDIX A.1 NODE AND EDGE ELIMINATION We prove the correctness of the node and edge eliminations in Section 4. In particular, we prove that after applying node and edge eliminations, the modified graph has the same optimal configuration as the original graph. A.1.1 NODE ELIMINATION For a given computation graph G = (V,E), applying a node elimination on w requires w having a single in-edge e1 = (u,w) and a single out-edge e2 = (w, v). The node elimination results in a modified graph G′ = (V′, E′), where V′ = V − {w}, E′ = E − e1 − e2 + e′, and e′ = (u, v). Theorem 1. Consider graphs (V,E) and the result of a single node elimination (V′, E′). Then an optimal configuration of (V,E) is also an optimal configuration of (V′, E′), and an optimal configuration of (V′, E′) is extensible to an optimal configuration of (V,E). Proof. The Cost function is defined in Equation 1. Let g be any configuration. We first compute the difference between Cost(g, (V,E)) and Cost(g, (V′, E′)). Cost(g, (V,E)) − Cost(g, (V′, E′)) = [∑v∈V {v.compute(g(v)) + v.update(g(v))} + ∑e=(u,v)∈E e.xfer(g(u), g(v))] − [∑v∈V′ {v.compute(g(v)) + v.update(g(v))} + ∑e=(u,v)∈E′ e.xfer(g(u), g(v))] = w.compute(g(w)) + w.update(g(w)) + e1.xfer(g(u), g(w)) + e2.xfer(g(w), g(v)) − e′.xfer(g(u), g(v)) (4) Now assume g is an optimal configuration for (V,E). Then we have w.compute(g(w)) + w.update(g(w)) + e1.xfer(g(u), g(w)) + e2.xfer(g(w), g(v)) = min cw {w.compute(cw) + w.update(cw) + e1.xfer(g(u), cw) + e2.xfer(cw, g(v))} (5) Therefore, g is an optimal configuration of (V′, E′). For the other direction, note that if g is an optimal configuration of (V′, E′), then it can be extended to an optimal configuration of (V,E) by adding the node w with the same minimal assignment. A.1.2 EDGE ELIMINATION For a computation graph G = (V,E), applying an edge elimination on e1 = (u, v) and e2 = (u, v) results in a modified graph G′ = (V,E′), where E′ = E − e1 − e2 + e′ and e′ = (u, v). We prove that Cost(g, (V,E)) = Cost(g, (V,E′)) for any global configuration g of (V,E). Theorem 2.
For any global configuration g of graph G = (V,E), Cost(g, (V,E)) = Cost(g, (V,E′)), where (V,E′) is the modified graph of (V,E) after an edge elimination. Proof. We compute the difference between Cost(g, (V,E)) and Cost(g, (V,E′)). Cost(g, (V,E)) − Cost(g, (V,E′)) = e1.xfer(g(u), g(v)) + e2.xfer(g(u), g(v)) − e′.xfer(g(u), g(v)) = 0 (6) The last equation uses Equation 3. A.2 RELATED WORK ON OVERLAPPING COMMUNICATION WITH DATA TRANSFER The overlap optimization in Section 5 is motivated by Goyal et al. (2017), which performs gradient aggregation in parallel with back propagation to scale synchronous training to large numbers of GPUs. We extend their design and implementation by also enabling the optimization for asynchronous training in DeePa. A.3 PROFILING RESULTS We show profiling results for visualizing the performance bottlenecks in different parallelism approaches. The experiment was performed on a single node with four Tesla P100 GPUs (as described in Section 7.3). We enable overlapping computation with data transfers (described in Section 5) in this experiment. Figure 15 shows the profiling results for training VGG-16 on 4 GPUs with different parallelism configurations. Note that DeePa with image parallelism achieves 10% higher training throughput compared to PyTorch and TensorFlow, as shown in Figure 14. Figure 15a shows that all GPUs are highly utilized during forward and backward passes, as indicated by the tight packing of tasks in the timeline. However, the image parallelism approach requires moving 4GB of metadata in every iteration, which cannot be fully overlapped with back propagation; therefore, the image parallelism approach has a performance gap between iterations (shown as the white space on the GPU timelines). Figure 15b shows the profiling of the optimal parallelism configuration chosen by DeePa, which uses image parallelism on 4 GPUs for all convolutional layers and pooling layers and uses model parallelism on 2 GPUs for the fully-connected layers. Therefore, the training with the optimal configuration includes data transfers for each fully-connected layer, which adds small performance gaps at the end of the forward pass and the beginning of the backward pass (shown as the small white space on the GPU timelines). However, the optimal configuration reduces the per-iteration data transfers from 4GB to 490MB, which effectively hides data transfer overhead and achieves better GPU utilization. As a result, the optimal configuration reduces the per-iteration training time from 0.34 seconds to 0.24 seconds. A.4 IMAGENET-22K We compare the performance of DeePa, PyTorch, and TensorFlow on the ImageNet-22K dataset (Russakovsky et al., 2015) that contains 21,841 different categories (the ImageNet dataset used in Section 7 contains 1,000 categories). The last fully-connected layer in AlexNet, VGG-16, and Inception-v3 originally has 1,000 neurons followed by a 1,000-way softmax layer. To train the three networks on the ImageNet-22K dataset, we change the last fully-connected layer to have 21,841 neurons and use a 21,841-way softmax layer at the end. The modified networks were trained on 16 Tesla K80 GPUs on a single node with a minibatch size of 512 images. Figure 16 compares the training throughput and per-iteration data transfers among DeePa, PyTorch, and TensorFlow on the ImageNet and ImageNet-22K datasets.
Figure 16a shows that, on the ImageNet-22K dataset, the training throughput of PyTorch and TensorFlow is reduced by 20%-45%, while DeePa’s throughput falls off by 3%, compared to training on the original ImageNet dataset. Figure 16b compares the per-iteration data transfers between image parallelism and the global configurations used by DeePa. Using image parallelism increases the data transfers in each iteration by 5-10GB, while DeePa only increases the per-iteration data transfers by 40MB. As a result, for training on the ImageNet-22K dataset, DeePa reduces the per-iteration data transfers by 3.7-44.5× compared to image parallelism.
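To make the classifier modification described in A.4 concrete, the following is a minimal PyTorch sketch of swapping a 1,000-way head for a 21,841-way one; it illustrates the dataset adaptation only (DeePa itself is built on Legion, not PyTorch), and the helper name is ours.

import torch.nn as nn
from torchvision import models

def adapt_to_imagenet22k(model: nn.Module, num_classes: int = 21841) -> nn.Module:
    # Replace the final 1,000-way linear layer with a num_classes-way layer.
    # torchvision's VGG-16 ends its classifier Sequential with Linear(4096, 1000);
    # AlexNet is analogous, while Inception-v3 stores its head in model.fc instead.
    last = model.classifier[-1]
    model.classifier[-1] = nn.Linear(last.in_features, num_classes)
    return model

vgg16_22k = adapt_to_imagenet22k(models.vgg16())

A 21,841-way softmax is then applied to the new layer's outputs during training, exactly as described above.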
1. What is the main contribution of the paper regarding convolutional neural networks? 2. What are the strengths and weaknesses of the proposed approach compared to other popular frameworks? 3. Do you have any concerns about the paper's claims and results, particularly regarding its significance and applicability to newer devices? 4. Are there any questions you have regarding the paper's content or methodology?
Review
Review The paper proposes an approach that offers speedup on common convolutional neural networks. It presents the approach well and shows results comparing with other popular frameworks used in the field. Originality - The automation of parallelism across the different dimensions in each of the layers appears somewhat new. Although parallelism across each of the individual dimensions has been explored (batch parallel is most common and best supported, height and width is discussed at least in the DistBelief paper), automatically exploring this to find the most efficient approach is new. The splitting across channels seems not to have been covered in a paper before. Significance - Paper shows a significant speedup over existing approaches on a single machine (16 GPUs). It is unclear how well this would translate across machines or to more devices, and also on newer devices - the experiments were all done on 16 K80s (3 generations old GPUs). While the approach is interesting, its impact also depends on the speedup on the common hardware used today. Pros: - Providing better parallelism opportunities for convolutional neural networks - Simple approach to finding optimal global configurations that seems to work well - Positive results with significant speedups across 3 different networks Cons: - Unclear if speedups hold on newer devices - Useful to see how this scales across more than 1 machine - Claim on overlapping computation with data transfer seems incorrect. I am pretty sure TensorFlow and possibly PyTorch supports this. Questions: - How long does finding the optimal global configuration take for each model?
ICLR
Title Exploring the Hidden Dimension in Accelerating Convolutional Neural Networks Abstract DeePa is a deep learning framework that explores parallelism in all parallelizable dimensions to accelerate the training process of convolutional neural networks. DeePa optimizes parallelism at the granularity of each individual layer in the network. We present an elimination-based algorithm that finds an optimal parallelism configuration for every layer. Our evaluation shows that DeePa achieves up to 6.5× speedup compared to state-of-the-art deep learning frameworks and reduces data transfers by up to 23×. 1 INTRODUCTION Training convolutional neural networks (CNNs) is increasingly compute-intensive and timeconsuming. It takes days or even weeks to train deep CNNs from scratch (Szegedy et al., 2014; Zeiler & Fergus, 2014; Simonyan & Zisserman, 2014; Szegedy et al., 2016). Existing deep learning frameworks such as TensorFlow, PyTorch, and Caffe2 parallelize the training process onto multiple processors (usually GPUs) using image parallelism1 dividing the entire image dataset into batches with the same number of images and assigning each batch to a dedicated processor. The standard parallelization of CNN training only exploits image parallelism. However, other dimensions can also parallelize the training process. For example, in CNNs for 2D images, data is commonly organized as 4-dimensional tensors (i.e., image, height, width, channel). The image dimension includes an index for each image in the input dataset. The height and width dimensions specify a position in an image. For a particular position, the channel dimension2 indexes different neurons for that position. Exploring these other parallelizable dimensions can potentially reduce the compute time and data transfer cost when training CNNs (see Section 2). Moreover, different layers in a CNN may prefer different parallelism configurations for achieving optimal performance. We propose DeePa, a deep learning framework that explores parallelism in all parallelizable dimensions to accelerate the training of CNNs. To the best of our knowledge, DeePa is the first system that models and exploits the parallelism of neural networks at the granularity of each individual layer. To generate a parallelism configuration for each layer, DeePa uses an elimination-based algorithm that automatically finds the configuration with the best estimated performance. The main contributions of this paper are: • We present DeePa, a deep learning framework that explores parallelism in all parallelizable dimensions to accelerate the training of CNNs. • The parallelization strategy is selected at the granularity of each individual layer. • We present an elimination-based algorithm for finding the parallelism configuration with optimal estimated performance for each layer. • Our evaluation shows that, compared to state-of-the-art deep learning frameworks (e.g., TensorFlow and PyTorch), DeePa achieves 6.5×, 1.9×, and 1.5× speedup for AlexNet, 1Some papers use the term data parallelism to refer to parallelism across images. Since this paper involves parallelizing the training dataset in other data dimensions, we use image parallelism to distinguish this from other parallelization strategies. 2Some papers use the term depth to refer to different neurons for a position. In this paper, depth refers to the number of layers for an entire neural network and we use channel for the neurons for a position. 2 MOTIVATION This work is motivated by the following observations. 
2.1 ACCELERATING COMPUTATION THROUGHPUT Convolutional layers generally consume the bulk of the training time in CNNs, and parallelizing training in different data dimensions results in significantly different performance. Figure 1 shows the relative speed of training six different convolutional layers from AlexNet, VGG-16, and Inception-v3. The properties of the convolutional layers are shown in Table 1. For each convolutional layer, we tried parallelizing the computation in each individual parallelizable dimension as well as combinations of different parallelizable dimensions, and we report the performance of the standard parallelization over images along with the worst and best parallelization strategies we discovered. Figure 1 shows that different parallelism configurations result in very different performance, and image parallelism generally achieves suboptimal performance. Therefore, exploring parallelism in other dimensions can potentially accelerate the training of convolutional layers. 2.2 REDUCING DATA TRANSFER COST Different parallelization strategies can also result in significantly different amounts of data movement. Figure 3 shows an example of parallelizing the first fully-connected layer of VGG-16 on two GPUs in different dimensions. In image parallelism (Figure 3a), each GPU processes a batch of images and computes the gradient for the entire fully-connected layer. This requires each GPU to synchronize the gradients for the entire fully-connected layer (shown as the shadow rectangles) after each step. An alternative approach (Figure 3b) parallelizes in the channel dimension by assigning a subset of the output channels to each GPU. As a result, different GPUs compute the gradients for disjoint subsets of the fully-connected layer, which eliminates transferring the fully-connected layer but introduces additional data transfers for input tensors (shown as the shadow rectangles). For this particular case, using parallelism in the channel dimension reduces data transfer costs by 12×. 2.3 OPTIMIZING PER-LAYER PERFORMANCE When processing a batch of images, increasing the number of workers does not always improve overall execution time, due to the data transfer overhead to synchronize gradients across different workers. Figure 2 shows the per-step training time for three different layers in Inception-v3 for a batch size of 512 images on up to 16 GPUs. The training time includes forward processing, backward propagation, and gradient aggregation. The figure shows that different layers in a neural network may prefer different hardware configurations, and there is no single configuration that is optimal for all layers. For example, the third layer performs best on 16 GPUs while the last layer performs best on 4 GPUs. Thus, a parallelism configuration includes both selecting the data dimensions to be parallelized and the number of parallel workers (or, equivalently, the number of subsets into which the data is partitioned). 3 DEEPA Similar to TensorFlow and PyTorch, DeePa uses computation graphs to describe dependencies between operations. In a computation graph G = (V,E), each node n ∈ V is an operation (e.g., a convolution or matrix-multiply), and each directed edge (u, v) ∈ E is a tensor that is an output of u and an input of v. One key difference between DeePa and TensorFlow or PyTorch is that each node in the DeePa computation graph also includes a configuration that describes how the corresponding operation is parallelized across different workers. 
For each parallelizable dimension (i.e., image, height, width, and channel), the configuration includes an integer that describes the degree of parallelism in that dimension. For a configuration, the product of the integers over all dimensions is the number of workers needed to process the operation in that configuration. Figure 4 demonstrates some example configurations that explore parallelism in a single dimension as well as combinations of different dimensions. DeePa assumes equal partitioning in each dimension. As a result, each worker receives the same size input, which provides well-balanced workload distribution in our experiments. For each node in the computation graph, its configuration describes how the output tensor is divided onto multiple workers. Each worker computes a disjoint subset of the output tensor, and thus each worker can process the operation in parallel without data dependencies. Given a node’s configuration, DeePa calculates the input sets for each worker and automatically schedules proper data transfers between operations. DeePa also provides three additional functions: • For each node v and configuration c, v.compute(c) estimates the time to process the corresponding operation under the parallelism configuration c. This includes both the forward processing and back propagation time and is estimated by running the operation in that configuration multiple times on the device and measuring the average execution time. • For each edge e = (u, v), e.xfer(cu, cv) estimates the time to transfer the input tensor e to each worker, using the size of the data to be moved and the known communication bandwidth. Note that e.xfer(cu, cv) is zero if u and v have the same configuration (i.e., cu = cv), in which case no data is transferred. As with compute(), we precompute the xfer() function for each edge in the graph by calculating the overall data transfer size for all possible source and destination configurations. • For each node v and configuration c, v.update(c) estimates the time to update parameters for the corresponding operation. We use the data transfer time to approximate the update time, since the data transfer time is much longer than the compute time for updating parameters. Note that different configurations can have significantly different update time, as described in Section 2.2. A global configuration g includes a parallelism configuration for each node in a computation graph: g(v) describes the parallelism configuration for node v. Using the functions defined above, we can model the per-step execution time for a computation graph: Cost(g, (V,E)) = ∑ v∈V {v.compute(g(v)) + v.update(g(v))}+ ∑ e=(u,v)∈E e.xfer(g(u), g(v)) (1) Cost(g, (V,E)) estimates the per-step execution time if the computation graph (V,E) is parallelized using global configuration g. This execution time includes forwarding processing, backward propagation, and gradient aggregation. Equation 1 expresses the problem of finding the configuration for each individual node as a global optimization problem. 4 FINDING OPTIMAL GLOBAL CONFIGURATIONS We now describe our algorithm for finding a global configuration that minimizes Equation 1. In DeePa, each node can select any of a fixed (but large) set of parallelism configurations. Therefore the number of potential global configurations is exponential in the number of nodes in a computation graph, which makes it impractical to enumerate all global configurations for deep CNNs such as VGG-16 and Inception-v3. 
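As a concrete reference for Equation 1, below is a minimal Python sketch of evaluating the per-step cost of a single candidate global configuration g; the names and data structures are our own illustration rather than DeePa's actual (Legion-based) implementation. Enumerating this cost over every possible g is exactly the exponential search that the elimination algorithm described next avoids.

from dataclasses import dataclass
from typing import Callable, Dict, List, Tuple

# A parallelism configuration: the degree of parallelism in each dimension,
# e.g. (image, height, width, channel); their product is the worker count.
Config = Tuple[int, int, int, int]

@dataclass
class Node:
    compute: Callable[[Config], float]   # estimated forward + backward time
    update: Callable[[Config], float]    # estimated parameter-update time

@dataclass
class Edge:
    src: str
    dst: str
    xfer: Callable[[Config, Config], float]  # tensor-transfer time, 0 if configs match

def per_step_cost(nodes: Dict[str, Node], edges: List[Edge], g: Dict[str, Config]) -> float:
    # Equation 1: per-node compute/update costs plus per-edge transfer costs.
    node_cost = sum(n.compute(g[name]) + n.update(g[name]) for name, n in nodes.items())
    edge_cost = sum(e.xfer(g[e.src], g[e.dst]) for e in edges)
    return node_cost + edge_cost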
However, the CNNs we have seen in practice exhibit strong locality: each node is only connected to a few nodes with similar depth in a computation graph. Based on this observation, we use the following two elimination strategies to iteratively simplify the computation graph while preserving the globally optimal configuration. Node elimination. For each node w with a single in-edge e1 = (u,w) and a single out-edge e2 = (w, v), we remove node w and the two edges e1 and e2 from the graph and insert a new edge e′ = (u, v) (shown in Figure 5a). The xfer() function for node e′ is e′.xfer(cu, cv) = min cw {e1.xfer(cu, cw) + w.compute(cw) + w.update(cw) + e2.xfer(cw, cv)} (2) Note that because we have precomputed the xfer() function for edges in the original graph, we can similarly compute the xfer() function for the transitive edge added by a node elimination; i.e., we use dynamic programming to compute the optimal configuration for node w for every possible choice of configurations for nodes u and v. For CNNs with a linear computation graph (e.g., AlexNet and VGG-16), node elimination is sufficient to reduce the original graph to a graph with only 2 nodes. Edge elimination. For two edges with the same source and destination node (i.e., e1 = (u, v) and e2 = (u, v)), we can remove e1 and e2 from the graph and insert a new edge e′ = (u, v) (shown in Figure 5b). The xfer() function for node e′ is e′.xfer(cu, cv) = e1.xfer(cu, cv) + e2.xfer(cu, cv) (3) As with node elimination, we compute the xfer() function for e′ using the already computed xfer() functions for e1 and e2. Figure 6 shows how DeePa iteratively eliminates nodes and edges for an Inception-v3 module. The full Inception-v3 computation graph has 120 nodes, which DeePa reduces to a 2-node graph. DeePa iteratively uses node and edge eliminations to simplify a computation graph until neither elimination can be applied. DeePa then enumerates all global configurations for the final graph and chooses the one that minimizes the Cost function in Equation 1. After deciding the configuration for each node in the final graph, DeePa then decides the configuration for the eliminated nodes by undoing the node and edge eliminations in reverse order. When undoing a node elimination for node w, DeePa selects the configuration that minimizes Equation 2 for node w. After undoing all eliminations, DeePa has a configuration for every node in the original graph. In Appendix A.1, we prove that our algorithm finds an optimal global configuration. In our experiments, DeePa finds an optimal configuration for parallelizing the largest CNN we have worked with, Inception-v3, on 16 GPUs in about 100ms. 5 IMPLEMENTATION We found that it is non-trivial to parallelize the training of CNNs in the height, width, and channel dimensions in existing frameworks (e.g., TensorFlow, PyTorch, and Caffe2), and none provides an interface for controlling per-operation parallelism. We implemented DeePa in Legion (Bauer et al., 2012), a high-performance parallel runtime for distributed heterogeneous architectures, and use cuDNN (Chetlur et al., 2014) and cuBLAS (cub, 2016) as the underlying libraries for processing neural network operations. The following Legion features significantly simplify our implementation for DeePa. First, Legion supports high-dimensional partitioning that allows us to parallelize any operation in any combination of the dimensions. Second, Legion allows DeePa to control parallelism at the granularity of each operation. 
Third, Legion allows fine-grain control over the placement of data in memory. Fourth, Legion’s asynchronous tasking model makes it easy to exploit task as well as image parallelism. We also include two critical optimizations that help achieve good performance. Overlapping computation with data transfers. DeePa manages the gradients of each operation separately and transfers an operation’s gradients as long as its back propagation is completed. We have found that this can effectively hide the data transfer overhead for gradient synchronization. As a result, the synchronous training performance matches asynchronous training in DeePa, which allows users to use synchronous training with its better algorithmic efficiency. Distributing parameter servers. Existing frameworks use parameter servers to store and update variables for a CNN model. Parameter servers are located in CPU memory in TensorFlow and PyTorch. Because DeePa manages the parameters for each operation separately, DeePa can opportunistically distribute the parameter server onto the GPU memories whenever possible. This eliminates data transfers for operations whose gradients and parameter server are located on the same GPU and transforms all GPU to CPU copies into faster GPU to GPU copies. 6 RELATED WORK To the best of our knowledge, DeePa is the first deep learning framework that controls and optimizes the parallelism of neural networks in all dimensions at the granularity of each operation. Existing frameworks such as TensorFlow (Abadi et al., 2016), Caffe2 (Caf, 2016), and PyTorch (Pyt, 2017) use image parallelism to distribute the training of CNNs and only explore parallelism in the image dimension. The standard image parallelism configuration keeps a replica of the entire network on each worker, which results in large data transfers for synchronizing the gradients in each step. Mirhoseini et al. (2017) uses model parallelism that assigns each operation to a dedicated processor for training Inception-v3. It uses a reinforcement learning algorithm to optimize the placement of each operation on a GPU device. The learned device placement on 4 GPUs achieves 19% speedup compared to single GPU performance. However, parallelism in each operation is not explored. Krizhevsky (2014) introduces “one weird trick” (OWT) that combines image parallelism with model parallelism to accelerate the distributed training of AlexNet, which efficiently reduces the data transfer cost compared to the standard image parallelism configuration. In Section 7.1.2, we show that DeePa further reduces the overall data transfers for AlexNet by 3× and the per-step training time by 2.3× compared to OWT. Goyal et al. (2017) empirically shows no loss of accuracy for training ResNet-50 on the ImageNet dataset with a large minibatch size of 8192 images3. It uses the standard image parallelism configuration to distribute the training onto 256 GPUs and includes a number of optimizations for reducing communication overhead. As communication is a bottleneck in distributed deep learning, we believe our techniques for reducing data transfers can substantially benefit training on large numbers of GPUs. 7 EVALUATION We use AlexNet (Krizhevsky, 2014), VGG-16 (Simonyan & Zisserman, 2014), and Inceptionv3 (Szegedy et al., 2016) as benchmark CNNs and use the ImageNet dataset (Russakovsky et al., 2015) as the input. For each CNN, we compare the performance of DeePa against TensorFlow, PyTorch, and OWT. 
We implement OWT in DeePa by restricting all convolutional and pooling layers to use image parallelism and all fully-connected layers to use model parallelism. 7.1 CASE STUDY ON A 16-GPU MACHINE We conduct a detailed case study for training the three CNNs on a 16-GPU machine, with two Intel 10-core E5-2680 Xeon processors, 256 GB main memory, and 16 NVIDIA Tesla K80 GPUs4. We use all 16 GPUs for training each CNN model with a minibatch size of 512 images. As a result, each GPU processes a batch of 32 images in the image parallelism configuration. DeePa uses the search algorithm in Section 4 to find the optimal parallelism configurations, which requires 0.7, 1.1, and 4.8 seconds for AlexNet, VGG-16, and Inception-v3, respectively. Figure 7 shows the synchronous training throughput for a minibatch size of 512 images on 16 GPUs. When DeePa uses image parallelism for all operations, DeePa achieves competitive performance compared to the best of TensorFlow and PyTorch. The OWT approach that uses model parallelism for fully-connected layers speeds up the training throughput by 1.4×, 1.2×, and 1.07× compared to image parallelism using DeePa. The best configurations found by DeePa achieve 6.5×, 1.9×, and 1.5× speedup compared to TensorFlow and PyTorch. Three main optimizations in DeePa achieve most of the performance benefit over the other frameworks. First, DeePa significantly reduces data transfers in each step, as shown in Figure 8. Compared to image parallelism, the OWT approach reduces data transfers by 1.05-8.4×. However, the best configuration used by DeePa further reduces data transfers by 1.2-2.7× compared to OWT. Second, the optimization for overlapping computation with data transfers (described in Section 5) effectively hides data transfer latency and achieves better GPU utilization. The grey bars in Figure 7 3In SGD, the parameters are updated after processing a minibatch of training examples. 4The machine is equipped with 8 GPU cards, each of which has 2 Tesla K80 GPUs. illustrate DeePa’s performance when the overlap optimization is disabled, which shows that overlapping computation with data transfers can improve the training throughput by 10%-30%. Third, DeePa also improves performance by exploring parallelism in the height and width dimensions (see Section 7.1.3). 7.1.1 THE BEST CONFIGURATIONS We describe the best configurations discovered for AlexNet, VGG-16, and Inception-v3 in Sections 7.1.2 to 7.1.4. The best configurations have several similarities. First, for the beginning layers with large height/width dimensions and small channel dimensions, DeePa uses image parallelism on all available GPUs, since the data transfers for synchronizing gradients are much smaller than the data transfers for moving tensors between operations. Second, deeper layers in CNNs tend to have smaller height/width dimensions and larger channel dimensions. As a result, the cost for moving tensors between different operations decreases, while the cost for synchronizing gradients increases. DeePa adaptively reduces the number of GPU workers for these layers to reduce the expensive data transfers for synchronizing gradients at the cost of introducing cheaper data transfers for moving tensors. Third, DeePa uses model parallelism on a small number of GPU workers for fully-connected layers, because synchronizing gradients and moving tensors are both much more expensive than the compute time for fully-connected layers. 
DeePa reduces the data transfers for synchronizing gradients and moving tensors at the cost of using fewer GPUs. Conv11x11 Pooling Conv5x5 Pooling Conv3x3 Conv3x3 Conv3x3 Pooling Linear Linear Linear Softmax Figure 9 shows the global configuration for AlexNet on 16 GPU workers. Note that DeePa selects the parallelism configuration that optimizes the performance for each layer. Table 2 lists the cost for different configurations of the first fully-connected layer. The standard image parallelism configuration eliminates the cost for transferring the input tensors but introduces additional data transfers for synchronizing gradients. The OWT approach completely eliminates gradient synchronization at the cost of replicating the input tensors on every GPU worker. The configuration chosen by DeePa only uses 2 GPU workers for training the first fully-connected layer, which prolongs the compute time but significantly reduces the cost for both transferring input tensors and synchronizing gradients. As a result, DeePa reduces the total cost by 5× compared to other approaches. DeePa uses image parallelism for all convolutional and pooling layers, because the additional data transfer cost introduced by transforming configurations outweighs any performance benefits. 2 x Conv3x3 + Pooling 2 x Conv3x3 + Pooling 3 x Conv3x3 + Pooling 3 x Conv3x3 + Pooling 3 x Conv3x3 + Pooling Linear Linear Linear Softmax DeePa uses similar configurations for parallelizing the fully-connected layers in VGG-16 (Figure 10). In addition, DeePa also uses a different configuration to cooperatively accelerate the last three convolutional layers (the yellow node in Figure 10). Table 3 lists the cost for different parallelism configurations for the last three convolutional layers. The configuration with optimal total cost uses only four GPU workers for the last three convolutional layers to reduce data transfers for synchronizing gradients. DeePa also exploits parallelism in the height and width dimensions to further reduce the compute time. 7.1.4 INCEPTION-V3 Conv1x1 Conv3x3 Conv1x1 The Inception-v3 model has multiple Inception modules (Szegedy et al., 2016). Each module has several branches of convolutional and pooling layers, which are then concatenated as the output tensor of the module. Figure 11 shows the global configuration for Inception-v3. DeePa uses different configurations to parallelize different branches for the InceptionE1 module, as shown in Figure 12. We found that this configuration reduces data transfers by 30% in InceptionE1 and InceptionE2 and reduces overall data transfers by 20%. 7.2 MINIBATCH SIZE The minibatch size plays an important rule on the performance of CNNs. Figure 13 compares DeePa, PyTorch, and TensorFlow with different minibatch sizes. All three networks were trained on 16 Tesla K80 GPUs on a single node, as described in Section 7.1. We were not able to train VGG-16 and Inception-v3 with a minibatch size of 2048 images, because the required metadata size exceeds the aggregate memory capacity of the 16 GPUs. Figure 13 shows that, DeePa achieves constant speedups compared to PyTorch and TensorFlow for various minibatch sizes. In particular, DeePa achieves 4.6-6.5×, 1.6-1.9×, and 1.2-1.5× speedup for AlexNet, VGG-16, and Inception-v3, respectively. 7.3 MULTI-NODE RESULTS We evaluate the scalability of different frameworks by comparing their training throughput with different number of GPUs and compute nodes. 
The experiments were performed on a GPU cluster with 4 nodes, each of which is equipped with two Intel 10-core E5-2600 Xeon processors, 256G main memory, and four NVIDIA Tesla P100 GPUs. GPUs on the same node are connected by NVLink, and nodes are connected over 100Gb/s EDR Infiniband. Figure 14 shows the performance comparison among DeePa, PyTorch, and TensorFlow for weakscaling. DeePa achieves competitive performance compared to PyTorch and TensorFlow for training on a single GPU, in which all three frameworks place all operations on a single GPU. For training on 4 GPUs on a single node, DeePa achieves 3.1×, 1.6×, and 1.3× speedup for AlexNet, VGG-16, and Inception-v3, respectively. DeePa achieves even better performance speedups for trainings on multiple nodes, where the data transfer time becomes a larger component of the per-iteration training time. For training on 4 nodes, DeePa achieves 8.1×, 3.2×, and 1.8× speedup for AlexNet, VGG-16, and Inception-v3, respectively. 8 CONCLUSION We have presented DeePa, a deep learning framework that explores parallelism in all parallelizable dimensions to accelerate the training of CNNs. DeePa optimizes the parallelism configuration chosen at the granularity of individual layers. DeePa achieves up to 6.5× for training CNNs and reduces overall data transfers by up to 23× compared to state-of-the-art deep learning frameworks. A APPENDIX A.1 NODE AND EDGE ELIMINATION We prove the correctness of the node and edge eliminations in Section 4. In particular, we prove that after applying node and edge eliminations, the modified graph has the same optimal configuration as the original graph. A.1.1 NODE ELIMINATION For a given computation graph G = (V,E), applying a node elimination on w requires w having a single in-edge e1 = (u,w) and a single out-edge e2 = (w, v). The node elimination results in a modified graph G′ = (V ′, E′), where V ′ = V − {w}, E′ = E − e1 − e2 + e′, and e′ = (u, v). Theorem 1. Consider graphs (V,E) and the result of a single node elimination (V ′, E′). Then an optimal configuration of V,E) is also an optimal configuration of (V ′, E′), and an optimal configuration of (V ′, E′) is extensible to a an optimal configuration of (V,E). Proof. The Cost function is defined in Equation 1. Let g be any configuration. We first compute the difference between Cost(g, (V,E)) and Cost(g, (V ′, E′)). Cost(g, (V,E))− Cost(g, (V ′, E′)) = ∑ v∈V {v.compute(g(v)) + v.update(g(v))}+ ∑ e=(u,v)∈E e.xfer(g(u), g(v)) − ∑ v∈V ′ {v.compute(g(v)) + v.update(g(v))}+ ∑ e=(u,v)∈E′ e.xfer(g(u), g(v)) =w.compute(g(w)) + w.update(g(w)) + e1.xfer(g(u), g(w)) + e2.xfer(g(w), g(v))− e′.xfer(g(u), g(v)) (4) Now assume g is an optimal configuration for (V,E). Then we have w.compute(g(w)) + w.update(g(w)) + e1.xfer(g(u), g(w)) + e2.xfer(g(w), g(v)) =min cw {w.compute(cw) + w.update(cw) + e1.xfer(g(u), cw) + e2.xfer(cw, g(v))} (5) Therefore, g is an optimal configuration of (V ′, E′). For the other direction, note that if g is an optimal configuration of (V ′, E′), then it can be extended to an optimal configuration of (V,E) by adding the node w with the same minimal assignment. A.1.2 EDGE ELIMINATION For a computation graph G(V,E), applying an edge elimination on e1 = (u, v) and e2 = (u, v) results in a modified graph G′ = (V,E′), where E′ = E − e1 − e2 + e′ and e′ = (u, v). We prove that Cost(g, (V,E)) = Cost(g, (V,E′)) for any global configuration g of (V,E). Theorem 2. 
For any global configuration g of graph G = (V,E), Cost(g, (V,E)) = Cost(g, (V,E′)), where (V,E′) is the modified graph of (V,E) after an edge elimination. Proof. We compute the difference between Cost(g, (V,E)) and Cost(g, (V,E′)). Cost(g, (V,E))− Cost(g, (V,E′)) =e1.xfer(g(u), g(v))− e2.xfer(g(u), g(v)) + e′.xfer(g(u), g(v)) =0 (6) The last equation uses Equation 3. A.2 RELATED WORK ON OVERLAPPING COMMUNICATION WITH DATA TRANSFER The overlap optimization in Section 5 is motivated by Goyal et al. (2017), which performs gradient aggregation in parallel with back propagation to scale synchronous training to large number of GPUs. We extend their design and implementation by also enabling the optimization for asynchronous training in DeePa. A.3 PROFILING RESULTS We show profiling results for visualizing the performance bottlenecks in different parallelism approaches. The experiment was performed on a single node with four Tesla P100 GPUs (as described in Section 7.3). We enable overlapping computation with data transfers (described in Section 5) in this experiment. Figure 15 shows the profiling results for training VGG-16 on 4 GPUs with different parallelism configurations. Note that DeePa with image parallelism achieves 10% higher training throughput compared to PyTorch and TensorFlow, as shown in Figure 14. Figure 15a shows that all GPUs are highly utilized during forward and backward passes, as indicated by the tight packing of tasks in the timeline. However, the image parallelism approach requires moving 4GB of metadata in every iteration, which cannot be fully overlapped with back propagation, therefore the image parallelism approach has a performance gap between iterations (shown as the white space on the GPU timelines). Figure 15b shows the profiling of the optimal parallelism configuration chosen by DeePa, which uses image parallelism on 4 GPUs for all convolutional layers and pooling layers and uses model parallelism on 2 GPUs for the fully connected layers. Therefore, the training with the optimal configuration includes data transfers for each fully connected layers, which adds small performance gaps at the end of the forward pass and the beginning of the backward pass (shown as the small white space on the GPU timelines). However, the optimal configuration reduces the per-iteration data transfers from 4GB to 490MB, which effectively hides data transfer overhead and achieves better GPU utilization. As a result, the optimal configuration reduces the per-iteration training time from 0.34 seconds to 0.24 seconds. A.4 IMANGENET-22K We compare the performance of DeePa, PyTorch, and TensorFlow on the ImageNet-22K dataset (Russakovsky et al., 2015) that contains 21,841 different categories (the ImageNet dataset used in Section 7 contains 1,000 catagories). The last fully-connected layer in AlexNet, VGG-16, and Inception-v3 originally have 1,000 neurons followed by a 1,000-way softmax layer. To train the three networks on the ImageNet-22K dataset, we change the last fully-connected layer to have 21,841 neurons and use a 21,841-way softmax layer at the end. The modified networks were trained on 16 Tesla K80 GPUs on a single node with a minibatch size of 512 images. Figure 16 compares the training throughput and per-iteration data transfers among DeePa, PyTorch, and TensorFlow on the ImageNet and ImageNet-22K datasets. 
Figure 16a shows that, on the ImageNet-22K dataset, the training throughput of PyTorch and TensorFlow is reduced by 20%- 45%, while DeePa’s throughput falls off by 3%, compared to training on the original ImageNet dataset. Figure 16b compares the per-iteration data transfers between image parallelism and the global configurations used by DeePa. Using image parallelism increases the data transfers in each iteration by 5-10GB, while DeePa only increases the per-iteration data transfers by 40MB. As a result, for training on the ImageNet-22K dataset, DeePa reduces the per-iteration data transfers by 3.7-44.5× compared to image parallelism.
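To make the node and edge eliminations of Section 4 and Appendix A.1 concrete, here is a minimal Python sketch of how the transitive edge produced by a node elimination could precompute its xfer() table by dynamic programming (Equation 2), and how two parallel edges could be merged (Equation 3). The function and variable names are ours; the real implementation lives inside DeePa's Legion-based runtime.

from typing import Callable, Dict, List, Tuple

def eliminate_node(configs: List, e1_xfer: Callable, w_compute: Callable,
                   w_update: Callable, e2_xfer: Callable):
    # Fold node w with edges (u, w) and (w, v) into a transitive edge (u, v),
    # remembering w's best configuration for every (cu, cv) pair so that the
    # elimination can later be undone in reverse order.
    table: Dict[Tuple, float] = {}
    best_cw: Dict[Tuple, object] = {}
    for cu in configs:
        for cv in configs:
            candidates = [(e1_xfer(cu, cw) + w_compute(cw) + w_update(cw)
                           + e2_xfer(cw, cv), cw) for cw in configs]
            table[(cu, cv)], best_cw[(cu, cv)] = min(candidates, key=lambda t: t[0])
    new_xfer = lambda cu, cv: table[(cu, cv)]
    return new_xfer, best_cw

def eliminate_edge(e1_xfer: Callable, e2_xfer: Callable):
    # Two parallel edges (u, v) merge into one edge whose transfer cost is the sum.
    return lambda cu, cv: e1_xfer(cu, cv) + e2_xfer(cu, cv)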
1. What is the main contribution of the paper regarding deep learning acceleration? 2. How does the proposed framework, DeePa, support multiple dimensions of parallelism in computation? 3. What are some potential improvements for the paper, particularly in exploring more demanding training workloads? 4. How does DeePa allow for increased training throughput and lower data transfer in practice? 5. Can you explain how DeePa differs from other deep learning frameworks like TensorFlow and PyTorch in terms of its ability to program parallelism configurations?
Review
Review The paper proposes a deep learning framework called DeePa that supports multiple dimensions of parallelism in computation to accelerate training of convolutional neural networks. Whereas the majority of work on parallel or distributed deep learning partitions training over bootstrap samples of training data (called image parallelism in the paper), DeePa is able to additionally partition the operations over image height, width and channel. This gives more options to parallelize different parts of the neural network. For example, the best DeePa configurations studied in the paper for AlexNet, VGG-16, and Inception-v3 typically use image parallelism for the initial layers, reduce GPU utilization for the deeper layers to reduce data transfer overhead, and use model parallelism on a smaller number of GPUs for fully connected layers. The net effect is that DeePa allows such configurations to be created that provide an increase in training throughput and lower data transfer in practice for training these networks. These configurations for parallelism are not easily programmed in other frameworks like TensorFlow and PyTorch. The paper can potentially be improved in a few ways. One is to explore more demanding training workloads that require larger-scale distribution and parallelism. The ImageNet-22K dataset would be a good example and would really highlight the benefits of DeePa in practice. Beyond that, more complex workloads like 3D CNNs for video modeling would also provide a strong motivation for having multiple dimensions of the data for partitioning operations.
ICLR
Title CUDA: Curriculum of Data Augmentation for Long-tailed Recognition Abstract Class imbalance problems frequently occur in real-world tasks, and conventional deep learning algorithms are well known for performance degradation on imbalanced training datasets. To mitigate this problem, many approaches have aimed to balance among given classes by re-weighting or re-sampling training samples. These re-balancing methods increase the impact of minority classes and reduce the influence of majority classes on the output of models. However, the extracted representations may be of poor quality owing to the limited number of minority samples. To handle this restriction, several methods have been developed that increase the representations of minority samples by leveraging the features of the majority samples. Despite extensive recent studies, no deep analysis has been conducted on which classes should be augmented and how strongly. In this study, we first investigate the correlation between the degree of augmentation and class-wise performance, and find that the proper degree of augmentation must be allocated for each class to mitigate class imbalance problems. Motivated by this finding, we propose a simple and efficient novel curriculum, which is designed to find the appropriate per-class strength of data augmentation, called CUDA: CUrriculum of Data Augmentation for long-tailed recognition. CUDA can simply be integrated into existing long-tailed recognition methods. We present the results of experiments showing that CUDA effectively achieves better generalization performance compared to the state-of-the-art method on various imbalanced datasets such as CIFAR-100-LT, ImageNet-LT, and iNaturalist 2018. (∗Two authors contribute equally. 1Code is available at Link.) 1 INTRODUCTION Deep neural networks (DNNs) have significantly improved over the past few decades on a wide range of tasks (He et al., 2017; Redmon & Farhadi, 2017; Qi et al., 2017). This effective performance is made possible by well-organized datasets such as MNIST (LeCun et al., 1998), CIFAR10/100 (Krizhevsky et al., 2009), and ImageNet (Russakovsky et al., 2015). However, as Van Horn et al. (2018) indicated, gathering such balanced datasets is notoriously difficult in real-world applications. In addition, the models perform poorly when trained on an improperly organized dataset, e.g., in cases with class imbalance, because minority samples can be ignored due to their small proportion. The simplest solution to the class imbalance problem is to prevent the model from ignoring minority classes. To improve generalization performance, many studies have aimed to emphasize minority classes or reduce the influence of the majority samples. Reweighting (Cao et al., 2019; Menon et al., 2021) or resampling (Buda et al., 2018; Van Hulse et al., 2007) are two representative methods that have been frequently applied to achieve this goal. (i) Reweighting techniques increase the weight of the training loss of the samples in the minority classes. (ii) Resampling techniques reconstruct a class-balanced training dataset by upsampling minority classes or downsampling majority classes. Although these elaborate rebalancing approaches have been adopted in some applications, limited information on minority classes due to fewer samples remains problematic. To address this issue, some works have attempted to spawn minority samples by leveraging the information of the minority samples themselves.
For example, Chawla et al. (2002); Ando & Huang (2017) proposed a method to generate interpolated minority samples. Recently (Kim et al., 2020; Chu et al., 2020; Park et al., 2022) suggested enriching the information of minority classes by transferring information gathered from majority classes to the minority ones. For example, Kim et al. (2020) generated a balanced training dataset by creating adversarial examples from the majority class to consider them as minority. Although many approaches have been proposed to utilize data augmentation methods to generate various information about minority samples, relatively few works have considered the influence of the degree of augmentation of different classes on class imbalance problems. In particular, few detailed observations have been conducted as to which classes should be augmented and how intensively. To this end, we first consider that controlling the strength of class-wise augmentation can provide another dimension to mitigate the class imbalance problem. In this paper, we use the number of augmentation operations and their magnitude to control the extent of the augmentation, which we refer to herein as its strength, e.g., a strength parameter of 2 means that two randomly sampled operations with a pre-defined magnitude index of 2 are used. Our key finding is that class-wise augmentation improves performance in the non-augmented classes while that for the augmented classes may not be significantly improved, and in some cases, performances may even decrease. As described in Figure 1, regardless of whether a given dataset is class imbalanced, conventional class imbalance methods show similar trends: when only the major classes are strongly augmented (e.g., strength 4), the performance of majority classes decreases, whereas that for the minority classes have better results. To explain this finding, we further find that strongly augmented classes get diversified feature representation, preventing the growth of the norm of a linear classifier for corresponding classes. As a result, the softmax outputs of the strongly augmented classes are reduced, and thus the accuracy of those classes decreases. It is described in Appendix A. This result motivates us to find the proper augmentation strength for each class to improve the performance for other classes while maintaining its own performance. Contribution. We propose a simple algorithm called CUrriculum of Data Augmentation (CUDA) to find the proper class-wise augmentation strength for long-tailed recognition. Based on our motivation, we have to increase the augmentation strength of majorities for the performance of minorities when the model successfully predicts the majorities. On the other hand, we have to lower the strength of majorities when the model makes wrong predictions about majorities. The proposed method consists of two modules, which compute a level-of-learning score for each class and leverage the score to determine the augmentation. Therefore, CUDA increases and decreases the augmentation strength of the class that was successfully and wrongly predicted by the trained model. To the best of our knowledge, this work is the first to suggest a class-wise augmentation method to find a proper augmentation strength for class imbalance problem. We empirically examine performance of CUDA on synthetically imbalanced datasets such as CIFAR100-LT (Cao et al., 2019), ImageNet-LT (Liu et al., 2019), and a real-world benchmark, iNaturalist 2018 (Van Horn et al., 2018). 
With the high compatibility of CUDA, we apply our framework to various long-tailed recognition methods and achieve better performance compared to the existing long-tailed recognition methods. Furthermore, we conduct an extensive exploratory analysis to obtain a better understanding of CUDA. The results of these analyses verify that CUDA exhibits two effects that mitigate class imbalance, including its balanced classifier and improved feature extractor. 2 RELATED WORKS Long-tailed Recognition (LTR). The datasets with class imbalances can lead DNNs to learn biases toward training data, and their performance may decrease significantly on the balanced test data. To improve the robustness of such models to imbalance, LTR methods have been evolving in two main directions: (1) reweighting (Cui et al., 2019; Cao et al., 2019; Park et al., 2021) methods that reweight the loss for each class by a factor inversely proportional to the number of data points, and (2) resampling methods (Kubat et al., 1997; Chawla et al., 2002; Ando & Huang, 2017) that balance the number of training samples for each class in the training set. However, studies along these lines commonly sacrifice performance on majority classes to enhance that on minority classes, because the overfitting problem occurs with limited information on minority classes as a result of increasing the weight of a small number of minority samples. Several methods have recently been developed to alleviate the overfitting issues in various categories: (1) two-stage training (Cao et al., 2019; Kang et al., 2020; Liu et al., 2019), (2) ensemble methods (Zhou et al., 2020a; Xiang et al., 2020; Wang et al., 2021; Cai et al., 2021), and (3) contrastive learning approach (Kang et al., 2021; Cui et al., 2021; Zhu et al., 2022; Li et al., 2022a;b). To re-balance the classifier layers after achieving a good representation on the imbalanced training dataset in an early phase, Cao et al. (2019) proposed deferred resampling (DRS) and reweighting (DRW) approaches. Kang et al. (2020) decoupled the learning procedure into representation learning and training linear classifier, achieved higher performance than previous balancing methods. Wang et al. (2021) and Cai et al. (2021) suggested efficient ensemble methods using multiple experts with a routing module and a shared architecture for experts to capture various representations. Liu et al. (2022) found that self-supervised representations are more robust to class imbalance than supervised representations, and some works have developed supervised contrastive learning methods (Khosla et al., 2020) for imbalanced datasets (Cui et al., 2021; Zhu et al., 2022; Li et al., 2022b). Another line of research has considered augmentation methods in terms of both input and feature spaces (Kim et al., 2020; Chu et al., 2020; Li et al., 2021). Recently, Park et al. (2022) mixed minority and majority images by using CutMix with different sampling strategies to enhance balancing and robustness simultaneously. These methods commonly focus on utilizing the rich context of majority samples to improve the diversity of minority samples. Zhou et al. (2022) proposed an augmentation-based contrastive learning method which boosts memorization of each samples for long-tailed learning. Moreover, these augmentation-based methods are relatively in easy to apply orthogonally with other LTR methods. Data Augmentation (DA). DA has been studied to mitigate overfitting which may occur due to a lack of data samples. 
Some works have been proposed to erase random parts of images to enhance the generalization performance of neural networks (DeVries & Taylor, 2017; Zhong et al., 2020; Kumar Singh & Jae Lee, 2017; Choe & Shim, 2019). Recently, variants of MixUp (Zhang et al., 2018) have been proposed; this method combines two images with specific weights (Tokozume et al., 2018; Guo et al., 2019; Takahashi et al., 2018; DeVries & Taylor, 2017; Verma et al., 2019). By aggregating two approaches, CutMix (Yun et al., 2019) was proposed to erase and replace a small rectangular part of an image into another image. In another line of research, methods have been proposed to automatically configure augmentation operations (Cubuk et al., 2019; Lim et al., 2019; Li et al., 2020b; Hataya et al., 2020; Gudovskiy et al., 2021). In addition, Cubuk et al. (2020) randomly selected augmentation operations using the given hyperparameters of the number of sampling augmentation and their magnitudes. Recently, class-wise or per-sample auto-augmentation methods have also been proposed (Cheung & Yeung, 2021; Rommel et al., 2022). 3 CURRICULUM OF DATA AUGMENTATION FOR LONG-TAILED RECOGNITION The core philosophy of CUDA is to “generate an augmented sample that becomes the most difficult sample without losing its original information.” In this section, we describe design of CUDA in terms of two parts: (1) a method to generate the augmented samples based on the given strength parameter, and (2) a method to measure a Level-of-Learning (LoL) score for each class. 3.1 PROBLEM FORMULATION OF LONG-TAILED RECOGNITION Suppose that the training dataset D = {(xi, yi)}Ni=1 is composed of images with size d, xi ∈ Rd, and their corresponding labels yi ∈ {1, ..., C}. Dc ⊂ D is a set of class c, i.e., Dc = {(x, y)|y = c, (x, y) ∈ D}. Without loss of generality, we assume |D1| ≥ |D2| ≥ · · · ≥ |DC |, where |D| denotes the cardinality of the set D. We denote the Nmax := |D1| and Nmin := |DC |. LTR algorithms, ALTR(fθ,D), mainly focus on training the model fθ with parameter θ when the class distribution of training dataset Ptrain(y) and test dataset Ptest(y) are not identical. More precisely, Ptrain(y) is highly imbalanced while Ptest(y) is balanced, i.e., uniform distribution. 3.2 CURRICULUM OF DATA AUGMENTATION In this section, we describe our proposed DA with strength parameter, and the methods used to measured the LoL score. Then, we integrate the two methods in a single framework to propose CUDA. DA with a strength parameter. Let us assume that there exist pre-defined K augmentation operations. We utilize visual augmentation operations which is indexed as k ∈ {1, · · · ,K}, e.g., Gaussian blur, Rotation, Horizontal flip. Each augmentation operation Omk(s)k : Rd → Rd has its own predefined augmentation magnitude function mk(s) where the strength parameter s ∈ {0, ..., S}. These operations are described in detail along with each magnitude functions in Appendix D. Given an augmentation strength parameter s and an input image x, we model a sequence of augmentation operations O(x; s) as follows: O(x; s) = Omks (s)ks ◦ O mks−1 (s) ks−1 ◦ · · · ◦ Omk1 (s)k1 (x), ki ∼ Cat(K,U(K)) ∀i = {1, . . . , s}, where, Cat(·) and U(·) denote categorical and discrete uniform distributions, respectively. The sequential augmentation operation O(x; s) samples s operations from the categorical distribution when the probability of seeing the operations follows uniform distribution. 
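A minimal Python sketch of the sequential augmentation operation O(x; s) defined above could look as follows; the operation table and magnitude functions are placeholders standing in for the pre-defined augmentations and their m_k(s) schedules, not the paper's exact set.

import random
from typing import Any, Callable, Dict

def sequential_augment(x: Any, strength: int,
                       operations: Dict[str, Callable[[Any, float], Any]],
                       magnitudes: Dict[str, Callable[[int], float]]) -> Any:
    # O(x; s): draw s operation indices uniformly at random (with replacement)
    # and apply each chosen operation at its magnitude m_k(s) for this strength.
    names = list(operations)
    for _ in range(strength):
        k = random.choice(names)
        x = operations[k](x, magnitudes[k](strength))
    return x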
As depicted on the left side of Figure 2, suppose that the randomly sampled augmentations k_1, k_2, and k_3 are brightness, X-shift, and Y-shift, respectively. Then, O(x; 3) outputs an image whose brightness is raised by m_bright(3) and which is shifted by m_x-shift(3) on the x-axis and by m_y-shift(3) on the y-axis.
Algorithm 1: CUrriculum of Data Augmentation
Input: LTR algorithm A_LTR(f, D), training dataset D = {(x_i, y_i)}_{i=1}^{N}, training epochs E, augmentation probability p_aug, threshold γ, number-of-samples coefficient T.
Output: trained model f_θ
Initialize: L_c^0 = 0 for all c ∈ {1, ..., C}
for e ≤ E do
    Update L_c^e = V_LoL(D_c, L_c^{e−1}, f_θ, γ, T) for all c    // Alg. 2
    Generate D_CUDA = {(x̄_i, y_i) | (x_i, y_i) ∈ D}, where x̄_i = O(x_i; L_{y_i}^{e}) with probability p_aug and x̄_i = x_i otherwise
    Run the LTR algorithm on D_CUDA, i.e., A_LTR(f_θ, D_CUDA)
end
Algorithm 2: V_LoL: Update LoL score
Input: D_c, L, f_θ, γ, T
Output: updated L
Initialize: check = 1
for l ≤ L do
    /* V_correct(D_c, l, f_θ, T) */
    Sample D′_c ⊂ D_c s.t. |D′_c| = T(l + 1)
    Compute v = Σ_{x ∈ D′_c} 1{f_θ(O(x; l)) = c}
    if v ≤ γT(l + 1) then check ← 0; break
end
if check = 1 then L ← L + 1 else L ← L − 1
Level-of-Learning (LoL). To control the strength of augmentation properly, we check whether the model can correctly predict augmented versions without losing the original information. To enable this, we define the LoL for each class c at epoch e, i.e., L_c^e, which is adaptively updated as training continues:
L_c^e = V_LoL(D_c, L_c^{e−1}, f_θ, γ, T),
where V_LoL(D_c, L_c^{e−1}, f_θ, γ, T) = L_c^{e−1} + 1 if V_Correct(D_c, l, f_θ, T) ≥ γT(l + 1) for all l ∈ {0, ..., L_c^{e−1}}, and L_c^{e−1} − 1 otherwise.
Here, γ ∈ [0, 1] is a threshold hyperparameter and T is a coefficient for the number of samples used to update the LoL. V_Correct is a function that outputs the number of examples, among T(l + 1) randomly augmented samples with strength l, that the model f_θ predicts correctly. V_Correct is defined as:
V_Correct(D_c, l, f_θ, T) = Σ_{x ∈ D′_c} 1{f_θ(O(x; l)) = c}, where D′_c ⊂ D_c.
Note that D′_c is a subset of D_c randomly sampled with replacement, and its size is T(l + 1). The key philosophy of this criterion is two-fold. (1) If the samples in class c have been trained sufficiently with an augmentation strength of L_c^e, the model is ready to learn a more difficult version with augmentation strength L_c^{e+1} ← L_c^e + 1. In contrast, if the model predicts incorrectly, it should re-learn the easier samples with an augmentation strength of L_c^{e+1} ← L_c^e − 1. (2) As the strength parameter increases, the number of candidates for the sequential augmentation operation O(x; L) increases exponentially; for example, the number of candidate sequences grows by K^L(K − 1) when L increases to L + 1. To control the LoL over this large space of sequential augmentation operations, we take more random samples to check as the strength parameter gets bigger. In our experiments, linearly increasing the number of evaluated samples with the strength was sufficient and added only a small computational overhead. V_LoL is described in Figure 2 and Algorithm 2. Curriculum of DA. By combining the two components, DA with a strength parameter and the LoL score, CUDA provides class-wise adaptive augmentation that enhances the performance of the other classes without losing each class's own information. As shown in Figure 2 and Algorithm 1, we measure the LoL score L_c for all classes in the training dataset to determine the augmentation strength for every epoch. Based on L_c, we generate the augmented version O(x; L_c) for x ∈ D_c and train the model with the augmented samples.
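For reference, a compact sketch of Algorithm 2 and the per-epoch loop of Algorithm 1 is shown below. The model interface (model.predict), the per-class data layout, the augment helper from the previous sketch, and the clamping of the score at zero are simplifying assumptions rather than the authors' released code.

```python
import random

def v_correct(samples, l, model, T):
    """V_Correct: correct predictions on T*(l+1) samples of one class, augmented at strength l."""
    subset = random.choices(samples, k=T * (l + 1))        # sampled with replacement
    return sum(model.predict(augment(x, l)) == y for x, y in subset)

def update_lol(samples, L_prev, model, gamma, T):
    """V_LoL: increase the score only if every strength l <= L_prev passes the check."""
    for l in range(L_prev + 1):
        if v_correct(samples, l, model, T) < gamma * T * (l + 1):
            return max(L_prev - 1, 0)                      # re-learn an easier version (clamped at zero here)
    return L_prev + 1                                      # ready for a harder version

def cuda_epoch(train_set_by_class, lol, model, gamma, T, p_aug):
    """One epoch of Algorithm 1: update all LoL scores, then build the augmented dataset."""
    for c, samples in train_set_by_class.items():
        lol[c] = update_lol(samples, lol[c], model, gamma, T)
    d_cuda = []
    for c, samples in train_set_by_class.items():
        for x, y in samples:
            x_bar = augment(x, lol[c]) if random.random() < p_aug else x
            d_cuda.append((x_bar, y))
    return d_cuda  # the base LTR algorithm is then trained on d_cuda
```

In an actual training loop, model.predict would run the current network in evaluation mode, and d_cuda would feed the base LTR objective unchanged.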
Additionally, we randomly use the original sample instead of the augmented sample with probability paug so that the trained models do not forget the original information. In our experiments, this operation improved performance robustly on a wide range of paug values. The results are provided in Section 4.3. Advantage of CUDA design. Our proposed approach mainly has three advantages. (1) CUDA adaptively finds proper augmentation strengths for each class without need for a validation set. (2) Following the spirits of existing curriculum learning methods (Hacohen & Weinshall, 2019; Zhou et al., 2020b; Wu et al., 2021), CUDA enables modeling by first presenting easier examples earlier during training to improve generalization. This encourages the model to learn difficult samples (i.e., within high augmentation strength) better. (3) Moreover, owing to the universality of data augmentation, CUDA is easily compatible with other LTR algorithms, such as (Cao et al., 2019; Ren et al., 2020; Wang et al., 2021). 4 EXPERIMENTS In this section, we present empirical evaluation, the results of which demonstrate the superior performance of our proposed algorithm for class imbalance. We first describe the long-tailed classification benchmarks and implementations in detail (Section 4.1). Then, we describe the experimental results on several synthetic (CIFAR-100-LT, ImageNet-LT) and real-world (iNaturalist 2018) long-tailed benchmark datasets in Section 4.2. Moreover, we conduct additional experiments to obtain a better understanding of CUDA, and this analysis is provided in Section 4.3. 4.1 EXPERIMENTAL SETUP Datasets. We evaluate CUDA on the most commonly used long-tailed image classification tasks: CIFAR-100-LT (Cao et al., 2019), ImageNet-LT (Liu et al., 2019), and iNaturalist 2018 (Van Horn et al., 2018). CIFAR-100-LT and ImageNet-LT are provided with imbalanced classes by synthetically sampling the training samples. CIFAR-100-LT is examined with various imbalance ratios {100, 50, 10}, where an imbalance ratio is defined as Nmax/Nmin. iNaturalist 2018 is a large-scale real-world dataset includes natural long-tailed imbalance. We utilize the officially provided datasets. Baselines. We compare CUDA with previous long-tailed learning algorithms , including cross-entropy loss (CE), two-stage approaches: CE-DRW (Cao et al., 2019) and cRT (Kang et al., 2020), balanced loss approaches: LDAM-DRW (Cao et al., 2019) and Balanced Softmax (BS; Ren et al. 2020), the ensemble method: RIDE with three experts (Wang et al., 2021), resampling algorithms: Remix (Chou et al., 2020) and CMO (Park et al., 2022), and contrastive learning-based approach: BCL (Zhu et al., 2022). We integrate CUDA with CE, CE-DRW, LDAM-DRW, BS, RIDE, and BCL algorithms. For longer epochs, we compare CUDA with PaCo (Cui et al., 2021), BCL, and NCL (Li et al., 2022a), by combining CUDA with BCL and NCL. For a fair comparison of the computational cost, we train the network with the official one-stage implementation of RIDE (i.e., without distillation and routing). Implementation. For CIFAR-100-LT dataset, almost all implementations follow the general setting from Cao et al. (2019), whereas cRT (Kang et al., 2020), BCL, NCL and RIDE follow the settings used in their original implementation. Following Cao et al. (2019), we use ResNet-32 (He et al., 2016) as a backbone network for CIFAR-100-LT. The network is trained on SGD with a momentum of 0.9 and a weight decay of 2× 10−4. 
The initial learning rate is 0.1 and a linear learning rate warm-up is used in the first 5 epochs to reach the initial learning rate. During training over 200 epochs, the learning rate is decayed at the 160th and 180th epochs by 0.01. For the ImageNet-LT and iNaturalist, the ResNet-50 is used as a backbone network and is trained for 100 epochs. The learning rate is decayed at the 60th and 80th epochs by 0.1. As with CIFAR, for cRT, RIDE, and BCL, we follow the original experimental settings of the official released code. For the hyperparameter values of CUDA, we apply a paug of 0.5 and T of 10 for all experiments. For γ, we set the values as 0.6 for CIFAR-100-LT and 0.4 for ImageNet-LT and iNaturalist 2018. The detailed implementation for baselines are in Appendix B. 4.2 EXPERIMENTAL RESULTS In this section, we report the performances of the methods compared on the CIFAR-100-LT, ImageNetLT, and iNaturalist 2018. We include four different categories of accuracy: all, many, med(ium), and few. Each represents the average accuracy of all samples, classes containing more than 100 samples, 20 to 100 samples, and under 20 samples, respectively. CIFAR-100-LT. In Table 1, we report the performance when CUDA is applied to the various algorithms: CE, CE-DRW (Cao et al., 2019), LDAM-DRW (Cao et al., 2019), BS (Ren et al., 2020), RIDE (Wang et al., 2021) with 3 experts, RIDE+CMO (Park et al., 2022), and BCL (Zhu et al., 2022). Compared to the cases without CUDA, balanced validation performance is increased when we apply the proposed approach. Recently, some works (Cui et al., 2021; Alshammari et al., 2022; Zhu et al., 2022; Li et al., 2022a) have shown impressive performances with diverse augmentation strategies and longer training epochs. For a fair comparison with these methods, we examine CUDA using the same experimental setups from PaCo (Cui et al. 2021; 400 epochs with batch size of 64). Table 3 shows that augmented images using CUDA can enhance LTR performance compared to the other baselines. In particular, CUDA with NCL obtains the best performance over 400 epochs. As noted by Li et al. (2022a), the NCL algorithm utilizes six times as much memory compared to the vanilla architecture with three experts. Hereinafter in large-scale benchmarks, we focus on the cases with similar network size. ImageNet-LT and iNaturalist 2018. To evaluate the performance of CUDA on larger datasets, we conduct experiments on ImageNet-LT (Liu et al., 2019) and iNaturalist 2018 (Van Horn et al., 2018). Table 2 summarizes the performance of various LTR methods and the performance gain when integrated with CUDA. Our proposed method consistently improves performance regardless of the LTR method and target dataset by simply adding class-wise data augmentation without complicated methodological modification. Additionally, to evaluate the performance gain of CUDA on other Epoch architectures, we experiment with CUDA on ImageNet-LT with ResNet-10 (Liu et al., 2019) and ResNeXt-50 (Xie et al., 2017), as reported in Appendix C. 4.3 ANALYSIS We design our analyses to answer the following questions. (1) How does CUDA perform? (2) Does CUDA perform better than other augmentation methods? (3) How does LoL score change over training epochs when combined with various LTR methods? (4) Which part of CUDA is important to improved performance? These analyses provide additional explanations to understand CUDA. All experiments are conducted on CIFAR-100-LT with imbalance ratio of 100. How does CUDA mitigate the class imbalance problem? 
To deeply understand CUDA, we observe two types of metrics: (1) the variance of the class-wise L1-norms of the linear classifier weights and (2) the feature alignment gain for each class (i.e., cosine similarity with and without CUDA) on the validation dataset. The classifier weight norm is commonly used to measure how balanced the model considers the input from a class-wise perspective (Kang et al., 2020; Alshammari et al., 2022). Feature alignment, especially the feature cosine similarity among samples belonging to the same class, measures the extent to which the extracted features are aligned (Oh et al., 2021). As shown in Figure 3, CUDA has two forces for alleviating imbalance. For all cases, CUDA reduces the variance of the weight norms (i.e., balances the weight norms), and thus the trained model considers the minority classes in a balanced manner. Note that because LDAM-DRW and RIDE utilize a cosine classifier (i.e., an L2-normalized linear weight), their standard deviation scale is quite different from that of the other methods. Because LDAM-DRW, BS, and RIDE include balancing logic in their loss functions, they exhibit a smaller variance reduction compared to CE and CE-DRW. Second, as shown in the bottom row of Figure 3, CUDA obtains feature alignment gains for almost all classes. This shows that CUDA helps the network learn to extract meaningful features. Compared with other augmentations. To verify the impact of CUDA, we examine other augmentation methods as follows. We compare five augmentation methods: AutoAugment (AA, Cubuk et al. 2019), Fast AutoAugment (FAA, Lim et al. 2019), DADA (Li et al., 2020b), RandAugment (RA, Cubuk et al. 2020), and the proposed method CUDA. Because AA, FAA, and DADA provide policies searched on CIFAR, SVHN (for AA), and ImageNet, we leverage their released policies. Furthermore, RA suggests the parameters (n, m) = (1, 2) for CIFAR, and we follow this guideline. As shown in Table 4, even though the automated augmentation methods use additional computational resources for searching, CUDA outperforms these pre-searched augmentations. This shows that CUDA is computationally efficient. Dynamics of LoL score. We evaluate how LoL scores vary across algorithms: CE, CE-DRW, LDAM-DRW, BS, and RIDE. Note that a lower class index (i.e., 0) denotes the most common class (i.e., 500 samples), while the largest index represents the rarest class (i.e., five samples). As shown in Figure 4, as training progresses, the LoL scores of all algorithms increase. After the learning rate decay (i.e., epoch 160), all algorithms are able to learn to classify minority classes more easily than before. In particular, except for BS, the majority classes of most algorithms show a steep increment. The reason that BS exhibits a similar increasing speed for majority and minority classes is that it includes a module to balance the impact of majority and minority samples. Furthermore, we find that CE-DRW and BS reach similar final average accuracy when CUDA is applied but show different LoL score dynamics. From the observation that CE-DRW has a higher performance gain for many-shot classes and a lower gain for few-shot classes than BS, we conclude that the LoL score of one category of classes is highly correlated with the performance of the opposite category. Parameter sensitivity. For further analysis, we conduct a sensitivity analysis of the hyperparameters in CUDA.
More precisely, we study three kinds of parameters: the augmentation probability p_aug (Figure 5a), the number of test samples T (Figure 5b), and the LoL update threshold γ (Figure 5c). We examine the sensitivity of each hyperparameter for CUDA with RIDE, with the remaining hyperparameters fixed to the default values in Section 4.1. All results show that the performance gain of CUDA decreases if the parameters are adjusted to make the augmentation too strong or too weak. For example, the augmentation strength of all classes increases steeply when γ becomes small. When γ becomes large, the strength cannot increase, and thus it cannot improve the performance of the model. Moreover, as shown in Figure 5b, the performance of CUDA increases as T increases. However, since a larger T incurs computational overhead, we set T to 10 and obtained a cost-effective performance gain. Impact of curriculum. In addition to studying the impact of CUDA, we examine its performance component-wise. In particular, we test the case where the class-wise augmentation strength is searched with a hyperparameter optimization algorithm. We check five cases overall: the baseline algorithm, hyperparameter optimization (HO), re-searched DADA for CIFAR-100-LT, CUDA without curriculum (i.e., re-training with the final augmentation strengths found by CUDA), and CUDA. We provide a detailed description of each method in Appendix E. As shown in Figure 5d, CUDA finds better augmentation strengths compared to the hyperparameter search case; that is, CUDA not only requires less search time but also obtains better augmentation strengths. Moreover, by comparing the performance with and without the curriculum, we see that the curriculum itself provides an additional benefit, helping the model achieve better generalization. Additionally, as in Figure 4, a lower augmentation strength at the beginning of training is more effective than a static, higher augmentation strength. These results are consistent with previous studies on curriculum learning methods (Zhou et al., 2020b). 5 CONCLUSION In this study, we proposed CUDA to address the class imbalance problem. The proposed approach is also compatible with existing methods. To design a proper augmentation for LTR, we first studied the impact of augmentation strength on LTR. We found that the augmentation strength for one type of class (e.g., majority classes) can affect the performance of the other type (e.g., minority classes). Based on this finding, we designed CUDA to adaptively find an appropriate augmentation strength without any further search phase, by measuring the LoL score at each epoch and determining the augmentation accordingly. To verify the superior performance of the proposed approach, we evaluated it with various methods on synthetically generated and real-world benchmarks and obtained the best performance among the compared methods. Furthermore, our analyses validated that CUDA enhances both classifier balance and feature extraction ability, which consistently improves performance for majority and minority classes. ACKNOWLEDGEMENT This work was supported by the Institute of Information & communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No. 2019-0-00075, Artificial Intelligence Graduate School Program (KAIST), 10%) and the Institute of Information & communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No.
2022-0-00871, Development of AI Autonomy and Knowledge Enhancement for AI Agent Collaboration, 90%). Appendix CUDA: Curriculum of Data Augmentation for Long-tailed Recognition Owing to the page limit of the main manuscript, we provide detailed information in this supplementary material as follows. (1) In Appendix A, we summarize the experimental setup of Figure 1 and further explain why augmentation on one side causes performance degradation on the opposite side. (2) In Appendix B, we describe our experimental setting in detail, including the dataset configuration, data preprocessing, and training implementation. (3) In Appendix C, we report ImageNet-LT performance with networks of different sizes and architectures, a training time analysis, and accuracy in the balanced dataset case. (4) In Appendix D, we present in detail the augmentation operations that CUDA utilizes. (5) In Appendix E, we describe the experimental setting of Figure 5d. A DETAIL FOR FIGURE 1 A.1 EXPERIMENTAL SETTINGS Major and minor group decomposition. To check the impact of augmentation on majority and minority classes, we split the training dataset into two clusters. The majority cluster contains the top 50 classes sorted by the number of samples per class; the bottom 50 classes form the minority cluster. For simplicity, we use class indices 0 to 49 as the majority and 50 to 99 as the minority, respectively. For the balanced case, we use classes 0 to 49 as cluster 1 and the others as cluster 2. Controlling augmentation strength. We set the augmentation strength as the number of augmentations and their magnitudes, following the augmentation rule of CUDA. For example, samples in the majority classes with strength parameter 4 are augmented with 4 randomly sampled augmentations, each with its own pre-defined magnitude. Training setting. For the heatmaps in Figure 1, we follow the CIFAR-100-LT training recipe for the CE case, e.g., ResNet-32 and a learning rate of 0.1. Further details, hyperparameters, and datasets are described in Section 4 and Appendix B. Figure 6: Analysis on Balanced CIFAR-100 (rows: train cosine similarity, test cosine similarity, classifier weight norm; columns: without, partial, and all augmentation). Figure 7: Analysis on CIFAR-100-LT (IR 100), with the same layout as Figure 6. A.2 ANALYSIS Analysis for Figure 1. To figure out the reason for the phenomena in Figure 1, we conduct a further analysis, shown in Figure 6 and Figure 7. Our experimental setups are as follows: • Train the networks with the three augmentation strategies (without, partial, and all), then measure the class-wise feature alignment and linear classifier weight norm for all networks.
(Experiment 1) • From a trained network without augmentation in Experiment 1, we freeze the feature extractor and train the linear classifier layer with augmenting partial classes. Then, we measure the class-wise L1-norm for all linear classifiers. (Experiment 2) From the Figure 6 and Figure 7, we have three observations from Experiment 1: 1. When we conduct augmentation only for partial classes (0-49 classes), the feature alignment for augmented classes of the training dataset is degraded compared to the non-augmented classes. This is because the augmentation classes have more diversified training data than non-augmentation classes, which leads to more diversification in feature space. We observe the balance between alignment between classes in the cases of without augmentation and with all augmentation since all classes have similar diversity. (See the first rows in Figure 6, 7) 2. However, all three augmentation strategies have balanced class-wise feature alignment for the same test dataset. This tendency can be observed in both balanced and imbalanced datasets. This result is consistent with Kang et al. (2020). Furthermore, the values for feature alignment are increased when we conduct augmentation partially or all, compared to without augmentation. This result shows that augmentation enhances the feature extraction ability, which is consistent with conventional studies. (See the second rows in Figure 6, 7) 3. When we conduct augmentation only for partial classes on a balanced dataset, the class-wise weight norm of the linear classifier is larger for non-augmentation classes. This result incurs performance improvement for non-augmentation classes and reduction for augmentation classes since this linear classifier has a tendency to classify non-augmented classes with larger weight values. However, we observe that class-wise weight norms are balanced in “without augmentation” and “all augmentation” cases. (See the third row in Figure 6) 4. We observe that the class-wise weight norm of the linear classifier is larger for majorities for all classes that have the same augmentation strength. These results are consistent with previous works (Kang et al., 2020; Alshammari et al., 2022). However, when we conduct augmentation only for majorities, the class-wise weight norm is more balanced. This phenomenon is similar to the balanced case in that partial augmentation incurs a reduction in the norm of the linear classifier for augmented classes. (See the third row in Figure 7) Our observations from Experiment 1 are highly consistent in both balanced and imbalanced datasets. The results in Figure 1, Figure 6 and Figure 7 highly motivate the design of CUDA. Moreover, our results for Experiment 2 can explain these observations as shown in Figure 8 and Figure 9. We observe that in the presence of feature alignment degradation from augmentation, the corresponding norm is relatively small, as shown in Figure 8. This is because in the class that has lower feature alignment, the variation of the gradient for the linear classifier is larger than in the class with high feature alignment. As shown in Figure 9, from Experiment 2, we observe that ∥∆w∥, the norm of class-wise difference of between current and initialize linear classifier parameters ∆w := w −w0, have smaller value in augmented classes than non-augmented classes. 
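As a reference for how these class-wise diagnostics could be computed, a hedged PyTorch-style sketch follows; the tensor layouts, the choice of the L2 norm for ‖Δw‖, and the fc attribute name are assumptions for illustration, not the authors' code.

```python
import torch
import torch.nn.functional as F

def classifier_weight_norms(linear_weight: torch.Tensor) -> torch.Tensor:
    """Per-class L1 norm of the linear classifier weights; input shape (C, d), output shape (C,)."""
    return linear_weight.abs().sum(dim=1)

def per_class_cosine_alignment(feats: torch.Tensor, labels: torch.Tensor, num_classes: int):
    """Mean pairwise cosine similarity among features of the same class."""
    feats = F.normalize(feats, dim=1)
    scores = []
    for c in range(num_classes):
        f = feats[labels == c]
        sim = f @ f.t()                                    # (n_c, n_c) cosine-similarity matrix
        n = f.size(0)
        off_diag = (sim.sum() - n) / max(n * (n - 1), 1)   # exclude the self-similarity diagonal
        scores.append(off_diag.item())
    return scores

# Example diagnostics in the spirit of Experiments 1 and 2 (assuming a model with a `fc` layer):
# weight_norm_var = classifier_weight_norms(model.fc.weight).var()
# delta_w_norm    = (model.fc.weight - initial_fc_weight).norm(dim=1)   # per-class ||Δw||
```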
From our experimental analysis in Figure 6, 7, and 9, we can conclude that augmentation breaks the consistency of feature alignment and it makes the weight norm of the linear classifier decreases. B IMPLEMENTATION DETAIL IN SECTION 4 B.1 DATASET DESCRIPTION CIFAR-100-LT. CIFAR-100-LT is a subset of CIFAR-100. Following Wang et al. (2021); Park et al. (2022); Zhu et al. (2022), we use the same long-tailed version for a fair comparison. The number of samples of kth class is determined as follows: (1) Compute the imbalanced factor Nmax/Nmin, which reflects the degree of imbalance in the data. (2) |Dk| between |D1| = Nmax and |D100| = Nmin follows an exponential decay (i.e., |Dk| = |D1| × (Nmax/Nmin)k/100). The imbalance factors used in the experiment are set to 100, 50, and 10. ImageNet-LT. ImageNet-LT (Liu et al., 2019) is a modified version of the large-scale real-world dataset (Russakovsky et al., 2015). Subsampling is conducted by following the Pareto distribution with power value α = 0.6. It consists of 115.8K images of 1, 000 classes in total. The most common or rare class has 1, 280 or 5 images, respectively. iNaturalist 2018. iNaturalist (Van Horn et al., 2018) is a large-scale real-world dataset which consists of 437.5K images from 8, 142 classes. It has long-tailed property by nature, with an extremely class imbalanced. In addition to long-tailed recognition, this dataset is also used for evaluating the finegrained classification task. B.2 DATA PREPROCESSING For data preprocessing, we follow the default settings of Cao et al. (2019). For CIFAR-100-LT, each side of the image is padded with 4 pixels, and a 32× 32 crop is randomly selected from the padded image or its horizontal flip. For ImageNet-LT and iNaturalist 2018, after resizing each image by setting the shorter side to 256 pixels, a 224 × 224 crop is randomly sampled from an image or its horizontal flip. For BCL and NCL, which use AutoAugment (Cubuk et al., 2019) or RandAugment (Cubuk et al., 2020) as default data augmentation, we apply them after random cropping by following their original papers (Zhu et al., 2022; Li et al., 2022a). Then, we finally conduct CUDA after all default augmentation operations, and then normalize the image with following mean and standard deviation values sequentially: CIFAR-100-LT ((0.4914, 0.4822, 0.4465), (0.2023, 0.1994, 0.2010)), ImageNetLT ((0.485, 0.456, 0.406), (0.229, 0.224, 0.225)), and iNaturalist 2019 ((0.466, 0.471, 0.380), (0.195, 0.194, 0.192)). B.3 DETAILED IMPLEMENTATION Because some official codes do not open their entire implementations, we re-implement by following the rules. For re-implementation, we reproduce the code based on their partial code and the authors’ responses. RIDE. We follow the officially offered code2. Among various experimental configurations of official code (e.g., one-stage RIDE, RIDE-EA, Distill-RIDE), for fair comparison (to leverage similar computation resources), we utilize one-stage training (i.e., one-stage RIDE) for all cases. We confirm that CMO (Park et al., 2022) also utilizes this setup for RIDE + CMO from the response of the authors. CMO. We re-implement all CMO results from their official code3 in our work. However, the official code of CMO does not contain code for RIDE + CMO. Therefore, we re-implement by injecting the CMO part for BS in the official code (weighted sampler and mixup part) into the RIDE code. 
Furthermore, for iNaturalist 2018, we train the model for 100 epochs for a fair comparison with the other methods (whereas the original RIDE + CMO is trained for 200 epochs on iNaturalist 2018). BCL. The officially released code4 of BCL only contains ImageNet-LT and iNaturalist 2018. Whereas the official code applies a cosine classifier for ImageNet-LT and iNaturalist 2018, we apply an ordinary linear classifier for CIFAR-100-LT, following the authors' response. All hyperparameters are the same as the experimental settings of the original work (Zhu et al., 2022).
2 https://github.com/frank-xwang/RIDE-LongTailRecognition
3 https://github.com/naver-ai/cmo
4 https://github.com/FlamieZhu/Balanced-Contrastive-Learning
B.4 GUIDELINE FOR HYPER-PARAMETER TUNING Although we did not tune the hyper-parameters extensively, we provide a guideline for selecting them. The number of samples for updating LoL (T). This value can be set according to the available computing resources (i.e., the largest T under the computing resource constraint). This is because performance improves as T increases, since testing more samples yields a more reliable LoL score. The acceptance threshold (γ). Our strategy for tuning γ is to select the largest value for which at least one LoL score among all classes increases within 20 epochs. This is because, on large-scale datasets, the network fails to correctly predict even the easier-to-learn majority classes. The detailed tuning strategy for γ is as follows. • We initially set γ to 0.6. • We decrease the threshold γ by 0.1 whenever no LoL score rises during the first 20 training epochs. We conduct this search on CE with CIFAR-100-LT at IR 100 and use the resulting γ for the other algorithms and the remaining IR settings. Similarly, we conduct this search rule on ImageNet-LT with CE and use the same value for the other large-scale dataset, i.e., iNaturalist 2018, and the remaining algorithms. The augmentation probability (p_aug). While we did not tune this hyper-parameter, we offer a guideline for tuning it based on Figure 5a. As shown in Figure 5a, the curve of performance against p_aug is concave. Thanks to this concavity, we expect that it is easy to find the optimal value for this hyper-parameter. The reason for the concavity is that the choice of p_aug trades off preserving the information of the original image against exploring diversified images. Further sensitivity analysis on ImageNet-LT. In Section 4, we apply different values of γ for CIFAR-100-LT (0.6) and the large-scale datasets (0.4; ImageNet-LT and iNaturalist 2018). In addition to Figure 5, we conduct a further sensitivity analysis for γ on ImageNet-LT to verify that CUDA works robustly with different values of γ on large-scale datasets. As shown in Table 5, our proposed method CUDA is robust to the selection of γ not only on small datasets such as CIFAR-100-LT but also on large-scale datasets. C FURTHER ANALYSES Training Time Analysis. CUDA requires additional computation to compute the LoL score. We measure the additional training time incurred by adding CUDA to various algorithms. As shown in Figure 11, utilizing CUDA adds some training time; however, the overhead of searching the LoL score is not large. For example, BS with CUDA takes ×1.29 the training time to obtain adequate augmentation strengths. Network Architecture Analysis.
We also present experiments with ResNet-10 (Liu et al., 2019) and ResNeXt-50 (Xie et al., 2017) on the ImageNet-LT dataset in Figure 10. These results show that CUDA consistently improves performance regardless of network size and the corresponding LTR method. Figure 10: Accuracy (%) on ImageNet-LT with ResNet-10 and ResNeXt-50 backbones (CE, CD, LD, and BS, with and without CUDA). Figure 11: Training time (min.) of CE, LDAM, BS, RIDE, and BCL with and without CUDA. What if CUDA is run on a balanced dataset? We examine the case where CUDA is applied to the balanced setting, i.e., an imbalance ratio of 1. As described in Table 6, CUDA obtains a 1.9% accuracy gain, which is lower than that of the other auto-augmentation methods. However, the other auto-augmentation methods spend more computation time searching for a good augmentation than CUDA. Furthermore, as described in Figure 4, CUDA has higher performance than the others when a class-imbalanced dataset is given. D AUGMENTATION PRESET D.1 DATA AUGMENTATION OPERATIONS USED IN CUDA. Numerous data augmentation operations have been used in vision tasks. We use a total of 22 augmentations for CUDA, each with its own parameter set. Details of the operation set and parameters are described in Table 7. For the augmentation magnitude parameter m_k(s), we divide each parameter range linearly into thirty values. For example, in the ShearX case, the max and min values are 0.3 and 0, respectively. Therefore, m_ShearX(s) = (0.3 − 0)/30 · s, and thus m_ShearX(1) = 0.01 = (0.3 − 0)/30 · 1. D.2 FURTHER ANALYSIS ON AUGMENTATION PRESET To gain further intuition on the effect of the number of predefined augmentation operations, we conduct several exploratory experiments. Validity of our main finding (Figure 1) under a few predefined augmentations. The observation in Figure 1 is caused by the minority classes becoming relatively easy to learn as the majority classes become more difficult. Therefore, if the majority samples become difficult enough to learn, the same phenomenon as in Figure 1 occurs regardless of the number of augmentation presets. To verify that our main finding is valid regardless of the number of predefined augmentations, we conduct the experiment with ten augmentation operations (Mirror, ShearX, Invert, Smooth, ResizeCrop, Color, Brightness, Sharpness, Rotate, AutoContrast). Table 8 reports the performance of the configurations (0, 0), (0, 4), (4, 0), and (4, 4), where each configuration denotes the augmentation strengths of (majority; top 50 classes, minority; bottom 50 classes). The results verify that the finding in Figure 1 holds even with a small number of predefined augmentation operations. Effect of the number of predefined augmentations.
We further analyze the impact of the predefined augmentation operations (K in Figure 2); we additionally experiment by replacing the augmentation preset in Appendix D with the following two augmentation presets: (1) 10 randomly sampled augmentations (Mirror, ShearX, Invert, Smooth, ResizeCrop, Color, Brightness, Sharpness, Rotate, AutoContrast) and (2) the RandAugment (Cubuk et al., 2020) preset, which consists of (AutoContrast, Equalize, Invert, Rotate, Posterize, Solarize, SolarizeAdd, Color, Contrast, Brightness, Sharpness, ShearX, ShearY, CutoutAbs, TranslateXabs, TranslateYabs). Table 9 demonstrates that the accuracy increases slightly as the size of the augmentation preset increases. However, the gap between the RandAugment preset (14 operations) and our original preset (22 operations) is small compared to the gap between the vanilla case (without CUDA) and the RandAugment case. These results support our belief that the impact of the number of predefined augmentations is small. Effect of randomly ordered data augmentation. Our proposed CUDA applies the selected augmentations in a random order based on the DA strength. To study the impact of this random ordering, we compare CUDA with a variant of CUDA that uses a fixed order of augmentations. For example, when the operation indices (6, 3, 5) among the 22 augmentations are sampled, they are applied in the order (3, 5, 6). Table 10 shows only small performance differences between the two methods. Thus, we believe that the effect of the augmentation order on the difficulty is negligible. This is because the effectiveness of CUDA is expected to remain high for any given order of augmentations, since the goal is simply to make samples harder to learn, regardless of whether the order is fixed or random. Comparison with random augmentation. To verify that the success of CUDA does not simply come from a richer dataset produced by DA, we compare our proposed method CUDA to randomly sampled augmentations at every iteration. The comparison methods are Random 5 and Random 10, which apply five and ten randomly sampled augmentations at every iteration, respectively. As shown in Table 11, while Random 10 generates the most diversified images, the network trained with it shows the worst performance, even lower than the vanilla baseline. Our CUDA achieves the best performance among all methods.
Table 7: The 22 data augmentation operations used in CUDA (operation | parameter range | description).
Flip | On/Off | Flip top and bottom
Mirror | On/Off | Flip left and right
Edge Enhancement | On/Off | Increase the contrast of the pixels around the targeted edges
Detail | On/Off | Apply the convolutional kernel [[0, −1, 0], [−1, 10, −1], [0, −1, 0]]
Smooth | On/Off | Apply the convolutional kernel [[1, 1, 1], [1, 5, 1], [1, 1, 1]]
AutoContrast | On/Off | Remove a specific percentage of the lightest and darkest pixels
Equalize | On/Off | Apply a non-linear mapping to produce a uniform distribution
Invert | On/Off | Negate the image
Gaussian Blur | [0, 2] | Blur the image using a Gaussian function
Resize Crop | [1, 1.3] | Resize and randomly center-crop
Rotate | [0, 30] | Rotate the image
Posterize | [0, 4] | Reduce the number of bits for each channel
Solarize | [0, 256] | Invert all pixel values above a threshold
SolarizeAdd | [0, 110] | Add a value and then solarize
Color | [0.1, 1.9] | Colorize gray-scale values
Contrast | [0.1, 1.9] | Adjust the distance between colors
Brightness | [0.1, 1.9] | Adjust image brightness
Sharpness | [0.1, 1.9] | Adjust image sharpness
Shear X | [0, 0.3] | Shear along the x-axis
Shear Y | [0, 0.3] | Shear along the y-axis
Translate X | [0, 100] | Shift along the x-axis
Translate Y | [0, 100] | Shift along the y-axis
E EXPERIMENTAL SETTING OF FIGURE 5D To further analyze the impact of curriculum, we compare CUDA with the performance of previous hyper-parameter search algorithms and auto-augmentation methods, especially DADA (Li et al., 2020b). We describe each setting in detail as follows. Baseline. This is the case of training with standard data augmentation that consists of random cropping and probabilistic horizontal flip. Hyper-parameter search. We utilize the strength score-based augmentation module in CUDA to verify the hyper-parameter search. In other words, samples in each class utilize K augmentation operations. Therefore, we search the class-wise augmentation on the search space KN where N is the number of classes. We leverage the hyper-parameter searching open-source library, Ray (Liaw et al., 2018), for search KN space efficiently. Among various search modules, we utilize the HyperOptSearch module, which is the implementation of the Tree-structured Parzen Estimator (Bergstra et al., 2013). Moreover, for fast search, we use the Asynchronous Successive Halving Algorithm (ASHA) (Li et al., 2020a). We run 1, 000 trials for each algorithms which spends almost 20 GPU hours (i.e., ×80 overhead compare to CUDA). Researched DADA operation on imbalanced CIFAR. Because the officially offered policies on CIFAR by Li et al. (2020b) are searched for a balanced CIFAR dataset, we have to re-search the augmentation policy for the imbalanced dataset. We utilize the official code of DADA and replace the dataloader to re-search the operations. It spends 48 minutes for searching the augmentation policy (×8.6 than the overhead of CUDA). Despite this additional overhead, DADA outputs worse performance than CUDA (even CUDA without curriculum case). This is because (1) DADA does not consider class-wise augmentation and (2) it does not consider the impact of class imbalance. CUDA without curriculum To verify the impact of curriculum itself, we ran the following steps. (1) We conduct experiments with CUDA and get the strength of data augmentation for each class at the final epoch. (2) We re-train the network from scratch by using the strength parameter obtained from (1). F FURTHER ANALYSES To get better understanding, we conduct several analyses for our proposed method, CUDA. F.1 FURTHER ANALYSIS ON LOL SCORE In this section, we conduct experimental ablation studies to understand the performance gain of our proposed method, CUDA. Suitability of LoL score as metric for class-wise difficulty. The superiority of LoL score is to measure the difficulty metric based on the augmentation strength for each class, which is motivated by our main findings. To verify the suitability of LoL score as a metric for class-wise difficulty, we compared CUDA and the case where LoL score is replaced by the score in Sinha et al. (2022). As same with our proposed method, we increase the strength parameter when the score in Sinha et al. (2022) is larger than the same threshold γ = 0.6. Table 12 summarizes the results that our LoL score showed performance improvement compared to the case of Sinha et al. (2022). From the results, we can conclude that this improvement comes from the characteristic of LoL score that is directly related to augmentation strength. Effect of random sampling for computing LoL score To implement the computation of LoL score efficiently, we randomly selected the instances for each class. 
The reason for using random sampling to compute V_Correct is that we want to measure how well the model has learned the entire information of each class. To understand the effect of random sampling, we compare our random sampling method to sampling instances with larger (or smaller) losses. Table 13 compares the performance of these sampling strategies. As the results show, if CUDA measures the degree of learning using only easy samples (those with small losses), it increases the augmentation strength too quickly, which degrades performance. Therefore, uniform random sampling is a better way to grasp the degree of learning for each class without bias. Furthermore, computing the loss for all samples in order to sort them at the beginning of each epoch requires ×1.5 the computational overhead of our method. Numerical values of LoL score dynamics. We provide the numerical values for Figure 4, that is, the average LoL scores (over every 20 epochs) for the classes with indices 1-10 and the classes with indices 91-100. From these numerical values, we can easily follow the explanation discussed in Section 4. F.2 ANALYSIS OF THE CASE WITHOUT CLASS-WISE AUGMENTATION To examine the validity of the class-wise augmentation of CUDA, we apply CUDA with the same DA strength for all classes. Instead of computing the LoL score class-wise, we compute only one LoL score for the entire dataset by uniformly sampling instances from the training dataset at random, regardless of class. Table 15 shows the significant performance degradation of CUDA without class-wise augmentation compared to CUDA. This is because, without class-wise augmentation, we cannot allocate the appropriate augmentation strength to each class.
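For concreteness, the class-agnostic variant described above could be sketched as below, reusing the update_lol and augment helpers from the earlier sketches; the per-class data layout is again an illustrative assumption rather than the authors' implementation.

```python
import random

def cuda_epoch_global(train_set_by_class, lol_global, model, gamma, T, p_aug):
    """Ablation: one dataset-wide LoL score, so every class shares the same strength."""
    pooled = [ex for samples in train_set_by_class.values() for ex in samples]
    lol_global = update_lol(pooled, lol_global, model, gamma, T)   # class-agnostic update
    d_aug = []
    for samples in train_set_by_class.values():
        for x, y in samples:
            x_bar = augment(x, lol_global) if random.random() < p_aug else x
            d_aug.append((x_bar, y))
    return d_aug, lol_global
```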
1. What is the focus and contribution of the paper on long-tailed recognition? 2. What are the strengths of the proposed approach, particularly regarding its insight and strategy? 3. What are the weaknesses or concerns regarding the paper's assumptions and experimental procedures? 4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper The paper proposes an auxiliary data augmentation technique that can be used on top of several long-tailed recognition methods. The main insight of the paper is that class-wise augmentation actually improves the performance of non-augmented classes. This is an important as well as counter-intuitive observation. The authors devise a data augmentation strategy that relies on the `strength' of augmentation, which is calculated using level-of-learning. Strengths And Weaknesses Pros: The main observation of the paper- "class-wise augmentation actually improves the performance of non-augmented classes" is very crucial and important. The data augmentation strategy devised by the authors also seems clever and logical. However, there are a few things that are concerning (see concerns). Concerns: The way DA with strength parameter is devised is concerning because it assumes that the sequence of augmentation is permutation invariant and preserves the identity of the original sample. This need not be the case with many augmentations. From Fig. 5c it seems that the method is sensitive to threshold / accept rate \gamma, in experiments also, the authors choose 2 different values of \gamma (0.4 and 0.6). What procedure do the authors follow to decide this value? Is there any strategy one can follow to come up with this value? In Table 2, the improvement with CUDA looks very incremental. For such close values it is suggested that the authors report the scores in the form of mean \pm std. dev to give a more clear view of how close the scores actually are. The authors show that the variance of L1-norm of a linear classifier decreases due to CUDA. In general, any data-augmentation strategy should decrease the variance because of an increase in the data samples. Clarity, Quality, Novelty And Reproducibility Please see the comments in the previous section.
With the high compatibility of CUDA, we apply our framework to various long-tailed recognition methods and achieve better performance compared to the existing long-tailed recognition methods. Furthermore, we conduct an extensive exploratory analysis to obtain a better understanding of CUDA. The results of these analyses verify that CUDA exhibits two effects that mitigate class imbalance, including its balanced classifier and improved feature extractor. 2 RELATED WORKS Long-tailed Recognition (LTR). The datasets with class imbalances can lead DNNs to learn biases toward training data, and their performance may decrease significantly on the balanced test data. To improve the robustness of such models to imbalance, LTR methods have been evolving in two main directions: (1) reweighting (Cui et al., 2019; Cao et al., 2019; Park et al., 2021) methods that reweight the loss for each class by a factor inversely proportional to the number of data points, and (2) resampling methods (Kubat et al., 1997; Chawla et al., 2002; Ando & Huang, 2017) that balance the number of training samples for each class in the training set. However, studies along these lines commonly sacrifice performance on majority classes to enhance that on minority classes, because the overfitting problem occurs with limited information on minority classes as a result of increasing the weight of a small number of minority samples. Several methods have recently been developed to alleviate the overfitting issues in various categories: (1) two-stage training (Cao et al., 2019; Kang et al., 2020; Liu et al., 2019), (2) ensemble methods (Zhou et al., 2020a; Xiang et al., 2020; Wang et al., 2021; Cai et al., 2021), and (3) contrastive learning approach (Kang et al., 2021; Cui et al., 2021; Zhu et al., 2022; Li et al., 2022a;b). To re-balance the classifier layers after achieving a good representation on the imbalanced training dataset in an early phase, Cao et al. (2019) proposed deferred resampling (DRS) and reweighting (DRW) approaches. Kang et al. (2020) decoupled the learning procedure into representation learning and training linear classifier, achieved higher performance than previous balancing methods. Wang et al. (2021) and Cai et al. (2021) suggested efficient ensemble methods using multiple experts with a routing module and a shared architecture for experts to capture various representations. Liu et al. (2022) found that self-supervised representations are more robust to class imbalance than supervised representations, and some works have developed supervised contrastive learning methods (Khosla et al., 2020) for imbalanced datasets (Cui et al., 2021; Zhu et al., 2022; Li et al., 2022b). Another line of research has considered augmentation methods in terms of both input and feature spaces (Kim et al., 2020; Chu et al., 2020; Li et al., 2021). Recently, Park et al. (2022) mixed minority and majority images by using CutMix with different sampling strategies to enhance balancing and robustness simultaneously. These methods commonly focus on utilizing the rich context of majority samples to improve the diversity of minority samples. Zhou et al. (2022) proposed an augmentation-based contrastive learning method which boosts memorization of each samples for long-tailed learning. Moreover, these augmentation-based methods are relatively in easy to apply orthogonally with other LTR methods. Data Augmentation (DA). DA has been studied to mitigate overfitting which may occur due to a lack of data samples. 
Some works have been proposed to erase random parts of images to enhance the generalization performance of neural networks (DeVries & Taylor, 2017; Zhong et al., 2020; Kumar Singh & Jae Lee, 2017; Choe & Shim, 2019). Recently, variants of MixUp (Zhang et al., 2018) have been proposed; this method combines two images with specific weights (Tokozume et al., 2018; Guo et al., 2019; Takahashi et al., 2018; DeVries & Taylor, 2017; Verma et al., 2019). By aggregating two approaches, CutMix (Yun et al., 2019) was proposed to erase and replace a small rectangular part of an image into another image. In another line of research, methods have been proposed to automatically configure augmentation operations (Cubuk et al., 2019; Lim et al., 2019; Li et al., 2020b; Hataya et al., 2020; Gudovskiy et al., 2021). In addition, Cubuk et al. (2020) randomly selected augmentation operations using the given hyperparameters of the number of sampling augmentation and their magnitudes. Recently, class-wise or per-sample auto-augmentation methods have also been proposed (Cheung & Yeung, 2021; Rommel et al., 2022). 3 CURRICULUM OF DATA AUGMENTATION FOR LONG-TAILED RECOGNITION The core philosophy of CUDA is to “generate an augmented sample that becomes the most difficult sample without losing its original information.” In this section, we describe design of CUDA in terms of two parts: (1) a method to generate the augmented samples based on the given strength parameter, and (2) a method to measure a Level-of-Learning (LoL) score for each class. 3.1 PROBLEM FORMULATION OF LONG-TAILED RECOGNITION Suppose that the training dataset D = {(xi, yi)}Ni=1 is composed of images with size d, xi ∈ Rd, and their corresponding labels yi ∈ {1, ..., C}. Dc ⊂ D is a set of class c, i.e., Dc = {(x, y)|y = c, (x, y) ∈ D}. Without loss of generality, we assume |D1| ≥ |D2| ≥ · · · ≥ |DC |, where |D| denotes the cardinality of the set D. We denote the Nmax := |D1| and Nmin := |DC |. LTR algorithms, ALTR(fθ,D), mainly focus on training the model fθ with parameter θ when the class distribution of training dataset Ptrain(y) and test dataset Ptest(y) are not identical. More precisely, Ptrain(y) is highly imbalanced while Ptest(y) is balanced, i.e., uniform distribution. 3.2 CURRICULUM OF DATA AUGMENTATION In this section, we describe our proposed DA with strength parameter, and the methods used to measured the LoL score. Then, we integrate the two methods in a single framework to propose CUDA. DA with a strength parameter. Let us assume that there exist pre-defined K augmentation operations. We utilize visual augmentation operations which is indexed as k ∈ {1, · · · ,K}, e.g., Gaussian blur, Rotation, Horizontal flip. Each augmentation operation Omk(s)k : Rd → Rd has its own predefined augmentation magnitude function mk(s) where the strength parameter s ∈ {0, ..., S}. These operations are described in detail along with each magnitude functions in Appendix D. Given an augmentation strength parameter s and an input image x, we model a sequence of augmentation operations O(x; s) as follows: O(x; s) = Omks (s)ks ◦ O mks−1 (s) ks−1 ◦ · · · ◦ Omk1 (s)k1 (x), ki ∼ Cat(K,U(K)) ∀i = {1, . . . , s}, where, Cat(·) and U(·) denote categorical and discrete uniform distributions, respectively. The sequential augmentation operation O(x; s) samples s operations from the categorical distribution when the probability of seeing the operations follows uniform distribution. 
As depicted on the left side of Figure 2, suppose that the randomly sampled augmentations k_1, k_2, and k_3 are brightness, X-shift, and Y-shift, respectively. Then, O(x; 3) outputs an image in which the brightness is raised by m_bright(3) and which is shifted by m_x-shift(3) on the x-axis and by m_y-shift(3) on the y-axis.

Algorithm 1: CUrriculum of Data Augmentation
Input: LTR algorithm A_LTR(f, D), training dataset D = {(x_i, y_i)}_{i=1}^N, train epochs E, aug. probability p_aug, threshold γ, number-of-samples coefficient T.
Output: trained model f_θ
Initialize: L_c^0 = 0 ∀c ∈ {1, ..., C}
for e ≤ E do
    Update L_c^e = V_LoL(D_c, L_c^{e-1}, f_θ, γ, T) ∀c   // Alg. 2
    Generate D_CUDA = {(x̄_i, y_i) | (x_i, y_i) ∈ D}, where x̄_i = O(x_i, L_{y_i}^e) with prob. p_aug and x̄_i = x_i otherwise.
    Run the LTR algorithm using D_CUDA, i.e., A_LTR(f_θ, D_CUDA).
end

Algorithm 2: V_LoL: Update LoL score
Input: D_c, L, f_θ, γ, T
Output: updated L
Initialize: check = 1
for l ≤ L do
    /* V_correct(D_c, l, f_θ, T) */
    Sample D'_c ⊂ D_c s.t. |D'_c| = T(l + 1)
    Compute v = Σ_{x ∈ D'_c} 1{f_θ(O(x; l)) = c}
    if v ≤ γT(l + 1) then check ← 0; break
end
if check = 1 then L ← L + 1 else L ← L − 1

Level-of-Learning (LoL). To control the strength of augmentation properly, we check whether the model can correctly predict augmented versions without losing the original information. To enable this, we define the LoL for each class c at epoch e, i.e., L_c^e, which is adaptively updated as the training continues as follows:

L_c^e = V_LoL(D_c, L_c^{e-1}, f_θ, γ, T),

where

V_LoL(D_c, L_c^{e-1}, f_θ, γ, T) = { L_c^{e-1} + 1  if V_Correct(D_c, l, f_θ, T) ≥ γT(l + 1) ∀l ∈ {0, ..., L_c^{e-1}};  L_c^{e-1} − 1  otherwise }.

Here, γ ∈ [0, 1] is a threshold hyperparameter and T is a coefficient for the number of samples used to update the LoL score. V_Correct is a function that outputs the number of correctly predicted examples by the model f_θ among T(l + 1) randomly augmented samples with strength l. V_Correct is defined as:

V_Correct(D_c, l, f_θ, T) = Σ_{x ∈ D'_c} 1{f_θ(O(x; l)) = c},  where D'_c ⊂ D_c.

Note that D'_c is a randomly sampled subset of D_c with replacement and its size is T(l + 1). The key philosophy of this criterion is two-fold. (1) If samples in class c are trained sufficiently with an augmentation strength of L_c^e, the model is ready to learn a more difficult version with augmentation strength L_c^{e+1} ← L_c^e + 1. In contrast, if the model predicts incorrectly, it should re-learn the easier samples with an augmentation strength of L_c^{e+1} ← L_c^e − 1. (2) As the strength parameter increases, the number of candidates for the sequential augmentation operation O(x; L) increases exponentially. For example, the number of candidate sequences increases by K^L(K − 1) when L increases to L + 1. To control the LoL in such a large sequential augmentation operation space, we check more random samples as the strength parameter gets bigger. In our experiments, linearly increasing the number of evaluated samples with the strength was sufficient, with only a small additional computation time. V_LoL is described in Figure 2 and Algorithm 2.

Curriculum of DA. By combining the two components, namely DA with a strength parameter and LoL, our CUDA provides class-wise adaptive augmentation to enhance the performance of the other classes without losing its own information. As shown in Figure 2 and Algorithm 1, we measure the LoL score L_c for all classes in the training dataset to determine the augmentation strength for every epoch. Based on L_c, we generate the augmented version O(x; L_c) for x ∈ D_c and train the model with the augmented samples.
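The LoL update of Algorithm 2 can be sketched in a few lines of PyTorch-style Python. Here model, samples (image tensors of one class), and augment (an O(x; s) routine operating on tensors) are assumed names, and clamping the score at zero is an assumption made for illustration; this is a sketch rather than the authors' implementation.

```python
import random
import torch

def update_lol(model, samples, label, lol, gamma, T, augment):
    """Sketch of V_LoL: return lol + 1 if the class passes the accuracy check
    at every strength l in {0, ..., lol}, and lol - 1 otherwise."""
    model.eval()
    with torch.no_grad():
        for l in range(lol + 1):
            # D'_c: T * (l + 1) instances sampled with replacement
            subset = random.choices(samples, k=T * (l + 1))
            correct = 0
            for x in subset:
                logits = model(augment(x, l).unsqueeze(0))
                correct += int(logits.argmax(dim=1).item() == label)
            if correct < gamma * T * (l + 1):   # V_Correct below threshold
                return max(lol - 1, 0)
    return lol + 1
```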
Additionally, we randomly substitute the original sample for the augmented one (i.e., each sample is augmented only with probability p_aug) so that the trained model does not forget the original information. In our experiments, this operation improved performance robustly over a wide range of p_aug values. The results are provided in Section 4.3.

Advantage of CUDA design. Our proposed approach has three main advantages. (1) CUDA adaptively finds proper augmentation strengths for each class without the need for a validation set. (2) Following the spirit of existing curriculum learning methods (Hacohen & Weinshall, 2019; Zhou et al., 2020b; Wu et al., 2021), CUDA presents easier examples earlier during training to improve generalization. This encourages the model to learn difficult samples (i.e., those with high augmentation strength) better. (3) Moreover, owing to the universality of data augmentation, CUDA is easily compatible with other LTR algorithms, such as those of Cao et al. (2019); Ren et al. (2020); Wang et al. (2021).

4 EXPERIMENTS

In this section, we present an empirical evaluation whose results demonstrate the superior performance of our proposed algorithm for class imbalance. We first describe the long-tailed classification benchmarks and implementations in detail (Section 4.1). Then, we describe the experimental results on several synthetic (CIFAR-100-LT, ImageNet-LT) and real-world (iNaturalist 2018) long-tailed benchmark datasets in Section 4.2. Moreover, we conduct additional experiments to obtain a better understanding of CUDA; this analysis is provided in Section 4.3.

4.1 EXPERIMENTAL SETUP

Datasets. We evaluate CUDA on the most commonly used long-tailed image classification tasks: CIFAR-100-LT (Cao et al., 2019), ImageNet-LT (Liu et al., 2019), and iNaturalist 2018 (Van Horn et al., 2018). CIFAR-100-LT and ImageNet-LT are constructed with imbalanced classes by synthetically subsampling the training samples. CIFAR-100-LT is examined with various imbalance ratios {100, 50, 10}, where the imbalance ratio is defined as N_max/N_min. iNaturalist 2018 is a large-scale real-world dataset that includes natural long-tailed imbalance. We utilize the officially provided datasets.

Baselines. We compare CUDA with previous long-tailed learning algorithms, including cross-entropy loss (CE), two-stage approaches: CE-DRW (Cao et al., 2019) and cRT (Kang et al., 2020), balanced loss approaches: LDAM-DRW (Cao et al., 2019) and Balanced Softmax (BS; Ren et al. 2020), the ensemble method: RIDE with three experts (Wang et al., 2021), resampling algorithms: Remix (Chou et al., 2020) and CMO (Park et al., 2022), and the contrastive learning-based approach: BCL (Zhu et al., 2022). We integrate CUDA with the CE, CE-DRW, LDAM-DRW, BS, RIDE, and BCL algorithms. For longer training epochs, we compare CUDA with PaCo (Cui et al., 2021), BCL, and NCL (Li et al., 2022a), by combining CUDA with BCL and NCL. For a fair comparison of the computational cost, we train the network with the official one-stage implementation of RIDE (i.e., without distillation and routing).

Implementation. For the CIFAR-100-LT dataset, almost all implementations follow the general setting from Cao et al. (2019), whereas cRT (Kang et al., 2020), BCL, NCL, and RIDE follow the settings used in their original implementations. Following Cao et al. (2019), we use ResNet-32 (He et al., 2016) as the backbone network for CIFAR-100-LT. The network is trained with SGD with a momentum of 0.9 and a weight decay of 2 × 10−4.
The initial learning rate is 0.1, and a linear learning rate warm-up is used in the first 5 epochs to reach the initial learning rate. During training over 200 epochs, the learning rate is decayed at the 160th and 180th epochs by a factor of 0.01. For ImageNet-LT and iNaturalist 2018, ResNet-50 is used as the backbone network and is trained for 100 epochs. The learning rate is decayed at the 60th and 80th epochs by a factor of 0.1. As with CIFAR, for cRT, RIDE, and BCL, we follow the original experimental settings of the officially released code. For the hyperparameter values of CUDA, we use p_aug = 0.5 and T = 10 for all experiments. For γ, we set the value to 0.6 for CIFAR-100-LT and 0.4 for ImageNet-LT and iNaturalist 2018. The detailed implementation of the baselines is given in Appendix B.

4.2 EXPERIMENTAL RESULTS

In this section, we report the performance of the compared methods on CIFAR-100-LT, ImageNet-LT, and iNaturalist 2018. We include four categories of accuracy: all, many, med(ium), and few. These represent the average accuracy over all samples, over classes containing more than 100 samples, over classes with 20 to 100 samples, and over classes with under 20 samples, respectively.

CIFAR-100-LT. In Table 1, we report the performance when CUDA is applied to the various algorithms: CE, CE-DRW (Cao et al., 2019), LDAM-DRW (Cao et al., 2019), BS (Ren et al., 2020), RIDE (Wang et al., 2021) with 3 experts, RIDE+CMO (Park et al., 2022), and BCL (Zhu et al., 2022). Compared to the cases without CUDA, balanced validation performance increases when we apply the proposed approach. Recently, some works (Cui et al., 2021; Alshammari et al., 2022; Zhu et al., 2022; Li et al., 2022a) have shown impressive performance with diverse augmentation strategies and longer training epochs. For a fair comparison with these methods, we examine CUDA using the same experimental setup as PaCo (Cui et al. 2021; 400 epochs with a batch size of 64). Table 3 shows that images augmented with CUDA enhance LTR performance compared to the other baselines. In particular, CUDA with NCL obtains the best performance over 400 epochs. As noted by Li et al. (2022a), the NCL algorithm utilizes six times as much memory as the vanilla architecture with three experts. Hereinafter, in the large-scale benchmarks, we focus on cases with similar network sizes.

ImageNet-LT and iNaturalist 2018. To evaluate the performance of CUDA on larger datasets, we conduct experiments on ImageNet-LT (Liu et al., 2019) and iNaturalist 2018 (Van Horn et al., 2018). Table 2 summarizes the performance of various LTR methods and the performance gain when they are integrated with CUDA. Our proposed method consistently improves performance regardless of the LTR method and target dataset by simply adding class-wise data augmentation, without complicated methodological modification. Additionally, to evaluate the performance gain of CUDA on other architectures, we experiment with CUDA on ImageNet-LT with ResNet-10 (Liu et al., 2019) and ResNeXt-50 (Xie et al., 2017), as reported in Appendix C.

4.3 ANALYSIS

We design our analyses to answer the following questions. (1) How does CUDA perform? (2) Does CUDA perform better than other augmentation methods? (3) How does the LoL score change over training epochs when combined with various LTR methods? (4) Which part of CUDA is important for the improved performance? These analyses provide additional explanations to understand CUDA. All experiments are conducted on CIFAR-100-LT with an imbalance ratio of 100.

How does CUDA mitigate the class imbalance problem?
To deeply understand CUDA, we observe two types of metrics: (1) the variance of the per-class L1 norm of the linear classifier weights and (2) the feature alignment gain for each class (i.e., cosine similarity with and without CUDA) on the validation dataset. The classifier weight norm is usually used to measure how balanced the model's consideration of the input classes is (Kang et al., 2020; Alshammari et al., 2022). Feature alignment, specifically feature cosine similarity among samples belonging to the same class, measures the extent to which the extracted features are aligned (Oh et al., 2021). As shown in Figure 3, CUDA provides two forces for alleviating imbalance. First, for all cases, CUDA reduces the variance of the weight norm (i.e., balances the weight norm), and thus the trained model considers the minority classes in a balanced manner. Note that because LDAM-DRW and RIDE utilize a cosine classifier (i.e., an L2-normalized linear weight), their standard deviation scale differs considerably from that of the other methods. Because LDAM-DRW, BS, and RIDE include balancing logic in their loss functions, they exhibit lower variance reduction compared to CE and CE-DRW. Second, as shown in the bottom row of Figure 3, CUDA obtains feature alignment gains for almost all classes. This shows that CUDA helps the network learn to extract meaningful features.

Compared with other augmentations. To verify the impact of CUDA, we examine other augmentation methods as follows. We compare five augmentation methods: AutoAugment (AA, Cubuk et al. 2019), Fast AutoAugment (FAA, Lim et al. 2019), DADA (Li et al., 2020b), RandAugment (RA, Cubuk et al. 2020), and the proposed method CUDA. Because AA, FAA, and DADA provide policies searched using CIFAR, SVHN (for AA), and ImageNet, we leverage their searched policies. Furthermore, RA suggests using the parameters (n, m) = (1, 2) for CIFAR, and we follow this guideline. As shown in Table 4, even though the automated augmentation methods use additional computation resources for searching, CUDA outperforms the other pre-searched augmentations. This shows that CUDA is computationally efficient.

Dynamics of LoL score. We evaluate how LoL scores vary with the algorithms CE, CE-DRW, LDAM-DRW, BS, and RIDE. Note that a lower class index (i.e., 0) denotes the most common class (i.e., 500 samples), while an index of 100 represents the rarest class (i.e., five samples). As described in Figure 4, as training progresses, the LoL scores of all algorithms increase. After the learning rate decay (i.e., epoch 160), all algorithms are able to learn to classify minority classes more easily than before. In particular, except for BS, the majority classes of most algorithms show a steep increment. The reason that BS exhibits a similar increasing speed for majority and minority classes is that it includes a module to balance the impact of majority and minority samples. Furthermore, we find that CE-DRW and BS reach similar final average accuracy when CUDA is applied but show different LoL score dynamics. From the observation that CE-DRW has a higher performance gain for many and a lower gain for few than BS, we can conclude that the LoL score of one category of classes is highly correlated with the performance of the opposite category.
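The two diagnostics used at the beginning of this analysis, the variance of the per-class classifier weight norm and the within-class feature cosine similarity, can be computed as in the following sketch; classifier_weight and features_by_class are assumed names for the (num_classes × feat_dim) weight matrix and a mapping from class index to an (n_c × feat_dim) feature tensor.

```python
import torch
import torch.nn.functional as F

def weight_norm_variance(classifier_weight: torch.Tensor) -> float:
    """Variance of the per-class L1 norm of the linear classifier weights."""
    norms = classifier_weight.abs().sum(dim=1)
    return norms.var().item()

def class_feature_alignment(features_by_class: dict) -> dict:
    """Mean pairwise cosine similarity among validation features of each class."""
    sims = {}
    for c, feats in features_by_class.items():
        n = feats.size(0)
        if n < 2:
            continue                      # alignment undefined for a single sample
        f = F.normalize(feats, dim=1)
        cos = f @ f.t()
        sims[c] = ((cos.sum() - n) / (n * (n - 1))).item()  # exclude self-pairs
    return sims
```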
Parameter sensitivity. For further analysis, we conduct a sensitivity analysis of the hyperparameters in CUDA. More precisely, we study three parameters: the augmentation probability p_aug (Figure 5a), the number of test samples T (Figure 5b), and the LoL update threshold γ (Figure 5c). We examine the sensitivity of each hyperparameter for CUDA with RIDE, with the remaining hyperparameters fixed to the default values in Section 4.1. All results show that the performance gains of CUDA decrease if the parameters are adjusted so that the augmentation becomes too strong or too weak. For example, the augmentation strength of all classes increases steeply when γ becomes small. When γ becomes large, the strength cannot increase, and thus the performance of the model cannot improve. Moreover, as shown in Figure 5b, the performance of CUDA increases as T increases. However, a larger T incurs computational overhead; we therefore set T to 10, which provides a cost-effective performance gain.

Impact of curriculum. In addition to studying the impact of CUDA, we examine its performance component-wise. In particular, we test the case where the class-wise augmentation strength is searched by a hyperparameter optimization algorithm. We check five cases overall: the baseline algorithm, hyperparameter optimization (HO), re-searched DADA for CIFAR-100-LT, CUDA without curriculum (i.e., re-training with the final augmentation strengths found by CUDA), and CUDA. We provide a detailed description of each method in Appendix E. As described in Figure 5d, CUDA finds better augmentation strengths compared to the hyperparameter search case. This means that CUDA not only requires less search time but also obtains better augmentation strengths. Moreover, comparing the performance with and without the curriculum shows that the curriculum itself provides an additional benefit for generalization. Additionally, as shown in Figure 4, a lower augmentation strength at the beginning of training is more effective than a static higher augmentation strength. These results are consistent with the results of previous studies on curriculum learning methods (Zhou et al., 2020b).

5 CONCLUSION

In this study, we proposed CUDA to address the class imbalance problem. The proposed approach is also compatible with existing methods. To design a proper augmentation for LTR, we first studied the impact of augmentation strength on LTR. We found that the strength of augmentation for a specific type of class (e.g., a major class) can affect the performance of the other type (e.g., a minor class). Based on this finding, we designed CUDA to adaptively find an appropriate augmentation strength without any further search phase by measuring the LoL score at each epoch and determining the augmentation accordingly. To verify the superior performance of the proposed approach, we examined its performance with various methods and obtained the best performance among the compared methods on both synthetically generated and real-world benchmarks. Furthermore, our analyses validated that CUDA enhances balance and feature extraction ability, which can consistently improve performance for both majority and minority classes.

ACKNOWLEDGEMENT

This work was supported by Institute of Information & communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No. 2019-0-00075, Artificial Intelligence Graduate School Program (KAIST), 10%) and the Institute of Information & communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No. 2022-0-00871, Development of AI Autonomy and Knowledge Enhancement for AI Agent Collaboration, 90%).
Appendix

CUDA: Curriculum of Data Augmentation for Long-tailed Recognition

Owing to the page limitation of the main manuscript, we provide detailed information in this supplementary material as follows. (1) In Appendix A, we summarize the experimental setup of Figure 1 and further explain why augmentation on one side causes performance degradation on the opposite side. (2) In Appendix B, we describe our experimental setting in detail, including dataset configuration, data preprocessing, and training implementation. (3) In Appendix C, we show ImageNet-LT performance on networks of different sizes and architectures, a training time analysis, and accuracy on a balanced dataset. (4) In Appendix D, we present in detail the augmentation operations that CUDA utilizes. (5) In Appendix E, we describe the experimental setting of Figure 5d.

A DETAIL FOR FIGURE 1

A.1 EXPERIMENTAL SETTINGS

Major and minor group decomposition. To check the impact of augmentation on majority and minority classes, we split the training dataset into two clusters. The majority cluster consists of the top 50 classes sorted by the number of samples per class. The bottom 50 classes form the minority cluster. For simplicity, we use class indices 0 to 49 as the majority and 50 to 99 as the minority. For the balanced case, we use classes 0 to 49 as cluster 1 and the others as cluster 2.

Controlling augmentation strength. We set the augmentation strength as the number of augmentations and their magnitude, following the augmentation rule of CUDA. For example, samples in the majority classes with magnitude parameter 4 are augmented with 4 randomly sampled augmentations, each with its own pre-defined augmentation magnitude.

Training setting. For the heatmaps in Figure 1, we follow the training recipe of CIFAR-100-LT for the CE case, e.g., ResNet-32 and a learning rate of 0.1. Further details, hyperparameters, and datasets are described in Section 4 and Appendix B.

[Figure 6: Analysis on Balanced CIFAR-100 — class-wise train cosine similarity, test cosine similarity, and classifier weight norm under the without/partial/all augmentation strategies. Figure 7: Analysis on CIFAR-100-LT (IR 100) — same metrics.]

A.2 ANALYSIS

Analysis for Figure 1. To figure out the reason for the phenomena in Figure 1, we conduct further analysis as shown in Figure 6 and Figure 7. Our experimental setups are as follows:

• Train the networks with three augmentation strategies (without, partial, and all), then measure the class-wise feature alignment and linear classifier weight norm for all networks. (Experiment 1)
• From the network trained without augmentation in Experiment 1, we freeze the feature extractor and train the linear classifier layer while augmenting only partial classes. Then, we measure the class-wise L1 norm of the linear classifier. (Experiment 2)

From Figure 6 and Figure 7, we make the following observations from Experiment 1:

1. When we conduct augmentation only for partial classes (classes 0-49), the feature alignment of the augmented classes on the training dataset is degraded compared to the non-augmented classes. This is because the augmented classes have more diversified training data than the non-augmented classes, which leads to more diversification in feature space. We observe balanced alignment between classes in the cases without augmentation and with all augmentation, since all classes have similar diversity. (See the first rows of Figures 6 and 7.)

2. However, all three augmentation strategies show balanced class-wise feature alignment on the same test dataset. This tendency can be observed in both balanced and imbalanced datasets, which is consistent with Kang et al. (2020). Furthermore, the feature alignment values increase when we conduct partial or all augmentation, compared to no augmentation. This result shows that augmentation enhances the feature extraction ability, which is consistent with conventional studies. (See the second rows of Figures 6 and 7.)

3. When we conduct augmentation only for partial classes on a balanced dataset, the class-wise weight norm of the linear classifier is larger for the non-augmented classes. This incurs a performance improvement for the non-augmented classes and a reduction for the augmented classes, since the linear classifier tends to classify the non-augmented classes with larger weight values. However, we observe that the class-wise weight norms are balanced in the “without augmentation” and “all augmentation” cases. (See the third row of Figure 6.)

4. We observe that the class-wise weight norm of the linear classifier is larger for the majority classes when all classes have the same augmentation strength. These results are consistent with previous works (Kang et al., 2020; Alshammari et al., 2022). However, when we conduct augmentation only for the majorities, the class-wise weight norm becomes more balanced. This phenomenon is similar to the balanced case in that partial augmentation reduces the norm of the linear classifier for the augmented classes. (See the third row of Figure 7.)

Our observations from Experiment 1 are highly consistent across the balanced and imbalanced datasets. The results in Figure 1, Figure 6, and Figure 7 strongly motivate the design of CUDA. Moreover, our results for Experiment 2, shown in Figure 8 and Figure 9, can explain these observations. We observe that when feature alignment is degraded by augmentation, the corresponding classifier norm is relatively small, as shown in Figure 8. This is because, for a class with lower feature alignment, the variation of the gradient for the linear classifier is larger than for a class with high feature alignment. As shown in Figure 9, from Experiment 2 we observe that ‖∆w‖, the norm of the class-wise difference between the current and initial linear classifier parameters ∆w := w − w0, has smaller values for the augmented classes than for the non-augmented classes.
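As a small companion to the last observation, the per-class classifier drift ‖∆w‖ can be computed as follows; w and w0 are assumed to be the current and initial (num_classes × feat_dim) classifier weight matrices, and the augmented-class index set is an illustrative input.

```python
import torch

def classifier_drift_norm(w: torch.Tensor, w0: torch.Tensor) -> torch.Tensor:
    """Per-class norm of the classifier drift Δw := w - w0."""
    return (w - w0).norm(dim=1)

def mean_drift(w, w0, augmented_classes):
    """Compare the average drift of augmented vs. non-augmented classes."""
    drift = classifier_drift_norm(w, w0)
    mask = torch.zeros(drift.numel(), dtype=torch.bool)
    mask[list(augmented_classes)] = True
    return drift[mask].mean().item(), drift[~mask].mean().item()
```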
From our experimental analysis in Figures 6, 7, and 9, we can conclude that augmentation breaks the consistency of feature alignment and thereby decreases the weight norm of the linear classifier.

B IMPLEMENTATION DETAIL IN SECTION 4

B.1 DATASET DESCRIPTION

CIFAR-100-LT. CIFAR-100-LT is a subset of CIFAR-100. Following Wang et al. (2021); Park et al. (2022); Zhu et al. (2022), we use the same long-tailed version for a fair comparison. The number of samples of the k-th class is determined as follows: (1) compute the imbalance factor N_max/N_min, which reflects the degree of imbalance in the data; (2) |D_k| between |D_1| = N_max and |D_100| = N_min follows an exponential decay (i.e., |D_k| = |D_1| × (N_max/N_min)^{−k/100}; a short construction sketch is given below). The imbalance factors used in the experiments are set to 100, 50, and 10.

ImageNet-LT. ImageNet-LT (Liu et al., 2019) is a modified version of the large-scale real-world ImageNet dataset (Russakovsky et al., 2015). Subsampling is conducted by following the Pareto distribution with power value α = 0.6. It consists of 115.8K images from 1,000 classes in total. The most common and rarest classes have 1,280 and 5 images, respectively.

iNaturalist 2018. iNaturalist (Van Horn et al., 2018) is a large-scale real-world dataset which consists of 437.5K images from 8,142 classes. It is long-tailed by nature, with an extreme class imbalance. In addition to long-tailed recognition, this dataset is also used for evaluating fine-grained classification tasks.

B.2 DATA PREPROCESSING

For data preprocessing, we follow the default settings of Cao et al. (2019). For CIFAR-100-LT, each side of the image is padded with 4 pixels, and a 32 × 32 crop is randomly selected from the padded image or its horizontal flip. For ImageNet-LT and iNaturalist 2018, after resizing each image by setting the shorter side to 256 pixels, a 224 × 224 crop is randomly sampled from the image or its horizontal flip. For BCL and NCL, which use AutoAugment (Cubuk et al., 2019) or RandAugment (Cubuk et al., 2020) as default data augmentation, we apply them after random cropping, following their original papers (Zhu et al., 2022; Li et al., 2022a). Then, we conduct CUDA after all default augmentation operations and finally normalize the images with the following mean and standard deviation values, respectively: CIFAR-100-LT ((0.4914, 0.4822, 0.4465), (0.2023, 0.1994, 0.2010)), ImageNet-LT ((0.485, 0.456, 0.406), (0.229, 0.224, 0.225)), and iNaturalist 2018 ((0.466, 0.471, 0.380), (0.195, 0.194, 0.192)).

B.3 DETAILED IMPLEMENTATION

Because some official codes do not release their entire implementations, we re-implement the missing parts. For the re-implementation, we reproduce the code based on the available partial code and the authors’ responses.

RIDE. We follow the officially offered code2. Among the various experimental configurations of the official code (e.g., one-stage RIDE, RIDE-EA, Distill-RIDE), for a fair comparison (to leverage similar computation resources), we utilize one-stage training (i.e., one-stage RIDE) for all cases. We confirm from the authors' response that CMO (Park et al., 2022) also utilizes this setup for RIDE + CMO.

CMO. We re-implement all CMO results from the official code3 in our work. However, the official code of CMO does not contain code for RIDE + CMO. Therefore, we re-implement it by injecting the CMO part for BS in the official code (the weighted sampler and mixup part) into the RIDE code. Furthermore, for iNaturalist 2018, we train the model for 100 epochs for a fair comparison with other methods (whereas the original RIDE + CMO is trained for 200 epochs on iNaturalist 2018).
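As referenced in B.1, the per-class sample counts of CIFAR-100-LT can be generated with a short script following the exponential-decay construction of Cao et al. (2019); the 0-based indexing and the division by C − 1 are assumptions of the common implementation and may differ slightly from the formula's normalization.

```python
def long_tailed_counts(n_max=500, num_classes=100, imbalance_ratio=100):
    """Exponentially decaying per-class counts from n_max down to n_max / IR."""
    return [
        int(n_max * (1.0 / imbalance_ratio) ** (k / (num_classes - 1)))
        for k in range(num_classes)
    ]

counts = long_tailed_counts()
print(counts[0], counts[-1])   # 500 5 for an imbalance ratio of 100
```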
BCL. The officially released code4 of BCL only covers ImageNet-LT and iNaturalist 2018. Whereas the official code applies a cosine classifier for ImageNet-LT and iNaturalist 2018, we apply an ordinary linear classifier for CIFAR-100-LT, following the authors’ response. All hyperparameters are the same as in the experimental settings of the original work (Zhu et al., 2022).

2 https://github.com/frank-xwang/RIDE-LongTailRecognition
3 https://github.com/naver-ai/cmo
4 https://github.com/FlamieZhu/Balanced-Contrastive-Learning

B.4 GUIDELINE FOR HYPER-PARAMETER TUNING

Although we did not tune the hyper-parameters extensively, we provide guidelines for selecting them.

The number of samples for updating LoL (T). We can set this value according to the available computing resources (i.e., the largest T under the computing resource constraint). This is because the performance improves as T increases, since testing more samples yields a more reliable LoL score.

The acceptance threshold (γ). Our strategy for tuning γ is to select the largest value for which at least one class's LoL score increases within the first 20 epochs. This is because, with too large a γ on large-scale datasets, the network fails to pass the check even for the easier-to-learn majority classes. The detailed tuning strategy for γ is as follows:
• We initially set γ to 0.6.
• We decrease the threshold γ by 0.1 whenever it fails to raise any LoL score during the first 20 training epochs.
We conduct this search on CE with CIFAR-100-LT with IR 100 and use the same γ value for the other algorithms and the remaining IR settings. Likewise, we conduct this search rule on ImageNet-LT with CE and use the same value for the other large-scale dataset, i.e., iNaturalist 2018, and the remaining algorithms.

The augmentation probability (p_aug). While we did not tune this hyper-parameter, we offer a guideline for tuning it based on Figure 5a. As shown in Figure 5a, the curve of performance versus p_aug is concave. Thanks to this concavity, we think it is easy to find the optimal value for this hyper-parameter. Note that the reason for the concavity is that the choice of p_aug trades off preserving the information of the original image against exploring diversified images.

Further sensitivity analysis on ImageNet-LT. In Section 4, we apply different values of γ for CIFAR-100-LT (0.6) and the large-scale datasets (0.4; ImageNet-LT and iNaturalist 2018). In addition to Figure 5, we further conduct a sensitivity analysis for γ on ImageNet-LT to verify that CUDA works robustly with different values of γ on large-scale datasets. As shown in Table 5, our proposed method CUDA is robust to the selection of γ not only on small datasets such as CIFAR-100-LT but also on large-scale datasets.

C FURTHER ANALYSES

Training Time Analysis. CUDA requires additional computation for computing the LoL score. We measure the additional training time incurred by adding CUDA to various algorithms. As shown in Figure 11, utilizing CUDA incurs additional training time. However, the additional operation for computing the LoL score does not require much overhead. For example, BS with CUDA spends ×1.29 training time to obtain an adequate augmentation strength.

Network Architecture Analysis.
We also present our ResNet-10 (Liu et al., 2019) and ResNeXt-50 (Xie et al., 2017) experiments on the ImageNet-LT dataset in Figure 10. These results show that CUDA consistently improves performance regardless of network size and the corresponding LTR method.

[Figure 10: ImageNet-LT accuracy (%) of CE, CE-DRW, LDAM-DRW, and BS with and without CUDA on ResNet-10 and ResNeXt-50. Figure 11: Training time (min.) of CE, LDAM, BS, RIDE, and BCL with and without CUDA.]

What if CUDA is run on a balanced dataset? We examine the case in which CUDA is applied to the balanced setting, i.e., an imbalance ratio of 1. As described in Table 6, CUDA obtains a 1.9% accuracy gain, which is lower than that of the other auto-augmentation methods. However, the other auto-augmentation methods spend more computation time searching for a good augmentation than CUDA. Furthermore, as described in Figure 4, CUDA has higher performance than the others when a class-imbalanced dataset is given.

D AUGMENTATION PRESET

D.1 DATA AUGMENTATION OPERATIONS USED IN CUDA

There have been numerous data augmentation operations in vision tasks. We use a total of 22 augmentations for CUDA, each with its own parameter set. Details of the operation set and parameters are described in Table 7.

Table 7: Augmentation operations used in CUDA (Operation | Parameter | Description).
Flip | On/Off | Flip top and bottom
Mirror | On/Off | Flip left and right
Edge Enhancement | On/Off | Increase the contrast of the pixels around the targeted edges
Detail | On/Off | Utilize convolutional kernel [[0, −1, 0], [−1, 10, −1], [0, −1, 0]]
Smooth | On/Off | Utilize convolutional kernel [[1, 1, 1], [1, 5, 1], [1, 1, 1]]
AutoContrast | On/Off | Remove a specific percent of the lightest and darkest pixels
Equalize | On/Off | Apply a non-linear mapping to make a uniform distribution
Invert | On/Off | Negate the image
Gaussian Blur | [0, 2] | Blur the image using a Gaussian function
Resize Crop | [1, 1.3] | Resizing and center random cropping
Rotate | [0, 30] | Rotate the image
Posterize | [0, 4] | Reduce the number of bits for each channel
Solarize | [0, 256] | Invert all pixel values above a threshold
SolarizeAdd | [0, 110] | Add a value and run solarize
Color | [0.1, 1.9] | Colorize gray scale values
Contrast | [0.1, 1.9] | Distance between the colors
Brightness | [0.1, 1.9] | Adjust image brightness
Sharpness | [0.1, 1.9] | Adjust image sharpness
Shear X | [0, 0.3] | Shearing X-axis
Shear Y | [0, 0.3] | Shearing Y-axis
Translate X | [0, 100] | Shift X-axis
Translate Y | [0, 100] | Shift Y-axis

For the augmentation magnitude parameter m_k(s), we divide each parameter range into thirty linearly spaced values. For example, in the ShearX case, the max and min values are 0.3 and 0, respectively. Therefore, m_ShearX(s) = (0.3 − 0)/30 × s, and thus m_ShearX(1) = 0.01 = (0.3 − 0)/30 × 1.

D.2 FURTHER ANALYSIS ON AUGMENTATION PRESET

To get further intuition on the effect of the number of predefined augmentation operations, we conduct several exploratory experiments.

Validity of our main finding (Figure 1) under a few predefined augmentations. The observation in Figure 1 is caused by the minorities becoming relatively easy to learn since the majorities have become difficult. Therefore, if the samples of the majorities become difficult enough to learn, the same phenomenon as in Figure 1 occurs regardless of the number of augmentation presets. To verify that our main finding is valid regardless of the number of predefined augmentations, we conduct the experiment with ten augmentation operations (Mirror, ShearX, Invert, Smooth, ResizeCrop, Color, Brightness, Sharpness, Rotate, AutoContrast). Table 8 describes the performance of (0, 0), (0, 4), (4, 0), and (4, 4), where each configuration denotes the augmentation strengths of (majority: top 50 classes, minority: bottom 50 classes). Through these results, we verify that the finding in Figure 1 is valid even with a small number of predefined augmentation operations.
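Returning to the magnitude mapping m_k(s) of Appendix D.1, the following is a minimal sketch of the linear thirty-step division for a few operations from Table 7; the handling of non-zero minimum values is an assumption made for illustration.

```python
# A few (min, max) ranges taken from Table 7; the rest follow the same pattern.
MAGNITUDE_RANGES = {
    "ShearX": (0.0, 0.3),
    "Rotate": (0.0, 30.0),
    "Brightness": (0.1, 1.9),
}

def magnitude(op_name: str, s: int, num_steps: int = 30) -> float:
    """m_k(s): divide the operation's range into `num_steps` linear steps."""
    lo, hi = MAGNITUDE_RANGES[op_name]
    return lo + (hi - lo) / num_steps * s

print(magnitude("ShearX", 1))   # 0.01, matching m_ShearX(1) above
```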
Effect of the number of predefined augmentations. We further analyze the impact of the number of predefined augmentation operations (K in Figure 2); we additionally experiment by replacing the augmentation preset in Appendix D with the following two augmentation presets: (1) 10 randomly sampled augmentations (Mirror, ShearX, Invert, Smooth, ResizeCrop, Color, Brightness, Sharpness, Rotate, AutoContrast) and (2) the RandAugment (Cubuk et al., 2020) preset, which consists of (AutoContrast, Equalize, Invert, Rotate, Posterize, Solarize, SolarizeAdd, Color, Contrast, Brightness, Sharpness, ShearX, ShearY, CutoutAbs, TranslateXabs, TranslateYabs). Table 9 demonstrates that the accuracy increases slightly as the size of the augmentation preset increases. However, the gap between the RandAugment preset (14 operations) and our original preset (22 operations) is small compared to the gap between the vanilla case (without CUDA) and the RandAugment case. These results support our belief that the impact of the number of predefined augmentations is small.

Effect of randomly ordered data augmentation. Our proposed CUDA applies the selected augmentations in random sequential order based on the strength of DA. To study the impact of this random ordering, we compare CUDA with a variant of CUDA that uses a fixed order of augmentations. For example, when the operation indices (6, 3, 5) among the 22 augmentations are sampled, they are applied in the order (3, 5, 6). Table 10 shows small performance differences between the two methods. Thus, we believe that the effect of the augmentation order on difficulty is negligible. This is because the effectiveness of CUDA is expected to be sufficiently high for any given order of augmentations, since the goal is to make samples harder to learn, regardless of whether the order is fixed or random.

Comparison with random augmentation. To verify that the success of CUDA does not simply come from a richer dataset made by DA, we compare our proposed method CUDA to randomly sampled augmentation at every iteration. Our comparison methods are Random 5 and Random 10, which denote conducting five and ten randomly sampled augmentations at every iteration, respectively. As shown in Table 11, while Random 10 generates the most diversified images, the network trained with it shows the worst performance, even lower than vanilla. Our CUDA achieves the best performance among all methods.
E EXPERIMENTAL SETTING OF FIGURE 5D

To further analyze the impact of the curriculum, we compare CUDA with previous hyper-parameter search algorithms and auto-augmentation methods, especially DADA (Li et al., 2020b). We describe each setting in detail as follows.

Baseline. This is the case of training with standard data augmentation, which consists of random cropping and probabilistic horizontal flip.

Hyper-parameter search. We utilize the strength-score-based augmentation module in CUDA to verify the hyper-parameter search. In other words, samples in each class utilize K augmentation operations. Therefore, we search the class-wise augmentation over the search space K^N, where N is the number of classes. We leverage the hyper-parameter search open-source library Ray (Liaw et al., 2018) to search the K^N space efficiently. Among the various search modules, we utilize the HyperOptSearch module, which is an implementation of the Tree-structured Parzen Estimator (Bergstra et al., 2013). Moreover, for fast search, we use the Asynchronous Successive Halving Algorithm (ASHA) (Li et al., 2020a). We run 1,000 trials for each algorithm, which takes almost 20 GPU hours (i.e., ×80 overhead compared to CUDA).

Re-searched DADA operations on imbalanced CIFAR. Because the officially offered policies on CIFAR by Li et al. (2020b) are searched for the balanced CIFAR dataset, we re-search the augmentation policy for the imbalanced dataset. We utilize the official code of DADA and replace the dataloader to re-search the operations. It takes 48 minutes to search the augmentation policy (×8.6 the overhead of CUDA). Despite this additional overhead, DADA yields worse performance than CUDA (even than CUDA without curriculum). This is because (1) DADA does not consider class-wise augmentation and (2) it does not consider the impact of class imbalance.

CUDA without curriculum. To verify the impact of the curriculum itself, we run the following steps. (1) We conduct experiments with CUDA and obtain the strength of data augmentation for each class at the final epoch. (2) We re-train the network from scratch using the strength parameters obtained in (1).

F FURTHER ANALYSES

To get a better understanding, we conduct several analyses of our proposed method, CUDA.

F.1 FURTHER ANALYSIS ON LOL SCORE

In this section, we conduct experimental ablation studies to understand the performance gain of our proposed method, CUDA.

Suitability of LoL score as a metric for class-wise difficulty. The strength of the LoL score is that it measures class-wise difficulty directly in terms of the augmentation strength for each class, which is motivated by our main findings. To verify the suitability of the LoL score as a metric for class-wise difficulty, we compare CUDA with the case where the LoL score is replaced by the score in Sinha et al. (2022). As in our proposed method, we increase the strength parameter when the score in Sinha et al. (2022) is larger than the same threshold γ = 0.6. Table 12 summarizes the results: our LoL score shows a performance improvement over the case of Sinha et al. (2022). From these results, we can conclude that this improvement comes from the characteristic of the LoL score of being directly related to the augmentation strength.

Effect of random sampling for computing LoL score. To implement the computation of the LoL score efficiently, we randomly select the instances for each class.
The reason for using random sampling to compute V_Correct is that we want to measure how well the model has learned the entire information of each class. To understand the effect of random sampling, we compare our random sampling method to sampling instances with larger (or smaller) losses. Table 13 describes the comparison of performance between the various sampling strategies. As shown in the results, if CUDA measures the degree of learning with only easy samples (the samples with small losses), CUDA increases the strength of augmentation too quickly, which causes performance degradation. Therefore, uniform random sampling is a better way to grasp the degree of learning for each class without bias. Furthermore, computing the loss for all samples in order to sort them at the beginning of each epoch requires ×1.5 the computation of our method.

Numerical values of LoL score dynamics. We provide the numerical values for Figure 4, that is, the average values (over every 20 epochs) of the LoL score for the classes with indices 1-10 and the classes with indices 91-100. From the numerical values, we can easily understand the explanation discussed in Section 4.

F.2 ANALYSIS OF THE CASE WITHOUT CLASS-WISE AUGMENTATION

To examine the validity of the class-wise augmentation of CUDA, we apply CUDA with the same DA strength for all classes. Instead of computing the LoL score class-wise, we compute a single LoL score for the entire dataset by uniformly sampling instances at random from the training dataset regardless of class. Table 15 shows the significant performance degradation of CUDA without class-wise augmentation compared to CUDA. This is because, without class-wise augmentation, we cannot allocate the appropriate strength of augmentation to each class.
1. What is the main contribution of the paper regarding data augmentation in long-tailed recognition?
2. What are the strengths and weaknesses of the proposed approach, particularly in its motivation, findings, and experimental results?
3. Do you have any concerns or questions about the methodology, such as the absence of a baseline or the impact of augmentation order and number?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper

This paper found that augmentations in one class may be negative for itself but positive for other classes. Thus, the authors propose a novel data augmentation method, called CUDA, for long-tailed recognition. This method can generate a proper class-wise augmentation strength for long-tailed recognition. They first compute a level-of-learning score for each class and leverage the score to determine the augmentation. The authors conduct experiments on CIFAR-LT, ImageNet-LT, and iNaturalist 2018. The comparisons with previous SOTA methods demonstrate the effectiveness of CUDA.

Strengths And Weaknesses

Strength:
- "Our key finding is that class-wise augmentation improves performance in the non-augmented classes while that for the augmented classes may not be significantly improved, and in some cases, performances may even decrease." The motivation and the findings in this paper are really interesting.
- The results are good and comprehensive in the experiments.
- The paper is well-presented and easy to follow.

Weakness:
- One of the key ablations is missing. What are the results of uniformly assigning all data augmentations to all classes? This should be a baseline and presented in the tables. That is, the conventional operation.
- Will the order of different augmentations have an effect on the final results?
- What is the effect of preset augmentation number K?

Clarity, Quality, Novelty And Reproducibility

The motivation of this paper is novel and interesting. In this paper, 22 augmentations are considered. I am curious if the phenomenon that "Our key finding is that class-wise augmentation improves performance in the non-augmented classes while that for the augmented classes may not be significantly improved, and in some cases, performances may even decrease." only exists in this setting. If we only consider several simple augmentations, for example, flip and resize, will the conclusion still hold?
ICLR
Title CUDA: Curriculum of Data Augmentation for Long-tailed Recognition

Abstract Class imbalance problems frequently occur in real-world tasks, and conventional deep learning algorithms are well known for performance degradation on imbalanced training datasets. To mitigate this problem, many approaches have aimed to balance among given classes by re-weighting or re-sampling training samples. These re-balancing methods increase the impact of minority classes and reduce the influence of majority classes on the output of models. However, the extracted representations may be of poor quality owing to the limited number of minority samples. To handle this restriction, several methods have been developed that increase the representations of minority samples by leveraging the features of the majority samples. Despite extensive recent studies, no deep analysis has been conducted on determining which classes to augment and how strongly. In this study, we first investigate the correlation between the degree of augmentation and class-wise performance, and find that the proper degree of augmentation must be allocated for each class to mitigate class imbalance problems. Motivated by this finding, we propose a simple and efficient novel curriculum, which is designed to find the appropriate per-class strength of data augmentation, called CUDA: CUrriculum of Data Augmentation for long-tailed recognition. CUDA can simply be integrated into existing long-tailed recognition methods. We present the results of experiments showing that CUDA effectively achieves better generalization performance compared to the state-of-the-art method on various imbalanced datasets such as CIFAR-100-LT, ImageNet-LT, and iNaturalist 2018.

∗ Two authors contribute equally. 1 Code is available at Link.

1 INTRODUCTION

Deep neural networks (DNNs) have significantly improved over the past few decades on a wide range of tasks (He et al., 2017; Redmon & Farhadi, 2017; Qi et al., 2017). This effective performance is made possible by well-organized datasets such as MNIST (LeCun et al., 1998), CIFAR-10/100 (Krizhevsky et al., 2009), and ImageNet (Russakovsky et al., 2015). However, as Van Horn et al. (2018) indicated, gathering such balanced datasets is notoriously difficult in real-world applications. In addition, models perform poorly when trained on an improperly organized dataset, e.g., in cases with class imbalance, because minority samples can be ignored due to their small portion.

The simplest solution to the class imbalance problem is to prevent the model from ignoring minority classes. To improve generalization performance, many studies have aimed to emphasize minority classes or reduce the influence of the majority samples. Reweighting (Cao et al., 2019; Menon et al., 2021) and resampling (Buda et al., 2018; Van Hulse et al., 2007) are two representative methods that have been frequently applied to achieve this goal. (i) Reweighting techniques increase the weight of the training loss for samples in the minority classes. (ii) Resampling techniques reconstruct a class-balanced training dataset by upsampling minority classes or downsampling majority classes. Although these elaborate rebalancing approaches have been adopted in some applications, the limited information on minority classes due to fewer samples remains problematic. To address this issue, some works have attempted to spawn minority samples by leveraging the information of the minority samples themselves.
For example, Chawla et al. (2002); Ando & Huang (2017) proposed methods to generate interpolated minority samples. Recently, Kim et al. (2020); Chu et al. (2020); Park et al. (2022) suggested enriching the information of minority classes by transferring information gathered from majority classes to the minority ones. For example, Kim et al. (2020) generated a balanced training dataset by creating adversarial examples from the majority class and treating them as minority samples.

Although many approaches have been proposed to utilize data augmentation methods to generate various information about minority samples, relatively few works have considered the influence of the degree of augmentation of different classes on class imbalance problems. In particular, few detailed observations have been made as to which classes should be augmented and how intensively. To this end, we first consider that controlling the strength of class-wise augmentation can provide another dimension to mitigate the class imbalance problem. In this paper, we use the number of augmentation operations and their magnitude to control the extent of the augmentation, which we refer to herein as its strength; e.g., a strength parameter of 2 means that two randomly sampled operations with a pre-defined magnitude index of 2 are used.

Our key finding is that class-wise augmentation improves performance in the non-augmented classes, while that of the augmented classes may not be significantly improved and, in some cases, may even decrease. As described in Figure 1, regardless of whether a given dataset is class imbalanced, conventional class imbalance methods show similar trends: when only the major classes are strongly augmented (e.g., strength 4), the performance of the majority classes decreases, whereas the minority classes obtain better results. To explain this finding, we further find that strongly augmented classes obtain more diversified feature representations, which prevents the growth of the linear classifier norm for the corresponding classes. As a result, the softmax outputs of the strongly augmented classes are reduced, and thus the accuracy of those classes decreases. This is described in Appendix A. This result motivates us to find the proper augmentation strength for each class to improve the performance for other classes while maintaining its own performance.

Contribution. We propose a simple algorithm called CUrriculum of Data Augmentation (CUDA) to find the proper class-wise augmentation strength for long-tailed recognition. Based on our motivation, we have to increase the augmentation strength of the majorities for the performance of the minorities when the model successfully predicts the majorities. On the other hand, we have to lower the strength of the majorities when the model makes wrong predictions about them. The proposed method consists of two modules, which compute a level-of-learning score for each class and leverage the score to determine the augmentation. Therefore, CUDA increases the augmentation strength of classes that the trained model predicts successfully and decreases it for classes that it predicts wrongly. To the best of our knowledge, this work is the first to suggest a class-wise augmentation method that finds a proper augmentation strength for the class imbalance problem. We empirically examine the performance of CUDA on synthetically imbalanced datasets such as CIFAR-100-LT (Cao et al., 2019) and ImageNet-LT (Liu et al., 2019), and on a real-world benchmark, iNaturalist 2018 (Van Horn et al., 2018).
With the high compatibility of CUDA, we apply our framework to various long-tailed recognition methods and achieve better performance compared to the existing long-tailed recognition methods. Furthermore, we conduct an extensive exploratory analysis to obtain a better understanding of CUDA. The results of these analyses verify that CUDA exhibits two effects that mitigate class imbalance, including its balanced classifier and improved feature extractor. 2 RELATED WORKS Long-tailed Recognition (LTR). The datasets with class imbalances can lead DNNs to learn biases toward training data, and their performance may decrease significantly on the balanced test data. To improve the robustness of such models to imbalance, LTR methods have been evolving in two main directions: (1) reweighting (Cui et al., 2019; Cao et al., 2019; Park et al., 2021) methods that reweight the loss for each class by a factor inversely proportional to the number of data points, and (2) resampling methods (Kubat et al., 1997; Chawla et al., 2002; Ando & Huang, 2017) that balance the number of training samples for each class in the training set. However, studies along these lines commonly sacrifice performance on majority classes to enhance that on minority classes, because the overfitting problem occurs with limited information on minority classes as a result of increasing the weight of a small number of minority samples. Several methods have recently been developed to alleviate the overfitting issues in various categories: (1) two-stage training (Cao et al., 2019; Kang et al., 2020; Liu et al., 2019), (2) ensemble methods (Zhou et al., 2020a; Xiang et al., 2020; Wang et al., 2021; Cai et al., 2021), and (3) contrastive learning approach (Kang et al., 2021; Cui et al., 2021; Zhu et al., 2022; Li et al., 2022a;b). To re-balance the classifier layers after achieving a good representation on the imbalanced training dataset in an early phase, Cao et al. (2019) proposed deferred resampling (DRS) and reweighting (DRW) approaches. Kang et al. (2020) decoupled the learning procedure into representation learning and training linear classifier, achieved higher performance than previous balancing methods. Wang et al. (2021) and Cai et al. (2021) suggested efficient ensemble methods using multiple experts with a routing module and a shared architecture for experts to capture various representations. Liu et al. (2022) found that self-supervised representations are more robust to class imbalance than supervised representations, and some works have developed supervised contrastive learning methods (Khosla et al., 2020) for imbalanced datasets (Cui et al., 2021; Zhu et al., 2022; Li et al., 2022b). Another line of research has considered augmentation methods in terms of both input and feature spaces (Kim et al., 2020; Chu et al., 2020; Li et al., 2021). Recently, Park et al. (2022) mixed minority and majority images by using CutMix with different sampling strategies to enhance balancing and robustness simultaneously. These methods commonly focus on utilizing the rich context of majority samples to improve the diversity of minority samples. Zhou et al. (2022) proposed an augmentation-based contrastive learning method which boosts memorization of each samples for long-tailed learning. Moreover, these augmentation-based methods are relatively in easy to apply orthogonally with other LTR methods. Data Augmentation (DA). DA has been studied to mitigate overfitting which may occur due to a lack of data samples. 
Some works have been proposed to erase random parts of images to enhance the generalization performance of neural networks (DeVries & Taylor, 2017; Zhong et al., 2020; Kumar Singh & Jae Lee, 2017; Choe & Shim, 2019). Recently, variants of MixUp (Zhang et al., 2018) have been proposed; this method combines two images with specific weights (Tokozume et al., 2018; Guo et al., 2019; Takahashi et al., 2018; DeVries & Taylor, 2017; Verma et al., 2019). By aggregating two approaches, CutMix (Yun et al., 2019) was proposed to erase and replace a small rectangular part of an image into another image. In another line of research, methods have been proposed to automatically configure augmentation operations (Cubuk et al., 2019; Lim et al., 2019; Li et al., 2020b; Hataya et al., 2020; Gudovskiy et al., 2021). In addition, Cubuk et al. (2020) randomly selected augmentation operations using the given hyperparameters of the number of sampling augmentation and their magnitudes. Recently, class-wise or per-sample auto-augmentation methods have also been proposed (Cheung & Yeung, 2021; Rommel et al., 2022). 3 CURRICULUM OF DATA AUGMENTATION FOR LONG-TAILED RECOGNITION The core philosophy of CUDA is to “generate an augmented sample that becomes the most difficult sample without losing its original information.” In this section, we describe design of CUDA in terms of two parts: (1) a method to generate the augmented samples based on the given strength parameter, and (2) a method to measure a Level-of-Learning (LoL) score for each class. 3.1 PROBLEM FORMULATION OF LONG-TAILED RECOGNITION Suppose that the training dataset D = {(xi, yi)}Ni=1 is composed of images with size d, xi ∈ Rd, and their corresponding labels yi ∈ {1, ..., C}. Dc ⊂ D is a set of class c, i.e., Dc = {(x, y)|y = c, (x, y) ∈ D}. Without loss of generality, we assume |D1| ≥ |D2| ≥ · · · ≥ |DC |, where |D| denotes the cardinality of the set D. We denote the Nmax := |D1| and Nmin := |DC |. LTR algorithms, ALTR(fθ,D), mainly focus on training the model fθ with parameter θ when the class distribution of training dataset Ptrain(y) and test dataset Ptest(y) are not identical. More precisely, Ptrain(y) is highly imbalanced while Ptest(y) is balanced, i.e., uniform distribution. 3.2 CURRICULUM OF DATA AUGMENTATION In this section, we describe our proposed DA with strength parameter, and the methods used to measured the LoL score. Then, we integrate the two methods in a single framework to propose CUDA. DA with a strength parameter. Let us assume that there exist pre-defined K augmentation operations. We utilize visual augmentation operations which is indexed as k ∈ {1, · · · ,K}, e.g., Gaussian blur, Rotation, Horizontal flip. Each augmentation operation Omk(s)k : Rd → Rd has its own predefined augmentation magnitude function mk(s) where the strength parameter s ∈ {0, ..., S}. These operations are described in detail along with each magnitude functions in Appendix D. Given an augmentation strength parameter s and an input image x, we model a sequence of augmentation operations O(x; s) as follows: O(x; s) = Omks (s)ks ◦ O mks−1 (s) ks−1 ◦ · · · ◦ Omk1 (s)k1 (x), ki ∼ Cat(K,U(K)) ∀i = {1, . . . , s}, where, Cat(·) and U(·) denote categorical and discrete uniform distributions, respectively. The sequential augmentation operation O(x; s) samples s operations from the categorical distribution when the probability of seeing the operations follows uniform distribution. 
As depicted on the left side of Figure 2, suppose that the randomly sampled augmentations $k_1$, $k_2$, and $k_3$ are brightness, X-shift, and Y-shift, respectively. Then, $\mathcal{O}(x; 3)$ outputs an image in which the brightness is raised by $m_{\mathrm{bright}}(3)$ and which is shifted by $m_{\mathrm{x\text{-}shift}}(3)$ on the x-axis and by $m_{\mathrm{y\text{-}shift}}(3)$ on the y-axis.

Algorithm 1: CUrriculum of Data Augmentation
  Input: LTR algorithm $\mathcal{A}_{\mathrm{LTR}}(f, \mathcal{D})$, training dataset $\mathcal{D} = \{(x_i, y_i)\}_{i=1}^{N}$, training epochs $E$, augmentation probability $p_{\mathrm{aug}}$, threshold $\gamma$, number-of-samples coefficient $T$
  Output: trained model $f_\theta$
  Initialize: $L_c^0 = 0$ for all $c \in \{1, \ldots, C\}$
  for $e \leq E$ do
    Update $L_c^e = V_{\mathrm{LoL}}(\mathcal{D}_c, L_c^{e-1}, f_\theta, \gamma, T)$ for all $c$   // Algorithm 2
    Generate $\mathcal{D}_{\mathrm{CUDA}} = \{(\bar{x}_i, y_i) \mid (x_i, y_i) \in \mathcal{D}\}$, where $\bar{x}_i = \mathcal{O}(x_i; L_{y_i}^e)$ with probability $p_{\mathrm{aug}}$ and $\bar{x}_i = x_i$ otherwise
    Run the LTR algorithm on $\mathcal{D}_{\mathrm{CUDA}}$, i.e., $\mathcal{A}_{\mathrm{LTR}}(f_\theta, \mathcal{D}_{\mathrm{CUDA}})$
  end

Algorithm 2: $V_{\mathrm{LoL}}$ — update the LoL score
  Input: $\mathcal{D}_c$, $L$, $f_\theta$, $\gamma$, $T$
  Output: updated $L$
  Initialize: check = 1
  for $l \leq L$ do   /* $V_{\mathrm{Correct}}(\mathcal{D}_c, l, f_\theta, T)$ */
    Sample $\mathcal{D}'_c \subset \mathcal{D}_c$ such that $|\mathcal{D}'_c| = T(l + 1)$
    Compute $v = \sum_{x \in \mathcal{D}'_c} \mathbb{1}\{f_\theta(\mathcal{O}(x; l)) = c\}$
    if $v \leq \gamma T(l + 1)$ then check $\leftarrow$ 0; break
  end
  if check = 1 then $L \leftarrow L + 1$ else $L \leftarrow L - 1$

Level-of-Learning (LoL). To control the strength of augmentation properly, we check whether the model can correctly predict augmented versions without losing the original information. To enable this, we define the LoL for each class $c$ at epoch $e$, i.e., $L_c^e$, which is adaptively updated as training continues, as follows:

$$L_c^e = V_{\mathrm{LoL}}(\mathcal{D}_c, L_c^{e-1}, f_\theta, \gamma, T), \quad V_{\mathrm{LoL}}(\mathcal{D}_c, L_c^{e-1}, f_\theta, \gamma, T) = \begin{cases} L_c^{e-1} + 1 & \text{if } V_{\mathrm{Correct}}(\mathcal{D}_c, l, f_\theta, T) \geq \gamma T(l+1) \;\; \forall l \in \{0, \ldots, L_c^{e-1}\} \\ L_c^{e-1} - 1 & \text{otherwise.} \end{cases}$$

Here, $\gamma \in [0, 1]$ is a threshold hyperparameter, and $T$ is a coefficient for the number of samples used to update the LoL score. $V_{\mathrm{Correct}}$ is a function that outputs the number of examples correctly predicted by the model $f_\theta$ among the $T(l + 1)$ randomly augmented samples with strength $l$. $V_{\mathrm{Correct}}$ is defined as:

$$V_{\mathrm{Correct}}(\mathcal{D}_c, l, f_\theta, T) = \sum_{x \in \mathcal{D}'_c} \mathbb{1}\{f_\theta(\mathcal{O}(x; l)) = c\}, \quad \text{where } \mathcal{D}'_c \subset \mathcal{D}_c.$$

Note that $\mathcal{D}'_c$ is a randomly sampled subset of $\mathcal{D}_c$ with replacement, and its size is $T(l + 1)$. The key philosophy of this criterion is twofold. (1) If the samples in class $c$ have been trained sufficiently with an augmentation strength of $L_c^e$, the model is ready to learn a more difficult version with augmentation strength $L_c^{e+1} \leftarrow L_c^e + 1$. In contrast, if the model predicts incorrectly, it should re-learn the easier samples with an augmentation strength of $L_c^{e+1} \leftarrow L_c^e - 1$. (2) As the strength parameter increases, the number of candidates for the sequential augmentation operation $\mathcal{O}(x; L)$ increases exponentially. For example, the increment is $K^L(K - 1)$ when $L$ increases to $L + 1$. To control the LoL over this large space of sequential augmentation operations, we take more random samples to check as the strength parameter becomes larger. In our experiments, linearly increasing the number of evaluated samples with the strength was sufficient and added only a small amount of computation time. $V_{\mathrm{LoL}}$ is described in Figure 2 and Algorithm 2.

Curriculum of DA. By combining the two components, DA with a strength parameter and LoL, CUDA provides class-wise adaptive augmentation that enhances the performance of the other classes without losing each class's own information. As shown in Figure 2 and Algorithm 1, we measure the LoL score $L_c$ for all classes in the training dataset to determine the augmentation strength for every epoch. Based on $L_c$, we generate the augmented version $\mathcal{O}(x; L_c)$ for $x \in \mathcal{D}_c$ and train the model with the augmented samples.
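As a concrete reference, here is a minimal Python sketch of Algorithms 1 and 2 (the per-class LoL update and the per-epoch augmentation step), reusing the `sequential_augment` sketch above; `model_predicts`, `run_ltr_epoch`, the dataset interfaces, and the clamp of the LoL score at zero are assumed placeholders, not the authors' implementation.

```python
import random

def update_lol(samples_c, L, model_predicts, class_idx, gamma, T):
    """Algorithm 2 (V_LoL): raise or lower the LoL score of one class."""
    for l in range(L + 1):
        # V_Correct: count correct predictions on T*(l+1) augmented samples of strength l
        subset = random.choices(samples_c, k=T * (l + 1))   # sampling with replacement
        v = sum(model_predicts(sequential_augment(x, l)) == class_idx for x in subset)
        if v <= gamma * T * (l + 1):
            return max(L - 1, 0)       # failed at some strength -> decrease (clamped at 0)
    return L + 1                        # passed every strength up to L -> increase

def cuda_training(dataset_by_class, model_predicts, run_ltr_epoch,
                  epochs, p_aug=0.5, gamma=0.6, T=10):
    """Algorithm 1: per-epoch LoL update followed by augmented LTR training."""
    lol = {c: 0 for c in dataset_by_class}
    for _ in range(epochs):
        for c, samples_c in dataset_by_class.items():
            lol[c] = update_lol(samples_c, lol[c], model_predicts, c, gamma, T)
        augmented = []
        for c, samples_c in dataset_by_class.items():
            for x in samples_c:
                # with probability p_aug, train on O(x; L_c); otherwise keep the original
                x_aug = sequential_augment(x, lol[c]) if random.random() < p_aug else x
                augmented.append((x_aug, c))
        run_ltr_epoch(augmented)        # any LTR algorithm consumes D_CUDA unchanged
    return lol
```

Because the LoL update only queries the trained model on a small random subset per class, it can be bolted onto an existing LTR training loop without modifying its loss or sampler.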
Additionally, we randomly use the original sample instead of the augmented sample with probability $p_{\mathrm{aug}}$ so that the trained model does not forget the original information. In our experiments, this operation improved performance robustly over a wide range of $p_{\mathrm{aug}}$ values. The results are provided in Section 4.3.

Advantage of CUDA design. Our proposed approach has three main advantages. (1) CUDA adaptively finds proper augmentation strengths for each class without the need for a validation set. (2) Following the spirit of existing curriculum learning methods (Hacohen & Weinshall, 2019; Zhou et al., 2020b; Wu et al., 2021), CUDA presents easier examples earlier during training to improve generalization, which encourages the model to better learn difficult samples (i.e., samples with high augmentation strength). (3) Moreover, owing to the universality of data augmentation, CUDA is easily compatible with other LTR algorithms, such as those of Cao et al. (2019), Ren et al. (2020), and Wang et al. (2021).

4 EXPERIMENTS

In this section, we present an empirical evaluation whose results demonstrate the superior performance of our proposed algorithm under class imbalance. We first describe the long-tailed classification benchmarks and implementations in detail (Section 4.1). Then, we describe the experimental results on several synthetic (CIFAR-100-LT, ImageNet-LT) and real-world (iNaturalist 2018) long-tailed benchmark datasets in Section 4.2. Moreover, we conduct additional experiments to obtain a better understanding of CUDA, and this analysis is provided in Section 4.3.

4.1 EXPERIMENTAL SETUP

Datasets. We evaluate CUDA on the most commonly used long-tailed image classification tasks: CIFAR-100-LT (Cao et al., 2019), ImageNet-LT (Liu et al., 2019), and iNaturalist 2018 (Van Horn et al., 2018). CIFAR-100-LT and ImageNet-LT are made class-imbalanced by synthetically subsampling the training samples. CIFAR-100-LT is examined with various imbalance ratios {100, 50, 10}, where the imbalance ratio is defined as $N_{\max}/N_{\min}$. iNaturalist 2018 is a large-scale real-world dataset that includes a natural long-tailed imbalance. We utilize the officially provided datasets.

Baselines. We compare CUDA with previous long-tailed learning algorithms, including cross-entropy loss (CE); two-stage approaches: CE-DRW (Cao et al., 2019) and cRT (Kang et al., 2020); balanced loss approaches: LDAM-DRW (Cao et al., 2019) and Balanced Softmax (BS; Ren et al. 2020); the ensemble method RIDE with three experts (Wang et al., 2021); resampling algorithms: Remix (Chou et al., 2020) and CMO (Park et al., 2022); and the contrastive learning-based approach BCL (Zhu et al., 2022). We integrate CUDA with the CE, CE-DRW, LDAM-DRW, BS, RIDE, and BCL algorithms. For longer training schedules, we compare CUDA with PaCo (Cui et al., 2021), BCL, and NCL (Li et al., 2022a) by combining CUDA with BCL and NCL. For a fair comparison of the computational cost, we train the network with the official one-stage implementation of RIDE (i.e., without distillation and routing).

Implementation. For the CIFAR-100-LT dataset, almost all implementations follow the general setting of Cao et al. (2019), whereas cRT (Kang et al., 2020), BCL, NCL, and RIDE follow the settings used in their original implementations. Following Cao et al. (2019), we use ResNet-32 (He et al., 2016) as the backbone network for CIFAR-100-LT. The network is trained with SGD with a momentum of 0.9 and a weight decay of $2 \times 10^{-4}$.
The initial learning rate is 0.1, and a linear learning rate warm-up is used in the first 5 epochs to reach the initial learning rate. During training over 200 epochs, the learning rate is decayed at the 160th and 180th epochs by 0.01. For ImageNet-LT and iNaturalist, ResNet-50 is used as the backbone network and is trained for 100 epochs. The learning rate is decayed at the 60th and 80th epochs by 0.1. As with CIFAR, for cRT, RIDE, and BCL, we follow the original experimental settings of the officially released code. For the hyperparameter values of CUDA, we apply $p_{\mathrm{aug}} = 0.5$ and $T = 10$ for all experiments. For $\gamma$, we set the value to 0.6 for CIFAR-100-LT and 0.4 for ImageNet-LT and iNaturalist 2018. The detailed implementations of the baselines are given in Appendix B.

4.2 EXPERIMENTAL RESULTS

In this section, we report the performance of the compared methods on CIFAR-100-LT, ImageNet-LT, and iNaturalist 2018. We include four different categories of accuracy: all, many, med(ium), and few. These represent the average accuracy over all samples, over classes containing more than 100 samples, over classes with 20 to 100 samples, and over classes with fewer than 20 samples, respectively.

CIFAR-100-LT. In Table 1, we report the performance when CUDA is applied to the various algorithms: CE, CE-DRW (Cao et al., 2019), LDAM-DRW (Cao et al., 2019), BS (Ren et al., 2020), RIDE (Wang et al., 2021) with 3 experts, RIDE+CMO (Park et al., 2022), and BCL (Zhu et al., 2022). Compared to the cases without CUDA, the balanced validation performance increases when we apply the proposed approach. Recently, some works (Cui et al., 2021; Alshammari et al., 2022; Zhu et al., 2022; Li et al., 2022a) have shown impressive performance with diverse augmentation strategies and longer training epochs. For a fair comparison with these methods, we examine CUDA using the same experimental setup as PaCo (Cui et al., 2021; 400 epochs with a batch size of 64). Table 3 shows that images augmented with CUDA can enhance LTR performance compared to the other baselines. In particular, CUDA with NCL obtains the best performance over 400 epochs. As noted by Li et al. (2022a), the NCL algorithm uses six times as much memory as the vanilla architecture with three experts. Hereafter, for large-scale benchmarks, we focus on cases with similar network sizes.

ImageNet-LT and iNaturalist 2018. To evaluate the performance of CUDA on larger datasets, we conduct experiments on ImageNet-LT (Liu et al., 2019) and iNaturalist 2018 (Van Horn et al., 2018). Table 2 summarizes the performance of various LTR methods and the performance gain when integrated with CUDA. Our proposed method consistently improves performance regardless of the LTR method and target dataset by simply adding class-wise data augmentation, without complicated methodological modification. Additionally, to evaluate the performance gain of CUDA on other architectures, we experiment with CUDA on ImageNet-LT with ResNet-10 (Liu et al., 2019) and ResNeXt-50 (Xie et al., 2017), as reported in Appendix C.

4.3 ANALYSIS

We design our analyses to answer the following questions: (1) How does CUDA perform? (2) Does CUDA perform better than other augmentation methods? (3) How does the LoL score change over training epochs when combined with various LTR methods? (4) Which part of CUDA is important for the improved performance? These analyses provide additional explanations for understanding CUDA. All experiments are conducted on CIFAR-100-LT with an imbalance ratio of 100.

How does CUDA mitigate the class imbalance problem?
To deeply understand CUDA, we observe two types of metrics: (1) the variance of the per-class L1 norm of the linear classifier weights, and (2) the feature alignment gain for each class (i.e., the cosine similarity with and without CUDA) on the validation dataset. The classifier weight norm is commonly used to measure how balanced the model considers the input from a class-wise perspective (Kang et al., 2020; Alshammari et al., 2022). Feature alignment, in particular the feature cosine similarity among samples belonging to the same class, measures the extent to which the extracted features are aligned (Oh et al., 2021). As shown in Figure 3, CUDA has two forces for alleviating imbalance. First, in all cases, CUDA reduces the variance of the weight norm (i.e., balances the weight norm), and thus the trained model considers the minority classes in a more balanced manner. Note that because LDAM-DRW and RIDE utilize a cosine classifier (i.e., L2-normalized linear weights), their standard deviation scale is quite different from that of the other methods. Because LDAM-DRW, BS, and RIDE include balancing logic in their loss functions, they exhibit a lower variance reduction compared to CE and CE-DRW. Second, as shown in the bottom row of Figure 3, CUDA obtains feature alignment gains for almost all classes. This shows that CUDA helps the network learn to extract meaningful features.

Compared with other augmentations. To verify the impact of CUDA, we examine other augmentation methods as follows. We compare five augmentation methods: AutoAugment (AA, Cubuk et al. 2019), Fast AutoAugment (FAA, Lim et al. 2019), DADA (Li et al., 2020b), RandAugment (RA, Cubuk et al. 2020), and the proposed method CUDA. Because AA, FAA, and DADA provide policies searched on CIFAR, SVHN (for AA), and ImageNet, we leverage their published policies. Furthermore, RA suggests using the parameters (n, m) = (1, 2) for CIFAR, and we follow this guideline. As shown in Table 4, even though the automated augmentation methods use additional computation resources for searching, CUDA outperforms the other pre-searched augmentations. This shows that CUDA is computationally efficient.

Dynamics of LoL score. We evaluate how the LoL scores vary across algorithms: CE, CE-DRW, LDAM-DRW, BS, and RIDE. Note that a lower class index (i.e., 0) corresponds to the most common class (i.e., 500 samples), while an index of 100 represents the rarest class (i.e., five samples). As described in Figure 4, as training progresses, the LoL scores of all algorithms increase. After the learning rate decay (i.e., epoch 160), all algorithms are able to learn to classify minority classes more easily than before. In particular, except for BS, the majority classes of most algorithms show a steep increase. The reason that BS exhibits a similar rate of increase for majority and minority classes is that it includes a module to balance the impact of majority and minority samples. Furthermore, we find that CE-DRW and BS reach similar final average accuracy when CUDA is applied but show different LoL score dynamics. From the observation that CE-DRW has a higher performance gain for many and a lower gain for few than BS, we can conclude that the LoL score of one category of classes is highly correlated with the performance of the opposite category.
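For reference, below is a minimal PyTorch-style sketch of the two balance metrics used at the start of this analysis (per-class classifier weight norm variance and intra-class feature cosine similarity); the tensor shapes and function names are assumptions for illustration, not the paper's exact code.

```python
import torch
import torch.nn.functional as F

def classifier_norm_variance(classifier_weight: torch.Tensor) -> torch.Tensor:
    """Variance of the per-class L1 norm of a linear classifier.

    classifier_weight: [num_classes, feature_dim] weight matrix.
    A smaller variance indicates a more balanced classifier.
    """
    per_class_norm = classifier_weight.abs().sum(dim=1)   # L1 norm per class
    return per_class_norm.var()

def intra_class_alignment(features: torch.Tensor, labels: torch.Tensor, num_classes: int):
    """Mean pairwise cosine similarity among same-class features, per class."""
    feats = F.normalize(features, dim=1)                   # unit-norm feature vectors
    scores = []
    for c in range(num_classes):
        fc = feats[labels == c]
        if fc.size(0) < 2:
            scores.append(float("nan"))
            continue
        sim = fc @ fc.t()                                   # pairwise cosine similarities
        off_diag = sim[~torch.eye(fc.size(0), dtype=torch.bool)]
        scores.append(off_diag.mean().item())
    return scores
```

The feature alignment gain reported in Figure 3 would then be the difference of `intra_class_alignment` computed on features extracted with and without CUDA.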
Parameter sensitivity. For further analysis, we conduct a sensitivity analysis of the hyperparameters in CUDA. More precisely, we study three parameters: the augmentation probability $p_{\mathrm{aug}}$ (Figure 5a), the number of test samples $T$ (Figure 5b), and the LoL update threshold $\gamma$ (Figure 5c). We examine the sensitivity of each hyperparameter for CUDA with RIDE, with the remaining hyperparameters fixed to the default values in Section 4.1. All results show that the performance gains of CUDA decrease if the parameters are adjusted to make the augmentation too strong or too weak. For example, the augmentation strength of all classes increases steeply when $\gamma$ becomes small, whereas the strength cannot increase when $\gamma$ becomes large, and thus the performance of the model cannot improve. Moreover, as shown in Figure 5b, the performance of CUDA increases as $T$ increases. However, a larger $T$ incurs computational overhead, so we set $T$ to 10 and obtained a cost-effective performance gain.

Impact of curriculum. In addition to studying the overall impact of CUDA, we examine its performance component-wise. In particular, we test the case where the class-wise augmentation strength is searched with a hyperparameter optimization algorithm. We check five cases overall: the baseline algorithm, hyperparameter optimization (HO), re-searched DADA for CIFAR-100-LT, CUDA without curriculum (i.e., re-training using the final augmentation strengths found by CUDA), and CUDA. We provide a detailed description of each method in Appendix E. As described in Figure 5d, CUDA finds better augmentation strengths compared to the hyperparameter search case. This means that CUDA not only has a lower search time but also obtains better augmentation strengths. Moreover, comparing the performance with and without the curriculum shows that the curriculum itself provides an additional advantage, helping the model achieve better generalization. Additionally, as shown in Figure 4, a lower augmentation strength at the beginning of training is more effective than a static, higher augmentation strength. These results are consistent with previous studies on curriculum learning methods (Zhou et al., 2020b).

5 CONCLUSION

In this study, we proposed CUDA to address the class imbalance problem. The proposed approach is also compatible with existing methods. To design a proper augmentation for LTR, we first studied the impact of augmentation strength on LTR. We found that the augmentation strength for one type of class (e.g., major classes) can affect the performance of the other type (e.g., minor classes). Based on this finding, we designed CUDA to adaptively find an appropriate augmentation strength without any further search phase, by measuring the LoL score at each epoch and determining the augmentation accordingly. To verify the superior performance of the proposed approach, we evaluated it with various methods on both synthetically generated and real-world benchmarks and obtained the best performance among the compared methods. Furthermore, our analyses validated that CUDA enhances both the classifier balance and the feature extraction ability, which consistently improves performance for majority and minority classes.

ACKNOWLEDGEMENT

This work was supported by the Institute of Information & communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No. 2019-0-00075, Artificial Intelligence Graduate School Program (KAIST), 10%) and the Institute of Information & communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT)
(No. 2022-0-00871, Development of AI Autonomy and Knowledge Enhancement for AI Agent Collaboration, 90%).

Appendix

CUDA: Curriculum of Data Augmentation for Long-tailed Recognition

Owing to the page limitation of the main manuscript, we provide detailed information in this supplementary material as follows. (1) In Appendix A, we summarize the experimental setup of Figure 1 and further explain why augmentation on one side causes performance degradation on the opposite side. (2) In Appendix B, we describe our experimental setting in detail, including the dataset configuration, data preprocessing, and training implementation. (3) In Appendix C, we show ImageNet-LT performance on networks of different sizes and architectures, a training time analysis, and accuracy in the balanced dataset case. (4) In Appendix D, we present in detail the augmentation operations that CUDA utilizes. (5) In Appendix E, we describe the experimental setting of Figure 5d.

A DETAIL FOR FIGURE 1

A.1 EXPERIMENTAL SETTINGS

Major and minor group decomposition. To check the impact of augmentation on majority and minority classes, we split the training dataset into two clusters. The majority cluster consists of the top 50 classes when sorted by the number of samples per class; the bottom 50 classes form the minority cluster. For simplicity, we use class indices 0 to 49 as the majority and 50 to 99 as the minority, respectively. For the balanced case, we use classes 0 to 49 as cluster 1 and the others as cluster 2.

Controlling augmentation strength. We set the augmentation strength as the number of augmentations and their magnitude, following the augmentation rule of CUDA. For example, samples in the majority classes with strength parameter 4 are augmented with four randomly sampled augmentations, each with its own pre-defined magnitude.

Training setting. For the heatmaps in Figure 1, we follow the training recipe of CIFAR-100-LT for the CE case, e.g., ResNet-32, a learning rate of 0.1, and so on. Further details, hyperparameters, and datasets are described in Section 4 and Appendix B.

[Figure 6: Analysis on Balanced CIFAR-100 — class-wise train cosine similarity, test cosine similarity, and classifier weight norm (y-axes) over class index 0–100 (x-axis), for the Without Augment, Partial Augment, and All Augment settings.]
[Figure 7: Analysis on CIFAR-100-LT (IR 100) — the same three panels as Figure 6.]

A.2 ANALYSIS

Analysis for Figure 1. To identify the reason for the phenomena in Figure 1, we conduct further analysis, as shown in Figure 6 and Figure 7. Our experimental setups are as follows:
• Train the networks with each of three augmentation strategies (without, partial, and all), then measure the class-wise feature alignment and linear classifier weight norm for all networks (Experiment 1).
• From the network trained without augmentation in Experiment 1, freeze the feature extractor and train the linear classifier layer while augmenting only a subset of classes. Then, measure the class-wise L1 norm of the linear classifier (Experiment 2).

From Figure 6 and Figure 7, we make the following observations from Experiment 1:
1. When we augment only a subset of classes (classes 0–49), the feature alignment of the augmented classes on the training dataset is degraded compared to that of the non-augmented classes. This is because the augmented classes have more diversified training data than the non-augmented classes, which leads to more diversification in feature space. We observe balanced alignment across classes in the cases without augmentation and with all augmentation, since all classes have similar diversity. (See the first rows of Figures 6 and 7.)
2. However, all three augmentation strategies yield balanced class-wise feature alignment on the same test dataset. This tendency is observed for both balanced and imbalanced datasets, which is consistent with Kang et al. (2020). Furthermore, the feature alignment values increase when we augment partially or fully, compared to no augmentation. This shows that augmentation enhances the feature extraction ability, which is consistent with conventional studies. (See the second rows of Figures 6 and 7.)
3. When we augment only a subset of classes on a balanced dataset, the class-wise weight norm of the linear classifier is larger for the non-augmented classes. This leads to a performance improvement for non-augmented classes and a reduction for augmented classes, since the linear classifier tends to classify non-augmented classes with larger weight values. However, the class-wise weight norms are balanced in the “without augmentation” and “all augmentation” cases. (See the third row of Figure 6.)
4. We observe that the class-wise weight norm of the linear classifier is larger for the majority classes when all classes have the same augmentation strength. These results are consistent with previous works (Kang et al., 2020; Alshammari et al., 2022). However, when we augment only the majority classes, the class-wise weight norm becomes more balanced. This phenomenon is similar to the balanced case, in that partial augmentation reduces the norm of the linear classifier for the augmented classes. (See the third row of Figure 7.)

Our observations from Experiment 1 are highly consistent across balanced and imbalanced datasets. The results in Figure 1, Figure 6, and Figure 7 strongly motivate the design of CUDA. Moreover, our results for Experiment 2 can explain these observations, as shown in Figure 8 and Figure 9. We observe that when feature alignment is degraded by augmentation, the corresponding weight norm is relatively small, as shown in Figure 8. This is because, for a class with lower feature alignment, the variation of the gradient for the linear classifier is larger than for a class with high feature alignment. As shown in Figure 9, from Experiment 2 we observe that $\|\Delta w\|$, the class-wise norm of the difference between the current and initial linear classifier parameters, $\Delta w := w - w_0$, has smaller values for augmented classes than for non-augmented classes. From our experimental analysis in Figures 6, 7, and 9, we can conclude that augmentation weakens the consistency of feature alignment and causes the weight norm of the linear classifier to decrease.
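Below is a minimal PyTorch-style sketch of the Experiment 2 measurement (freeze the feature extractor, retrain the linear classifier with partial-class augmentation, and compare the per-class norm of Δw); the split of the model into `backbone`/`classifier`, the optimizer settings, and the `augment_partial` helper are illustrative assumptions.

```python
import torch

def per_class_delta_w(backbone, classifier, train_loader, augment_partial, epochs=10):
    """Freeze the backbone, retrain the linear classifier, and return per-class ||Δw||."""
    w0 = classifier.weight.detach().clone()               # initial classifier weights
    for p in backbone.parameters():
        p.requires_grad = False                            # frozen feature extractor
    opt = torch.optim.SGD(classifier.parameters(), lr=0.1, momentum=0.9)
    ce = torch.nn.CrossEntropyLoss()
    for _ in range(epochs):
        for images, labels in train_loader:
            images = augment_partial(images, labels)       # augment only the chosen classes
            with torch.no_grad():
                feats = backbone(images)                   # features from the frozen backbone
            loss = ce(classifier(feats), labels)
            opt.zero_grad()
            loss.backward()
            opt.step()
    delta_w = classifier.weight.detach() - w0
    return delta_w.norm(p=1, dim=1)                        # per-class L1 norm of Δw
```

Comparing the returned vector between augmented and non-augmented classes reproduces the kind of observation reported for Figure 9.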
B IMPLEMENTATION DETAIL IN SECTION 4

B.1 DATASET DESCRIPTION

CIFAR-100-LT. CIFAR-100-LT is a subset of CIFAR-100. Following Wang et al. (2021), Park et al. (2022), and Zhu et al. (2022), we use the same long-tailed version for a fair comparison. The number of samples in the $k$-th class is determined as follows: (1) compute the imbalance factor $N_{\max}/N_{\min}$, which reflects the degree of imbalance in the data; (2) $|\mathcal{D}_k|$ between $|\mathcal{D}_1| = N_{\max}$ and $|\mathcal{D}_{100}| = N_{\min}$ follows an exponential decay, i.e., $|\mathcal{D}_k| = |\mathcal{D}_1| \times (N_{\max}/N_{\min})^{-k/100}$. The imbalance factors used in the experiments are set to 100, 50, and 10.

ImageNet-LT. ImageNet-LT (Liu et al., 2019) is a modified version of the large-scale real-world dataset ImageNet (Russakovsky et al., 2015). Subsampling is conducted following a Pareto distribution with power value $\alpha = 0.6$. It consists of 115.8K images from 1,000 classes in total. The most common and rarest classes contain 1,280 and 5 images, respectively.

iNaturalist 2018. iNaturalist (Van Horn et al., 2018) is a large-scale real-world dataset consisting of 437.5K images from 8,142 classes. It has a long-tailed property by nature, with extreme class imbalance. In addition to long-tailed recognition, this dataset is also used for evaluating fine-grained classification.

B.2 DATA PREPROCESSING

For data preprocessing, we follow the default settings of Cao et al. (2019). For CIFAR-100-LT, each side of the image is padded with 4 pixels, and a 32 × 32 crop is randomly selected from the padded image or its horizontal flip. For ImageNet-LT and iNaturalist 2018, after resizing each image so that the shorter side is 256 pixels, a 224 × 224 crop is randomly sampled from the image or its horizontal flip. For BCL and NCL, which use AutoAugment (Cubuk et al., 2019) or RandAugment (Cubuk et al., 2020) as default data augmentation, we apply these after random cropping, following their original papers (Zhu et al., 2022; Li et al., 2022a). We then apply CUDA after all default augmentation operations and finally normalize the image with the following mean and standard deviation values: CIFAR-100-LT ((0.4914, 0.4822, 0.4465), (0.2023, 0.1994, 0.2010)), ImageNet-LT ((0.485, 0.456, 0.406), (0.229, 0.224, 0.225)), and iNaturalist 2018 ((0.466, 0.471, 0.380), (0.195, 0.194, 0.192)).

B.3 DETAILED IMPLEMENTATION

Because some official codebases do not release their entire implementations, we re-implement the missing parts, reproducing the code based on the released partial code and the authors' responses.

RIDE. We follow the officially released code². Among the various experimental configurations in the official code (e.g., one-stage RIDE, RIDE-EA, Distill-RIDE), for a fair comparison (to use similar computation resources), we utilize one-stage training (i.e., one-stage RIDE) in all cases. We confirmed from the authors' response that CMO (Park et al., 2022) also utilizes this setup for RIDE + CMO.

CMO. We re-implement all CMO results in our work from the official code³. However, the official code of CMO does not contain code for RIDE + CMO. Therefore, we re-implement it by injecting the CMO components used for BS in the official code (the weighted sampler and mixup parts) into the RIDE code. Furthermore, for iNaturalist 2018, we train the model for 100 epochs for a fair comparison with the other methods (whereas the original RIDE + CMO is trained for 200 epochs on iNaturalist 2018).
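As a small illustration of the CIFAR-100-LT construction described in B.1 above, the following sketch generates per-class sample counts under the exponential decay for a given imbalance ratio; the indexing and rounding conventions are assumptions, so the exact counts may differ slightly from the released splits.

```python
def long_tailed_counts(n_max: int = 500, imbalance_ratio: float = 100.0, num_classes: int = 100):
    """Per-class counts |D_k| = |D_1| * (N_max / N_min)^(-k / num_classes)."""
    return [int(round(n_max * imbalance_ratio ** (-(k + 1) / num_classes)))
            for k in range(num_classes)]

counts = long_tailed_counts()
print(counts[0], counts[-1])   # roughly 477 and 5 under this particular convention
```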
BCL. The officially released code⁴ of BCL only covers ImageNet-LT and iNaturalist 2018. Whereas the official code applies a cosine classifier for ImageNet-LT and iNaturalist 2018, we apply an ordinary linear classifier for CIFAR-100-LT, following the authors' response. All hyperparameters are the same as in the experimental settings of the original work (Zhu et al., 2022).

²https://github.com/frank-xwang/RIDE-LongTailRecognition
³https://github.com/naver-ai/cmo
⁴https://github.com/FlamieZhu/Balanced-Contrastive-Learning

B.4 GUIDELINE FOR HYPER-PARAMETER TUNING

Although we did not tune the hyper-parameters extensively, we provide a guideline for selecting them.

The number of samples for updating LoL (T). This value can be set according to the available computing resources (i.e., the largest T under the computing resource constraint). This is because testing more samples yields a more definite LoL score, so performance improves as T increases.

The acceptance threshold (γ). Our strategy for tuning γ is to select the largest value for which at least one of the LoL scores among all classes increases within 20 epochs. This is because, on large-scale datasets, an overly strict threshold prevents the check from passing even for the easier-to-learn majority classes. The detailed tuning strategy for γ is as follows:
• We initially set γ to 0.6.
• We decrease the threshold γ by 0.1 whenever it fails to raise any of the LoL scores during the first 20 training epochs.
We conduct this search on CE with CIFAR-100-LT (IR 100) and use the same γ value for the other algorithms and the remaining IR settings. We also apply this search rule to ImageNet-LT with CE and use the same value for the other large-scale dataset, i.e., iNaturalist 2018, and the remaining algorithms.

The augmentation probability (paug). While we did not tune this hyper-parameter, we offer a guideline on how to tune it based on Figure 5a. As shown in Figure 5a, the curve of performance against paug is concave. Thanks to this concavity, we believe it is easy to find the optimal value of this hyper-parameter. Note that the reason for the concavity is that the choice of paug trades off preserving the information of the original image against exploring diversified images.

Further sensitivity analysis on ImageNet-LT. In Section 4, we apply different values of γ for CIFAR-100-LT (0.6) and the large-scale datasets (0.4; ImageNet-LT and iNaturalist 2018). In addition to Figure 5, we conduct a further sensitivity analysis for γ on ImageNet-LT to verify that CUDA works robustly with different values of γ on large-scale datasets. As shown in Table 5, our proposed method CUDA is robust to the selection of γ, not only on small datasets such as CIFAR-100-LT but also on large-scale datasets.
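Returning to the γ guideline in B.4 above, here is a small pseudocode-style sketch of that selection rule; the `probe_run` helper and its return value are assumed placeholders.

```python
def select_gamma(probe_run, gamma_init=0.6, step=0.1, probe_epochs=20):
    """B.4 heuristic: pick the largest gamma for which at least one class
    raises its LoL score within the first `probe_epochs` training epochs."""
    gamma = gamma_init
    while gamma > 0:
        # Assumed helper: runs CUDA training for `probe_epochs` epochs with this gamma
        # and returns True if any class's LoL score increased during the run.
        if probe_run(gamma, probe_epochs):
            return gamma
        gamma = round(gamma - step, 2)   # too strict: relax the threshold and retry
    return step                           # fallback if no tested value worked
```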
C FURTHER ANALYSES

Training Time Analysis. CUDA requires additional computation to compute the LoL score. We measure the additional training time incurred by adding CUDA to various algorithms. As shown in Figure 11, utilizing CUDA does add training time; however, the additional operations for searching the LoL score are not costly. For example, BS with CUDA spends ×1.29 the training time to obtain an adequate augmentation strength.

Network Architecture Analysis. We also present our ResNet-10 (Liu et al., 2019) and ResNeXt-50 (Xie et al., 2017) experiments on the ImageNet-LT dataset in Figure 10. These results show that CUDA consistently improves performance regardless of network size and the corresponding LTR method.

[Figure 10: ImageNet-LT accuracy (%) with ResNet-10 and ResNeXt-50 backbones, with and without CUDA, for the CE, CD, LD, and BS algorithms.]
[Figure 11: Training time (min.) with and without CUDA for CE, LDAM, BS, RIDE, and BCL.]

What if CUDA is run on a balanced dataset? We examine the case where CUDA is applied to the balanced setting, i.e., an imbalance ratio of 1. As described in Table 6, CUDA obtains a 1.9% accuracy gain, which is lower than that of the other auto-augmentation methods. However, the other auto-augmentation methods spend more computation time searching for a good augmentation than CUDA. Furthermore, as described in Table 4, CUDA achieves higher performance than the others when a class-imbalanced dataset is given.

D AUGMENTATION PRESET

D.1 DATA AUGMENTATION OPERATIONS USED IN CUDA

There are numerous data augmentation operations in vision tasks. We use a total of 22 augmentations for CUDA, each with its own parameter set. Details of the operation set and parameters are described in Table 7. For the augmentation magnitude parameter $m_k(s)$, we divide each parameter range linearly into thirty values. For example, in the ShearX case, the max and min values are 0.3 and 0, respectively. Therefore, $m_{\mathrm{ShearX}}(s) = (0.3 - 0)/30 \times s$, and thus $m_{\mathrm{ShearX}}(1) = 0.01$.

D.2 FURTHER ANALYSIS ON AUGMENTATION PRESET

To gain further intuition on the effect of the number of predefined augmentation operations, we conduct several exploratory experiments.

Validity of our main finding (Figure 1) under a few predefined augmentations. The observation in Figure 1 arises because the minority classes become relatively easy to learn as the majority classes become more difficult. Therefore, if the samples of the majority classes become sufficiently difficult to learn, the same phenomenon as in Figure 1 occurs regardless of the size of the augmentation preset. To verify that our main finding is valid regardless of the number of predefined augmentations, we conduct the experiment with ten augmentation operations (Mirror, ShearX, Invert, Smooth, ResizeCrop, Color, Brightness, Sharpness, Rotate, AutoContrast). Table 8 describes the performance for (0,0), (0,4), (4,0), and (4,4), where each configuration denotes the augmentation strengths of (majority: top 50 classes, minority: bottom 50 classes). These results verify that the finding in Figure 1 holds even with a small number of predefined augmentation operations.

Effect of the number of predefined augmentations.
We further analyze the impact of the number of predefined augmentation operations (K in Figure 2); we additionally experiment by replacing the augmentation preset in Appendix D with the following two presets: (1) 10 randomly sampled augmentations (Mirror, ShearX, Invert, Smooth, ResizeCrop, Color, Brightness, Sharpness, Rotate, AutoContrast) and (2) the RandAugment (Cubuk et al., 2020) preset, which consists of (AutoContrast, Equalize, Invert, Rotate, Posterize, Solarize, SolarizeAdd, Color, Contrast, Brightness, Sharpness, ShearX, ShearY, CutoutAbs, TranslateXabs, TranslateYabs). Table 9 demonstrates that the accuracy increases slightly as the size of the augmentation preset increases. However, the gap between the RandAugment preset (14 operations) and our original preset (22 operations) is small compared to the gap between the vanilla case (without CUDA) and the RandAugment case. These results support our belief that the impact of the number of predefined augmentations is small.

Table 7: Augmentation operations used in CUDA, with their parameters and descriptions.
Operation | Parameter | Description
Flip | On/Off | Flip top and bottom
Mirror | On/Off | Flip left and right
Edge Enhancement | On/Off | Increase the contrast of the pixels around the targeted edges
Detail | On/Off | Utilize the convolutional kernel [[0, −1, 0], [−1, 10, −1], [0, −1, 0]]
Smooth | On/Off | Utilize the convolutional kernel [[1, 1, 1], [1, 5, 1], [1, 1, 1]]
AutoContrast | On/Off | Remove a specific percentage of the lightest and darkest pixels
Equalize | On/Off | Apply a non-linear mapping to make a uniform distribution
Invert | On/Off | Negate the image
Gaussian Blur | [0, 2] | Blur the image using a Gaussian function
Resize Crop | [1, 1.3] | Resize and apply a centered random crop
Rotate | [0, 30] | Rotate the image
Posterize | [0, 4] | Reduce the number of bits for each channel
Solarize | [0, 256] | Invert all pixel values above a threshold
SolarizeAdd | [0, 110] | Add a value and run Solarize
Color | [0.1, 1.9] | Colorize gray-scale values
Contrast | [0.1, 1.9] | Adjust the distance between colors
Brightness | [0.1, 1.9] | Adjust image brightness
Sharpness | [0.1, 1.9] | Adjust image sharpness
Shear X | [0, 0.3] | Shear along the X-axis
Shear Y | [0, 0.3] | Shear along the Y-axis
Translate X | [0, 100] | Shift along the X-axis
Translate Y | [0, 100] | Shift along the Y-axis

Effect of randomly ordered data augmentation. Our proposed CUDA applies the selected augmentations in a random order, based on the strength of DA. To study the impact of this random ordering, we compare CUDA with a variant of CUDA that applies the augmentations in a fixed order. For example, when the operation indices (6, 3, 5) among the 22 augmentations are sampled, the fixed-order variant applies them as (3, 5, 6). Table 10 shows only small performance differences between the two methods. Thus, we believe that the effect of the augmentation order on the difficulty is negligible. This is because the effectiveness of CUDA is expected to be sufficiently high for any given order of augmentations, since the goal is simply to make the samples harder to learn, regardless of whether the order is fixed or random.

Comparison with random augmentation. To verify that the success of CUDA does not come simply from a richer dataset produced by DA, we compare CUDA with randomly sampled augmentation at every iteration. The comparison methods are Random 5 and Random 10, which denote applying five and ten randomly sampled augmentations at every iteration, respectively. As shown in Table 11, while Random 10 generates the most diversified images, the network trained with it shows the worst performance, even lower than the vanilla baseline. Our CUDA achieves the best performance among all methods.
E EXPERIMENTAL SETTING OF FIGURE 5D

To further analyze the impact of the curriculum, we compare CUDA with previous hyper-parameter search algorithms and auto-augmentation methods, especially DADA (Li et al., 2020b). We describe each setting in detail as follows.

Baseline. This is the case of training with standard data augmentation, which consists of random cropping and probabilistic horizontal flipping.

Hyper-parameter search. We utilize the strength-score-based augmentation module of CUDA to set up the hyper-parameter search. In other words, samples in each class use K augmentation operations, so we search the class-wise augmentation over the search space $K^N$, where N is the number of classes. We leverage the hyper-parameter search open-source library Ray (Liaw et al., 2018) to search the $K^N$ space efficiently. Among the various search modules, we utilize the HyperOptSearch module, which is an implementation of the Tree-structured Parzen Estimator (Bergstra et al., 2013). Moreover, for fast search, we use the Asynchronous Successive Halving Algorithm (ASHA) (Li et al., 2020a). We run 1,000 trials for each algorithm, which takes almost 20 GPU hours (i.e., ×80 overhead compared to CUDA).

Re-searched DADA operations on imbalanced CIFAR. Because the officially provided policies on CIFAR by Li et al. (2020b) were searched on the balanced CIFAR dataset, we have to re-search the augmentation policy for the imbalanced dataset. We utilize the official code of DADA and replace the dataloader to re-search the operations. It takes 48 minutes to search the augmentation policy (×8.6 the overhead of CUDA). Despite this additional overhead, DADA yields worse performance than CUDA (even than CUDA without the curriculum). This is because (1) DADA does not consider class-wise augmentation and (2) it does not consider the impact of class imbalance.

CUDA without curriculum. To verify the impact of the curriculum itself, we run the following steps: (1) we conduct experiments with CUDA and obtain the strength of data augmentation for each class at the final epoch; (2) we re-train the network from scratch using the strength parameters obtained in (1).

F FURTHER ANALYSES

To obtain a better understanding, we conduct several analyses of our proposed method, CUDA.

F.1 FURTHER ANALYSIS ON LOL SCORE

In this section, we conduct experimental ablation studies to understand the performance gain of our proposed method, CUDA.

Suitability of the LoL score as a metric for class-wise difficulty. The advantage of the LoL score is that it measures class-wise difficulty in terms of augmentation strength, which is motivated by our main findings. To verify the suitability of the LoL score as a metric for class-wise difficulty, we compare CUDA with the case where the LoL score is replaced by the score of Sinha et al. (2022). As in our proposed method, we increase the strength parameter when the score of Sinha et al. (2022) is larger than the same threshold γ = 0.6. Table 12 summarizes the results: our LoL score shows a performance improvement compared to the score of Sinha et al. (2022). From these results, we can conclude that the improvement comes from the characteristic of the LoL score that it is directly related to augmentation strength.

Effect of random sampling for computing the LoL score. To compute the LoL score efficiently, we randomly select the instances of each class.
The reason for using random sampling to compute $V_{\mathrm{Correct}}$ is that we want to measure how well the model has learned the overall information of each class. To understand the effect of random sampling, we compare our random sampling method with sampling the instances with the largest (or smallest) losses. Table 13 compares the performance of these sampling strategies. As shown in the results, if CUDA measures the degree of learning using only easy samples (the samples with small losses), it increases the strength of augmentation too quickly and causes performance degradation. Therefore, uniform random sampling is a better way to grasp the degree of learning of each class without bias. Furthermore, computing the loss of all samples in order to sort them at the beginning of each epoch requires ×1.5 the computation of our method.

Numerical values of LoL score dynamics. We provide the numerical values for Figure 4, i.e., the average LoL scores (over every 20 epochs) for the classes with indices 1–10 and the classes with indices 91–100. These numerical values make it easy to follow the explanation discussed in Section 4.

F.2 ANALYSIS OF THE CASE WITHOUT CLASS-WISE AUGMENTATION

To examine the validity of the class-wise augmentation in CUDA, we apply CUDA with the same DA strength for all classes. Instead of computing the LoL score class-wise, we compute only one LoL score for the entire dataset by uniformly sampling instances from the training dataset regardless of class. Table 15 shows the significant performance degradation of CUDA without class-wise augmentation compared to CUDA. This is because, without class-wise augmentation, we cannot allocate the appropriate strength of augmentation to each class.
1. What is the main contribution of the paper regarding class imbalance? 2. How does the proposed method differ from existing approaches, particularly in its use of a curriculum for class-dependent augmentation strength? 3. What are the strengths and weaknesses of the paper's experimental design and results? 4. Do you have any concerns or suggestions regarding the paper's analysis and presentation of results? 5. How might the approach be improved or extended in future works?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper
The paper proposes to use a class-dependent augmentation strategy for tackling the class imbalance problem. The paper initially observes that by using a class-dependent augmentation where only the majority classes are augmented, the performance of the minority classes improves but that of the majority classes regresses. Thus, the paper proposes to learn a curriculum for class-dependent augmentation strength, with the idea being that if the model is performing well on a certain class at an augmentation strength then the strength can be increased, and reduced otherwise. They propose level-of-learning, with which each class computes the augmentation strength to be used for that epoch. At each epoch, it is either incremented or decremented based on the model's performance over a subset of data for that class over all the augmentation strengths up to the current one. Empirically the proposed method achieves a good boost on top of several different existing long-tail tackling approaches over different datasets. To show that their approach leads to a good balance between the majority and minority classes, the authors plot the variance of the weight L1-norm of the linear classifier between each class and show that the variance is lower when using the proposed approach. The authors also compare against using a fixed augmentation strategy such as RandAugment and AutoAugment and show that their method works better than using a fixed augmentation.

Strengths And Weaknesses
Strengths - The paper is very well written and easy to follow along. The motivation in the paper about how doing augmentations for some classes affects the performance of non-augmented classes is really useful. Extensive experiments are conducted on top of existing baselines and over different datasets, and the gains achieved show the usefulness of the approach. I really like the comparison against fixed augmentations such as AutoAugment and RandAugment. Such comparisons show the efficacy of the proposed class-dependent augmentation strategy. The comparison against doing a curriculum search using hyperparameter optimization algorithms is also neat and useful.
Weakness - Since for computing the LoL the paper uses only a subset of the data from that class, I am curious to see the variance in the performance. However, none of the tables report the variance in the performance but only the mean result across 3 trials. The paragraph "Dynamics of LoL score" needs more emphasis. Specifically, it would be interesting to see some statistics of what the LoL scores for the majority classes and the minority classes are at the end of training. While figure 4 has the plots, it is difficult to conclude something from there. Can the authors include some statistics, such as the mean LoL score for the top k majority and minority classes respectively? How do the authors select which subset of data to use for LoL computation? Is it completely random? Can the authors also report what happens when the samples with the highest/lowest training loss are instead chosen?

Clarity, Quality, Novelty And Reproducibility
The paper is clear and easy to follow along with a good motivation behind the approach. The work seems novel to me.
ICLR
With the high compatibility of CUDA, we apply our framework to various long-tailed recognition methods and achieve better performance compared to the existing long-tailed recognition methods. Furthermore, we conduct an extensive exploratory analysis to obtain a better understanding of CUDA. The results of these analyses verify that CUDA exhibits two effects that mitigate class imbalance, including its balanced classifier and improved feature extractor. 2 RELATED WORKS Long-tailed Recognition (LTR). The datasets with class imbalances can lead DNNs to learn biases toward training data, and their performance may decrease significantly on the balanced test data. To improve the robustness of such models to imbalance, LTR methods have been evolving in two main directions: (1) reweighting (Cui et al., 2019; Cao et al., 2019; Park et al., 2021) methods that reweight the loss for each class by a factor inversely proportional to the number of data points, and (2) resampling methods (Kubat et al., 1997; Chawla et al., 2002; Ando & Huang, 2017) that balance the number of training samples for each class in the training set. However, studies along these lines commonly sacrifice performance on majority classes to enhance that on minority classes, because the overfitting problem occurs with limited information on minority classes as a result of increasing the weight of a small number of minority samples. Several methods have recently been developed to alleviate the overfitting issues in various categories: (1) two-stage training (Cao et al., 2019; Kang et al., 2020; Liu et al., 2019), (2) ensemble methods (Zhou et al., 2020a; Xiang et al., 2020; Wang et al., 2021; Cai et al., 2021), and (3) contrastive learning approach (Kang et al., 2021; Cui et al., 2021; Zhu et al., 2022; Li et al., 2022a;b). To re-balance the classifier layers after achieving a good representation on the imbalanced training dataset in an early phase, Cao et al. (2019) proposed deferred resampling (DRS) and reweighting (DRW) approaches. Kang et al. (2020) decoupled the learning procedure into representation learning and training linear classifier, achieved higher performance than previous balancing methods. Wang et al. (2021) and Cai et al. (2021) suggested efficient ensemble methods using multiple experts with a routing module and a shared architecture for experts to capture various representations. Liu et al. (2022) found that self-supervised representations are more robust to class imbalance than supervised representations, and some works have developed supervised contrastive learning methods (Khosla et al., 2020) for imbalanced datasets (Cui et al., 2021; Zhu et al., 2022; Li et al., 2022b). Another line of research has considered augmentation methods in terms of both input and feature spaces (Kim et al., 2020; Chu et al., 2020; Li et al., 2021). Recently, Park et al. (2022) mixed minority and majority images by using CutMix with different sampling strategies to enhance balancing and robustness simultaneously. These methods commonly focus on utilizing the rich context of majority samples to improve the diversity of minority samples. Zhou et al. (2022) proposed an augmentation-based contrastive learning method which boosts memorization of each samples for long-tailed learning. Moreover, these augmentation-based methods are relatively in easy to apply orthogonally with other LTR methods. Data Augmentation (DA). DA has been studied to mitigate overfitting which may occur due to a lack of data samples. 
Some works propose erasing random parts of images to enhance the generalization performance of neural networks (DeVries & Taylor, 2017; Zhong et al., 2020; Kumar Singh & Jae Lee, 2017; Choe & Shim, 2019). Recently, variants of MixUp (Zhang et al., 2018) have been proposed; this method combines two images with specific weights (Tokozume et al., 2018; Guo et al., 2019; Takahashi et al., 2018; DeVries & Taylor, 2017; Verma et al., 2019). Aggregating the two approaches, CutMix (Yun et al., 2019) was proposed to replace a small rectangular region of an image with a patch from another image. In another line of research, methods have been proposed to automatically configure augmentation operations (Cubuk et al., 2019; Lim et al., 2019; Li et al., 2020b; Hataya et al., 2020; Gudovskiy et al., 2021). In addition, Cubuk et al. (2020) randomly select augmentation operations using given hyperparameters for the number of sampled operations and their magnitude. Recently, class-wise and per-sample auto-augmentation methods have also been proposed (Cheung & Yeung, 2021; Rommel et al., 2022).

3 CURRICULUM OF DATA AUGMENTATION FOR LONG-TAILED RECOGNITION

The core philosophy of CUDA is to "generate an augmented sample that becomes the most difficult sample without losing its original information." In this section, we describe the design of CUDA in terms of two parts: (1) a method to generate augmented samples based on a given strength parameter, and (2) a method to measure a Level-of-Learning (LoL) score for each class.

3.1 PROBLEM FORMULATION OF LONG-TAILED RECOGNITION

Suppose that the training dataset D = {(x_i, y_i)}_{i=1}^N is composed of images of size d, x_i ∈ R^d, and their corresponding labels y_i ∈ {1, ..., C}. D_c ⊂ D is the set of samples of class c, i.e., D_c = {(x, y) | y = c, (x, y) ∈ D}. Without loss of generality, we assume |D_1| ≥ |D_2| ≥ · · · ≥ |D_C|, where |D| denotes the cardinality of the set D. We denote N_max := |D_1| and N_min := |D_C|. LTR algorithms, A_LTR(f_θ, D), mainly focus on training the model f_θ with parameters θ when the class distributions of the training dataset, P_train(y), and the test dataset, P_test(y), are not identical. More precisely, P_train(y) is highly imbalanced while P_test(y) is balanced, i.e., a uniform distribution.

3.2 CURRICULUM OF DATA AUGMENTATION

In this section, we describe our proposed DA with a strength parameter and the method used to measure the LoL score. We then integrate the two methods into a single framework to propose CUDA.

DA with a strength parameter. Let us assume that there exist K pre-defined augmentation operations. We utilize visual augmentation operations, indexed by k ∈ {1, ..., K}, e.g., Gaussian blur, rotation, horizontal flip. Each augmentation operation O_k^{m_k(s)} : R^d → R^d has its own pre-defined augmentation magnitude function m_k(s), where the strength parameter s ∈ {0, ..., S}. These operations are described in detail, along with their magnitude functions, in Appendix D. Given an augmentation strength parameter s and an input image x, we model a sequence of augmentation operations O(x; s) as follows:

$$O(x; s) = O_{k_s}^{m_{k_s}(s)} \circ O_{k_{s-1}}^{m_{k_{s-1}}(s)} \circ \cdots \circ O_{k_1}^{m_{k_1}(s)}(x), \qquad k_i \sim \mathrm{Cat}(K, \mathcal{U}(K)) \;\; \forall i \in \{1, \ldots, s\},$$

where Cat(·) and U(·) denote categorical and discrete uniform distributions, respectively. The sequential augmentation operation O(x; s) samples s operations from a categorical distribution in which each operation has uniform probability.
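As a point of reference, the following is a minimal Python sketch of the sequential augmentation operation defined above. It is not the authors' implementation: the preset entries, the magnitude granularity, and the `apply_op` callback are hypothetical stand-ins for the full 22-operation preset described in Appendix D.

    import random

    # Hypothetical preset: each operation name maps to its (min, max) magnitude range.
    AUG_PRESET = {"rotate": (0.0, 30.0), "shear_x": (0.0, 0.3), "brightness": (0.1, 1.9)}
    S_MAX = 30  # number of linear magnitude steps (see Appendix D)

    def magnitude(op_name, s):
        """m_k(s): take the s-th of S_MAX linearly spaced values in [min, max]."""
        lo, hi = AUG_PRESET[op_name]
        return lo + (hi - lo) / S_MAX * s

    def sequential_augment(image, s, apply_op):
        """O(x; s): compose s operations sampled uniformly (with replacement) from the preset."""
        for _ in range(s):
            op_name = random.choice(list(AUG_PRESET))        # k_i ~ Cat(K, U(K))
            image = apply_op(image, op_name, magnitude(op_name, s))
        return image

Here `apply_op(image, name, magnitude)` is assumed to apply a single visual operation at the given magnitude; any image-processing backend could play that role.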
As depicted on the left side of Figure 2, suppose that the randomly sampled augmentations k_1, k_2, and k_3 are brightness, X-shift, and Y-shift, respectively. Then, O(x; 3) outputs an image whose brightness is raised by m_bright(3) and which is shifted by m_x-shift(3) on the x-axis and by m_y-shift(3) on the y-axis.

Algorithm 1: CUrriculum of Data Augmentation
    Input: LTR algorithm A_LTR(f, D), training dataset D = {(x_i, y_i)}_{i=1}^N, training epochs E, augmentation probability p_aug, threshold γ, number-of-samples coefficient T
    Output: trained model f_θ
    Initialize: L_c^0 = 0 for all c ∈ {1, ..., C}
    for e ≤ E do
        Update L_c^e = V_LoL(D_c, L_c^{e−1}, f_θ, γ, T) for all c   // Algorithm 2
        Generate D_CUDA = {(x̄_i, y_i) | (x_i, y_i) ∈ D}, where x̄_i = O(x_i; L_{y_i}^e) with probability p_aug and x̄_i = x_i otherwise
        Run the LTR algorithm on D_CUDA, i.e., A_LTR(f_θ, D_CUDA)
    end

Algorithm 2: V_LoL: Update LoL score
    Input: D_c, L, f_θ, γ, T
    Output: updated L
    Initialize: check = 1
    for l ≤ L do
        /* V_correct(D_c, l, f_θ, T) */
        Sample D'_c ⊂ D_c such that |D'_c| = T(l + 1)
        Compute v = Σ_{x ∈ D'_c} 1{f_θ(O(x; l)) = c}
        if v ≤ γT(l + 1) then check ← 0; break
    end
    if check = 1 then L ← L + 1 else L ← L − 1

Level-of-Learning (LoL). To control the strength of augmentation properly, we check whether the model can correctly predict augmented versions of samples without losing the original information. To enable this, we define the LoL for each class c at epoch e, i.e., L_c^e, which is adaptively updated as training continues:

$$L_c^e = V_{\mathrm{LoL}}(\mathcal{D}_c, L_c^{e-1}, f_\theta, \gamma, T),$$

where

$$V_{\mathrm{LoL}}(\mathcal{D}_c, L_c^{e-1}, f_\theta, \gamma, T) = \begin{cases} L_c^{e-1} + 1 & \text{if } V_{\mathrm{Correct}}(\mathcal{D}_c, l, f_\theta, T) \ge \gamma T(l+1) \;\; \forall l \in \{0, \ldots, L_c^{e-1}\} \\ L_c^{e-1} - 1 & \text{otherwise}. \end{cases}$$

Here, γ ∈ [0, 1] is a threshold hyperparameter and T is a coefficient for the number of samples used to update the LoL. V_Correct is a function that outputs the number of examples, among T(l + 1) randomly augmented samples with strength l, that the model f_θ predicts correctly. It is defined as

$$V_{\mathrm{Correct}}(\mathcal{D}_c, l, f_\theta, T) = \sum_{x \in \mathcal{D}'_c} \mathbb{1}\{f_\theta(O(x; l)) = c\}, \quad \text{where } \mathcal{D}'_c \subset \mathcal{D}_c.$$

Note that D'_c is a subset of D_c randomly sampled with replacement, and its size is T(l + 1).

The key philosophy of this criterion is twofold. (1) If the samples of class c have been trained sufficiently at an augmentation strength of L_c^e, the model is ready to learn a more difficult version with augmentation strength L_c^{e+1} ← L_c^e + 1. In contrast, if the model predicts incorrectly, it should re-learn the easier samples with an augmentation strength of L_c^{e+1} ← L_c^e − 1. (2) As the strength parameter increases, the number of candidates for the sequential augmentation operation O(x; L) increases exponentially. For example, the number of candidate sequences increases by K^L(K − 1) when L increases to L + 1. To control the LoL in such a large sequential augmentation operation space, we check more random samples as the strength parameter gets bigger. In our experiments, linearly increasing the number of evaluated samples with the strength, at a small additional computation cost, was sufficient. V_LoL is described in Figure 2 and Algorithm 2.

Curriculum of DA. By combining the two components, DA with a strength parameter and the LoL score, CUDA provides class-wise adaptive augmentation that enhances the performance of the other classes without losing a class's own information. As shown in Figure 2 and Algorithm 1, we measure the LoL score L_c for all classes in the training dataset to determine the augmentation strength at every epoch. Based on L_c, we generate the augmented version O(x; L_c) for x ∈ D_c and train the model with the augmented samples.
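Below is a minimal Python sketch of the LoL update rule (Algorithm 2), written under stated assumptions rather than as the authors' implementation: `predict` returns the model's predicted class for a single image, `augment` implements O(x; s) as sketched earlier, `images` holds the training images of one class, and all names are hypothetical.

    import random

    def v_correct(images, level, T, predict, augment, c):
        """V_Correct: count correct predictions on T*(level+1) samples augmented with strength `level`."""
        subset = random.choices(images, k=T * (level + 1))   # sampled with replacement
        return sum(predict(augment(x, level)) == c for x in subset)

    def update_lol(images, lol, T, gamma, predict, augment, c):
        """V_LoL: raise the LoL score only if every strength l <= lol passes the accuracy check."""
        for l in range(lol + 1):
            if v_correct(images, l, T, predict, augment, c) < gamma * T * (l + 1):
                return max(lol - 1, 0)   # failed at some strength: make the class easier (clipped at 0 in this sketch)
        return lol + 1                   # passed at every strength: make the class harder

One epoch of Algorithm 1 would then call `update_lol` once per class and, for each training image, apply `augment(x, lol[y])` with probability p_aug before running the chosen LTR algorithm.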
Additionally, we randomly use the original sample instead of the augmented sample with probability p_aug so that the trained model does not forget the original information. In our experiments, this operation improved performance robustly over a wide range of p_aug values. The results are provided in Section 4.3.

Advantage of CUDA design. Our proposed approach has three main advantages. (1) CUDA adaptively finds proper augmentation strengths for each class without the need for a validation set. (2) Following the spirit of existing curriculum learning methods (Hacohen & Weinshall, 2019; Zhou et al., 2020b; Wu et al., 2021), CUDA improves generalization by presenting easier examples earlier in training, which encourages the model to learn difficult samples (i.e., those with high augmentation strength) better. (3) Moreover, owing to the universality of data augmentation, CUDA is easily compatible with other LTR algorithms, such as those of Cao et al. (2019), Ren et al. (2020), and Wang et al. (2021).

4 EXPERIMENTS

In this section, we present an empirical evaluation whose results demonstrate the superior performance of our proposed algorithm for class imbalance. We first describe the long-tailed classification benchmarks and implementations in detail (Section 4.1). We then report experimental results on several synthetic (CIFAR-100-LT, ImageNet-LT) and real-world (iNaturalist 2018) long-tailed benchmark datasets in Section 4.2. Moreover, we conduct additional experiments to obtain a better understanding of CUDA; this analysis is provided in Section 4.3.

4.1 EXPERIMENTAL SETUP

Datasets. We evaluate CUDA on the most commonly used long-tailed image classification tasks: CIFAR-100-LT (Cao et al., 2019), ImageNet-LT (Liu et al., 2019), and iNaturalist 2018 (Van Horn et al., 2018). CIFAR-100-LT and ImageNet-LT are made class-imbalanced by synthetically sampling the training samples. CIFAR-100-LT is examined with various imbalance ratios {100, 50, 10}, where the imbalance ratio is defined as N_max/N_min. iNaturalist 2018 is a large-scale real-world dataset that exhibits a natural long-tailed imbalance. We utilize the officially provided datasets.

Baselines. We compare CUDA with previous long-tailed learning algorithms, including cross-entropy loss (CE); the two-stage approaches CE-DRW (Cao et al., 2019) and cRT (Kang et al., 2020); the balanced loss approaches LDAM-DRW (Cao et al., 2019) and Balanced Softmax (BS; Ren et al. 2020); the ensemble method RIDE with three experts (Wang et al., 2021); the resampling algorithms Remix (Chou et al., 2020) and CMO (Park et al., 2022); and the contrastive learning-based approach BCL (Zhu et al., 2022). We integrate CUDA with the CE, CE-DRW, LDAM-DRW, BS, RIDE, and BCL algorithms. For longer training schedules, we compare CUDA with PaCo (Cui et al., 2021), BCL, and NCL (Li et al., 2022a) by combining CUDA with BCL and NCL. For a fair comparison of computational cost, we train the network with the official one-stage implementation of RIDE (i.e., without distillation and routing).

Implementation. For the CIFAR-100-LT dataset, almost all implementations follow the general setting of Cao et al. (2019), whereas cRT (Kang et al., 2020), BCL, NCL, and RIDE follow the settings used in their original implementations. Following Cao et al. (2019), we use ResNet-32 (He et al., 2016) as the backbone network for CIFAR-100-LT. The network is trained with SGD with a momentum of 0.9 and a weight decay of 2 × 10^−4.
The initial learning rate is 0.1, and a linear learning rate warm-up is used in the first 5 epochs to reach the initial learning rate. During training over 200 epochs, the learning rate is decayed by 0.01 at the 160th and 180th epochs. For ImageNet-LT and iNaturalist 2018, ResNet-50 is used as the backbone network and is trained for 100 epochs; the learning rate is decayed by 0.1 at the 60th and 80th epochs. As with CIFAR, for cRT, RIDE, and BCL, we follow the original experimental settings of the officially released code. For the hyperparameter values of CUDA, we use p_aug = 0.5 and T = 10 for all experiments. For γ, we set the value to 0.6 for CIFAR-100-LT and to 0.4 for ImageNet-LT and iNaturalist 2018. The detailed implementations of the baselines are given in Appendix B.

4.2 EXPERIMENTAL RESULTS

In this section, we report the performance of the compared methods on CIFAR-100-LT, ImageNet-LT, and iNaturalist 2018. We include four categories of accuracy: all, many, med(ium), and few. They represent the average accuracy over all samples, over classes containing more than 100 samples, over classes with 20 to 100 samples, and over classes with under 20 samples, respectively.

CIFAR-100-LT. In Table 1, we report the performance when CUDA is applied to the various algorithms: CE, CE-DRW (Cao et al., 2019), LDAM-DRW (Cao et al., 2019), BS (Ren et al., 2020), RIDE (Wang et al., 2021) with 3 experts, RIDE+CMO (Park et al., 2022), and BCL (Zhu et al., 2022). Compared to the cases without CUDA, the balanced validation performance increases when we apply the proposed approach. Recently, some works (Cui et al., 2021; Alshammari et al., 2022; Zhu et al., 2022; Li et al., 2022a) have shown impressive performance with diverse augmentation strategies and longer training epochs. For a fair comparison with these methods, we examine CUDA using the same experimental setup as PaCo (Cui et al., 2021; 400 epochs with a batch size of 64). Table 3 shows that images augmented with CUDA enhance LTR performance compared to the other baselines. In particular, CUDA with NCL obtains the best performance over 400 epochs. As noted by Li et al. (2022a), the NCL algorithm uses six times as much memory as the vanilla architecture with three experts. In the large-scale benchmarks below, we therefore focus on cases with similar network sizes.

ImageNet-LT and iNaturalist 2018. To evaluate the performance of CUDA on larger datasets, we conduct experiments on ImageNet-LT (Liu et al., 2019) and iNaturalist 2018 (Van Horn et al., 2018). Table 2 summarizes the performance of various LTR methods and the performance gain when they are integrated with CUDA. Our proposed method consistently improves performance regardless of the LTR method and target dataset, simply by adding class-wise data augmentation without complicated methodological modification. Additionally, to evaluate the performance gain of CUDA on other architectures, we experiment with CUDA on ImageNet-LT with ResNet-10 (Liu et al., 2019) and ResNeXt-50 (Xie et al., 2017), as reported in Appendix C.

4.3 ANALYSIS

We design our analyses to answer the following questions: (1) How does CUDA perform? (2) Does CUDA perform better than other augmentation methods? (3) How does the LoL score change over training epochs when combined with various LTR methods? (4) Which part of CUDA is important to the improved performance? These analyses provide additional explanations for understanding CUDA. All experiments are conducted on CIFAR-100-LT with an imbalance ratio of 100.

How does CUDA mitigate the class imbalance problem?
To understand CUDA more deeply, we observe two types of metrics: (1) the variance of the class-wise L1 norms of the linear classifier weights, and (2) the feature alignment gain for each class (i.e., the change in cosine similarity with and without CUDA) on the validation dataset. The classifier weight norm is commonly used to measure how balanced the model's treatment of each class is (Kang et al., 2020; Alshammari et al., 2022). Feature alignment, in particular the cosine similarity among features of samples belonging to the same class, measures the extent to which the extracted features are aligned (Oh et al., 2021). As shown in Figure 3, CUDA exerts two forces that alleviate imbalance. First, in all cases, CUDA reduces the variance of the weight norms (i.e., it balances the weight norms), and thus the trained model treats the minority classes in a more balanced manner. Note that because LDAM-DRW and RIDE utilize a cosine classifier (i.e., L2-normalized linear weights), their standard deviation scale is quite different from that of the other methods. Because LDAM-DRW, BS, and RIDE include balancing logic in their loss functions, they exhibit lower variance reduction compared to CE and CE-DRW. Second, as shown in the bottom row of Figure 3, CUDA obtains feature alignment gains for almost all classes. This shows that CUDA helps the network learn to extract meaningful features.

Comparison with other augmentations. To verify the impact of CUDA, we examine other augmentation methods. We compare five augmentation methods: AutoAugment (AA; Cubuk et al. 2019), Fast AutoAugment (FAA; Lim et al. 2019), DADA (Li et al., 2020b), RandAugment (RA; Cubuk et al. 2020), and the proposed CUDA. Because AA, FAA, and DADA provide policies searched on CIFAR, SVHN (for AA), and ImageNet, we leverage their searched policies. Furthermore, RA suggests using the parameters (n, m) = (1, 2) for CIFAR, and we follow this guideline. As shown in Table 4, even though the automated augmentation methods use additional computational resources for searching, CUDA outperforms the other pre-searched augmentations. This shows that CUDA is computationally efficient.

Dynamics of the LoL score. We evaluate how LoL scores vary across algorithms: CE, CE-DRW, LDAM-DRW, BS, and RIDE. Note that a lower class index (i.e., 0) denotes the most common class (i.e., 500 samples), while an index of 100 represents the rarest class (i.e., five samples). As described in Figure 4, the LoL scores of all algorithms increase as training progresses. After the learning rate decay (i.e., epoch 160), all algorithms are able to learn to classify minority classes more easily than before. In particular, except for BS, the majority classes of most algorithms show a steep increase. The reason BS exhibits a similar rate of increase for majority and minority classes is that it includes a module that balances the impact of majority and minority samples. Furthermore, we find that, when CUDA is applied, CE-DRW and BS reach similar final average accuracy but show different LoL-score dynamics. From the observation that CE-DRW has a higher performance gain than BS for the many classes and a lower gain for the few classes, we conclude that the LoL score of one group of classes is highly correlated with the performance of the opposite group.
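As a reference for the two diagnostics used at the beginning of this analysis (classifier weight-norm variance and class-wise feature alignment), the following is a minimal NumPy sketch; the array names and shapes are hypothetical and do not correspond to a specific implementation in the paper.

    import numpy as np

    def weight_norm_variance(W):
        """Variance of the per-class L1 norms of a linear classifier weight matrix W of shape (C, d)."""
        norms = np.abs(W).sum(axis=1)          # per-class L1 norm
        return norms.var()

    def classwise_feature_alignment(features, labels, num_classes):
        """Mean pairwise cosine similarity among same-class validation features (features: N x d)."""
        normed = features / np.linalg.norm(features, axis=1, keepdims=True)
        scores = []
        for c in range(num_classes):
            f = normed[labels == c]
            sim = f @ f.T                      # pairwise cosine similarities within class c
            off_diag = sim[~np.eye(len(f), dtype=bool)]
            scores.append(off_diag.mean())
        return np.array(scores)

The alignment gain reported in Figure 3 would then be the difference between these class-wise scores computed for a model trained with CUDA and one trained without it.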
Parameter sensitivity. For further analysis, we conduct a sensitivity analysis of the hyperparameters in CUDA. More precisely, we study three parameters: the augmentation probability p_aug (Figure 5a), the number of test samples T (Figure 5b), and the LoL update threshold γ (Figure 5c). We examine the sensitivity of each hyperparameter for CUDA combined with RIDE, with the remaining hyperparameters fixed to the default values in Section 4.1. All results show that the performance gain of CUDA decreases if the parameters are adjusted so that the augmentation becomes too strong or too weak. For example, when γ is too small, the augmentation strength of all classes increases steeply; when γ is too large, the strength cannot increase, and thus the performance of the model cannot improve. Moreover, as shown in Figure 5b, the performance of CUDA increases as T increases. However, since a larger T incurs more computational overhead, we set T to 10 and obtain a cost-effective performance gain.

Impact of the curriculum. In addition to studying the impact of CUDA, we examine its performance component-wise. In particular, we test the case where the class-wise augmentation strength is searched by a hyperparameter optimization algorithm. We check five cases overall: the baseline algorithm, hyperparameter optimization (HO), re-searched DADA for CIFAR-100-LT, CUDA without curriculum (i.e., re-training using the final augmentation strengths found by CUDA), and CUDA. We provide a detailed description of each method in Appendix E. As described in Figure 5d, CUDA finds better augmentation strengths than the hyperparameter search case. This means that CUDA not only requires less search time but also finds better augmentation strengths. Moreover, comparing performance with and without the curriculum shows that the curriculum itself provides an additional gain in generalization. Additionally, as shown in Figure 4, a lower augmentation strength at the beginning of training is more effective than a static, higher augmentation strength. These results are consistent with previous studies on curriculum learning methods (Zhou et al., 2020b).

5 CONCLUSION

In this study, we proposed CUDA to address the class imbalance problem. The proposed approach is also compatible with existing methods. To design a proper augmentation scheme for LTR, we first studied the impact of augmentation strength on LTR. We found that the augmentation strength applied to one type of class (e.g., the majority classes) can affect the performance of the other type (e.g., the minority classes). Based on this finding, we designed CUDA to adaptively find an appropriate augmentation strength without any additional search phase, by measuring the LoL score at each epoch and determining the augmentation accordingly. To verify the superior performance of the proposed approach, we evaluated it in combination with various methods on both synthetically generated and real-world benchmarks and obtained the best performance among the methods compared. Furthermore, our analyses validated that CUDA enhances both classifier balance and feature extraction ability, which consistently improves performance for majority and minority classes.

ACKNOWLEDGEMENT

This work was supported by the Institute of Information & communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No. 2019-0-00075, Artificial Intelligence Graduate School Program (KAIST), 10%) and the Institute of Information & communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No. 2022-0-00871, Development of AI Autonomy and Knowledge Enhancement for AI Agent Collaboration, 90%).
Appendix
CUDA: Curriculum of Data Augmentation for Long-tailed Recognition

Owing to the page limitation of the main manuscript, we provide detailed information in this supplementary material as follows. (1) In Appendix A, we summarize the experimental setup of Figure 1 and further explain why augmentation of one group of classes causes performance degradation for that group. (2) In Appendix B, we describe our experimental setting in detail, including dataset configuration, data preprocessing, and training implementation. (3) In Appendix C, we show ImageNet-LT performance for networks of different sizes and architectures, a training time analysis, and accuracy in the balanced dataset case. (4) In Appendix D, we present in detail the augmentation operations that CUDA utilizes. (5) In Appendix E, we describe the experimental setting of Figure 5d.

A DETAIL FOR FIGURE 1

A.1 EXPERIMENTAL SETTINGS

Major and minor group decomposition. To check the impact of augmentation on majority and minority classes, we split the training dataset into two clusters. The majority cluster consists of the top 50 classes sorted by the number of samples per class; the bottom 50 classes form the minority cluster. For simplicity, we use class indices 0 to 49 as the majority and 50 to 99 as the minority. For the balanced case, we use classes 0 to 49 as cluster 1 and the remaining classes as cluster 2.

Controlling augmentation strength. We set the augmentation strength as the number of augmentations and their magnitude, following the augmentation rule of CUDA. For example, samples in the majority classes with a strength parameter of 4 are augmented with four randomly sampled augmentations, each with its own pre-defined magnitude.

Training setting. For the heatmaps in Figure 1, we follow the CIFAR-100-LT training recipe for the CE case, e.g., ResNet-32 and a learning rate of 0.1. Further details, hyperparameters, and datasets are described in Section 4 and Appendix B.

[Figure 6: Analysis on Balanced CIFAR-100. Rows: train cosine similarity, test cosine similarity, and classifier weight norm per class; columns: without, partial, and all augmentation.]

[Figure 7: Analysis on CIFAR-100-LT (IR 100). Rows: train cosine similarity, test cosine similarity, and classifier weight norm per class; columns: without, partial, and all augmentation.]

A.2 ANALYSIS

Analysis for Figure 1. To determine the reason for the phenomena in Figure 1, we conduct further analysis, shown in Figure 6 and Figure 7. Our experimental setups are as follows:

• Train networks with the three augmentation strategies (without, partial, and all), then measure the class-wise feature alignment and linear classifier weight norm for all networks. (Experiment 1)
• From the network trained without augmentation in Experiment 1, freeze the feature extractor and train the linear classifier layer while augmenting a subset of the classes. Then, measure the class-wise L1 norms of the linear classifier weights. (Experiment 2)

From Figure 6 and Figure 7, we make four observations for Experiment 1:

1. When we augment only a subset of the classes (classes 0-49), the feature alignment on the training dataset for the augmented classes is degraded compared to the non-augmented classes. This is because the augmented classes have more diversified training data than the non-augmented classes, which leads to more diversification in feature space. In the cases without augmentation and with all classes augmented, alignment is balanced across classes, since all classes have similar diversity. (See the first rows of Figures 6 and 7.)

2. However, all three augmentation strategies yield balanced class-wise feature alignment on the same test dataset. This tendency is observed on both the balanced and imbalanced datasets, and the result is consistent with Kang et al. (2020). Furthermore, the feature alignment values are higher when we augment some or all classes than when we use no augmentation. This shows that augmentation enhances the feature extraction ability, consistent with conventional studies. (See the second rows of Figures 6 and 7.)

3. When we augment only a subset of the classes on the balanced dataset, the class-wise weight norm of the linear classifier is larger for the non-augmented classes. This leads to a performance improvement for the non-augmented classes and a reduction for the augmented classes, since the linear classifier tends to classify the non-augmented classes with larger weight values. However, the class-wise weight norms are balanced in the "without augmentation" and "all augmentation" cases. (See the third row of Figure 6.)

4. On the imbalanced dataset, when all classes have the same augmentation strength, the class-wise weight norm of the linear classifier is larger for the majority classes. These results are consistent with previous works (Kang et al., 2020; Alshammari et al., 2022). However, when we augment only the majority classes, the class-wise weight norms become more balanced. This phenomenon is similar to the balanced case in that partial augmentation reduces the norm of the linear classifier weights for the augmented classes. (See the third row of Figure 7.)

Our observations from Experiment 1 are highly consistent across the balanced and imbalanced datasets. The results in Figure 1, Figure 6, and Figure 7 strongly motivate the design of CUDA. Moreover, our results for Experiment 2 can explain these observations, as shown in Figure 8 and Figure 9. We observe that when feature alignment is degraded by augmentation, the corresponding weight norm is relatively small, as shown in Figure 8. This is because, for a class with lower feature alignment, the variation of the gradient for the linear classifier is larger than for a class with high feature alignment. As shown in Figure 9, from Experiment 2 we observe that ‖Δw‖, the class-wise norm of the difference between the current and initial linear classifier parameters, Δw := w − w0, has smaller values for augmented classes than for non-augmented classes.
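For reference, a minimal NumPy sketch of the classifier statistic referenced here (the per-class norm of the update Δw) is given below; the weight matrices W_final and W_init are hypothetical names for the trained and initial classifier weights.

    import numpy as np

    def classwise_delta_norm(W, W0):
        """Per-class L2 norm of the classifier update Delta w := w - w0, for weight matrices of shape (C, d)."""
        return np.linalg.norm(W - W0, axis=1)

    # Example: compare augmented (classes 0-49) vs. non-augmented (classes 50-99) groups.
    # delta = classwise_delta_norm(W_final, W_init)
    # print(delta[:50].mean(), delta[50:].mean())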
From our experimental analysis in Figures 6, 7, and 9, we conclude that augmentation reduces the consistency of feature alignment, which in turn makes the weight norm of the linear classifier decrease.

B IMPLEMENTATION DETAIL IN SECTION 4

B.1 DATASET DESCRIPTION

CIFAR-100-LT. CIFAR-100-LT is a subset of CIFAR-100. Following Wang et al. (2021); Park et al. (2022); Zhu et al. (2022), we use the same long-tailed version for a fair comparison. The number of samples of the kth class is determined as follows: (1) compute the imbalance factor N_max/N_min, which reflects the degree of imbalance in the data; (2) |D_k| between |D_1| = N_max and |D_100| = N_min follows an exponential decay, i.e., |D_k| = |D_1| × (N_max/N_min)^{−k/100}. The imbalance factors used in the experiments are set to 100, 50, and 10.

ImageNet-LT. ImageNet-LT (Liu et al., 2019) is a modified version of the large-scale real-world ImageNet dataset (Russakovsky et al., 2015). Subsampling is conducted following a Pareto distribution with power value α = 0.6. It consists of 115.8K images from 1,000 classes in total. The most common class has 1,280 images and the rarest class has 5.

iNaturalist 2018. iNaturalist (Van Horn et al., 2018) is a large-scale real-world dataset consisting of 437.5K images from 8,142 classes. It is naturally long-tailed, with an extreme class imbalance. In addition to long-tailed recognition, this dataset is also used to evaluate fine-grained classification.

B.2 DATA PREPROCESSING

For data preprocessing, we follow the default settings of Cao et al. (2019). For CIFAR-100-LT, each side of the image is padded with 4 pixels, and a 32 × 32 crop is randomly selected from the padded image or its horizontal flip. For ImageNet-LT and iNaturalist 2018, after resizing each image so that the shorter side is 256 pixels, a 224 × 224 crop is randomly sampled from the image or its horizontal flip. For BCL and NCL, which use AutoAugment (Cubuk et al., 2019) or RandAugment (Cubuk et al., 2020) as default data augmentation, we apply them after random cropping, following their original papers (Zhu et al., 2022; Li et al., 2022a). Finally, we apply CUDA after all default augmentation operations and then normalize the image with the following mean and standard deviation values: CIFAR-100-LT ((0.4914, 0.4822, 0.4465), (0.2023, 0.1994, 0.2010)), ImageNet-LT ((0.485, 0.456, 0.406), (0.229, 0.224, 0.225)), and iNaturalist 2018 ((0.466, 0.471, 0.380), (0.195, 0.194, 0.192)).

B.3 DETAILED IMPLEMENTATION

Because some official code releases do not include their entire implementations, we re-implement the missing parts, reproducing the code based on the released partial code and the authors' responses.

RIDE. We follow the officially offered code2. Among the various experimental configurations of the official code (e.g., one-stage RIDE, RIDE-EA, Distill-RIDE), we utilize one-stage training (i.e., one-stage RIDE) for all cases for a fair comparison (to use similar computational resources). We confirmed with the authors that CMO (Park et al., 2022) also uses this setup for RIDE + CMO.

CMO. We re-implement all CMO results from the official code3. However, the official CMO code does not contain code for RIDE + CMO. Therefore, we re-implement it by injecting the CMO components used for BS in the official code (the weighted sampler and mixup parts) into the RIDE code.
Furthermore, for iNaturalist 2018, we train the model for 100 epochs for a fair comparison with the other methods (whereas the original RIDE + CMO is trained for 200 epochs on iNaturalist 2018).

BCL. The officially released code4 of BCL only covers ImageNet-LT and iNaturalist 2018. Whereas the official code applies a cosine classifier for ImageNet-LT and iNaturalist 2018, we apply an ordinary linear classifier for CIFAR-100-LT, following the authors' response. All hyperparameters are the same as in the experimental settings of the original work (Zhu et al., 2022).

2 https://github.com/frank-xwang/RIDE-LongTailRecognition
3 https://github.com/naver-ai/cmo
4 https://github.com/FlamieZhu/Balanced-Contrastive-Learning

B.4 GUIDELINE FOR HYPER-PARAMETER TUNING

Although we did not tune the hyper-parameters extensively, we provide a guideline for selecting them.

The number of samples for updating the LoL (T). This value can be set according to the available computing resources (i.e., the largest T allowed by the computing budget). This is because performance improves as T increases, since testing more samples yields a more reliable LoL score.

The acceptance threshold (γ). Our strategy for tuning γ is to select the largest value for which at least one class's LoL score increases within the first 20 epochs. This is because, for large-scale datasets, the network may otherwise fail to pass the check even for the easier-to-learn majority classes. The detailed tuning strategy for γ is as follows:
• We initially set γ to 0.6.
• We decrease γ by 0.1 whenever no LoL score increases during the first 20 training epochs.
We conduct this search on CE with CIFAR-100-LT at IR 100 and use the same γ value for the other algorithms and the remaining IR settings. We also apply this search rule to ImageNet-LT with CE and use the resulting value for the other large-scale dataset, i.e., iNaturalist 2018, and for the remaining algorithms.

The augmentation probability (p_aug). While we did not tune this hyper-parameter, we offer a guideline for tuning it based on Figure 5a. As shown in Figure 5a, the relationship between p_aug and performance is concave, which makes it easy to find an optimal value. The concavity arises because the choice of p_aug trades off preserving the information of the original image against exploring diversified images.

Further sensitivity analysis on ImageNet-LT. In Section 4, we apply different values of γ for CIFAR-100-LT (0.6) and for the large-scale datasets (0.4; ImageNet-LT and iNaturalist 2018). In addition to Figure 5, we conduct a sensitivity analysis for γ on ImageNet-LT to verify that CUDA works robustly with different values of γ on large-scale datasets. As shown in Table 5, CUDA is robust to the selection of γ not only on small datasets such as CIFAR-100-LT but also on large-scale datasets.

C FURTHER ANALYSES

Training Time Analysis. CUDA requires additional computation to compute the LoL score. We measure the additional training time incurred by adding CUDA to various algorithms. As shown in Figure 11, utilizing CUDA does incur additional training time; however, the overhead of searching for the LoL score is small. For example, BS with CUDA spends ×1.29 the training time to obtain adequate augmentation strengths.

Network Architecture Analysis.
We also present ResNet-10 (Liu et al., 2019) and ResNeXt-50 (Xie et al., 2017) experiments on the ImageNet-LT dataset in Figure 10. These results show that CUDA consistently improves performance regardless of network size and the corresponding LTR method.

[Figure 10: Accuracy (%) on ImageNet-LT with ResNet-10 and ResNeXt-50 backbones (CE, CD, LD, BS), with and without CUDA.]

[Figure 11: Training time (min.) with and without CUDA for CE, LDAM, BS, RIDE, and BCL.]

What if CUDA is run on a balanced dataset? We examine the case in which CUDA is applied to the balanced setting, i.e., an imbalance ratio of 1. As described in Table 6, CUDA obtains a 1.9% accuracy gain, which is lower than that of the other auto-augmentation methods. However, the other auto-augmentation methods spend more computation time searching for a good augmentation than CUDA. Furthermore, as described in Figure 4, CUDA outperforms the others when a class-imbalanced dataset is given.

D AUGMENTATION PRESET

D.1 DATA AUGMENTATION OPERATIONS USED IN CUDA

There are numerous data augmentation operations for vision tasks. We use a total of 22 augmentations for CUDA, each with its own parameter set. Details of the operation set and parameters are described in Table 7. For the augmentation magnitude parameter m_k(s), we divide each parameter range into thirty linearly spaced values. For example, in the ShearX case, the maximum and minimum values are 0.3 and 0, respectively; therefore, m_ShearX(s) = (0.3 − 0)/30 · s, and thus m_ShearX(1) = 0.01.

Table 7: Augmentation operations used in CUDA.
Operation | Parameter | Description
Flip | On/Off | Flip top and bottom
Mirror | On/Off | Flip left and right
Edge Enhancement | On/Off | Increase the contrast of the pixels around the targeted edges
Detail | On/Off | Utilize convolutional kernel [[0, −1, 0], [−1, 10, −1], [0, −1, 0]]
Smooth | On/Off | Utilize convolutional kernel [[1, 1, 1], [1, 5, 1], [1, 1, 1]]
AutoContrast | On/Off | Remove a specific percent of the lightest and darkest pixels
Equalize | On/Off | Apply a non-linear mapping to obtain a uniform distribution
Invert | On/Off | Negate the image
Gaussian Blur | [0, 2] | Blur the image using a Gaussian function
Resize Crop | [1, 1.3] | Resize and center random crop
Rotate | [0, 30] | Rotate the image
Posterize | [0, 4] | Reduce the number of bits for each channel
Solarize | [0, 256] | Invert all pixel values above a threshold
SolarizeAdd | [0, 110] | Add a value and run solarize
Color | [0.1, 1.9] | Colorize gray-scale values
Contrast | [0.1, 1.9] | Adjust the distance between the colors
Brightness | [0.1, 1.9] | Adjust image brightness
Sharpness | [0.1, 1.9] | Adjust image sharpness
Shear X | [0, 0.3] | Shear along the x-axis
Shear Y | [0, 0.3] | Shear along the y-axis
Translate X | [0, 100] | Shift along the x-axis
Translate Y | [0, 100] | Shift along the y-axis

D.2 FURTHER ANALYSIS ON AUGMENTATION PRESET

To gain further intuition on the effect of the number of predefined augmentation operations, we conduct several exploratory experiments.

Validity of our main finding (Figure 1) under a few predefined augmentations. The observation in Figure 1 is caused by the minority classes becoming relatively easy to learn as the majority classes become more difficult. Therefore, if the majority samples become difficult enough to learn, the same phenomenon as in Figure 1 occurs regardless of the size of the augmentation preset. To verify that our main finding holds regardless of the number of predefined augmentations, we conduct the experiment with ten augmentation operations (Mirror, ShearX, Invert, Smooth, ResizeCrop, Color, Brightness, Sharpness, Rotate, AutoContrast). Table 8 reports the performance of the configurations (0, 0), (0, 4), (4, 0), and (4, 4), where each pair denotes the augmentation strengths of (majority; top 50 classes, minority; bottom 50 classes). The results verify that the finding in Figure 1 is valid even with a small number of predefined augmentation operations.

Effect of the number of predefined augmentations.
We further analyze the impact of the number of predefined augmentation operations (K in Figure 2); we additionally experiment by replacing the augmentation preset in Appendix D with the following two presets: (1) 10 randomly sampled augmentations (Mirror, ShearX, Invert, Smooth, ResizeCrop, Color, Brightness, Sharpness, Rotate, AutoContrast) and (2) the RandAugment (Cubuk et al., 2020) preset, which consists of (AutoContrast, Equalize, Invert, Rotate, Posterize, Solarize, SolarizeAdd, Color, Contrast, Brightness, Sharpness, ShearX, ShearY, CutoutAbs, TranslateXabs, TranslateYabs). Table 9 demonstrates that the accuracy increases slightly as the size of the augmentation preset increases. However, the gap between the RandAugment preset (14 operations) and our original preset (22 operations) is small compared to the gap between the vanilla case (without CUDA) and the RandAugment case. These results support our belief that the impact of the number of predefined augmentations is small.

Effect of randomly ordered data augmentation. CUDA applies the selected augmentations in a random order based on the DA strength. To study the impact of this random ordering, we compare CUDA with a variant that applies the augmentations in a fixed order; for example, when the operation indices (6, 3, 5) among the 22 augmentations are sampled, they are applied in the order (3, 5, 6). Table 10 shows only small performance differences between the two methods. Thus, we believe that the effect of the augmentation order on difficulty is negligible: the goal is to make samples harder to learn, which holds regardless of whether the order is fixed or random.

Comparison with random augmentation. To verify that the success of CUDA does not simply come from a richer dataset produced by DA, we compare CUDA to randomly sampled augmentation applied at every iteration. The comparison methods are Random 5 and Random 10, which apply five and ten randomly sampled augmentations at every iteration, respectively. As shown in Table 11, although Random 10 generates the most diversified images, the network trained with it shows the worst performance, even lower than the vanilla case. CUDA achieves the best performance among all methods.
E EXPERIMENTAL SETTING OF FIGURE 5D

To further analyze the impact of the curriculum, we compare CUDA with previous hyper-parameter search algorithms and auto-augmentation methods, in particular DADA (Li et al., 2020b). We describe each setting in detail as follows.

Baseline. This is the case of training with standard data augmentation, consisting of random cropping and random horizontal flipping.

Hyper-parameter search. We use the strength-based augmentation module of CUDA and search for the class-wise augmentation strength directly; the resulting search space is K^N, where N is the number of classes. We leverage the open-source hyper-parameter search library Ray (Liaw et al., 2018) to search this space efficiently. Among its search modules, we utilize HyperOptSearch, an implementation of the Tree-structured Parzen Estimator (Bergstra et al., 2013). Moreover, for fast search, we use the Asynchronous Successive Halving Algorithm (ASHA) (Li et al., 2020a). We run 1,000 trials for each algorithm, which takes almost 20 GPU hours (i.e., ×80 the overhead of CUDA).

Re-searched DADA operations on imbalanced CIFAR. Because the officially provided CIFAR policies of Li et al. (2020b) are searched on a balanced CIFAR dataset, we re-search the augmentation policy for the imbalanced dataset. We use the official DADA code and replace the dataloader to re-search the operations. Searching the augmentation policy takes 48 minutes (×8.6 the overhead of CUDA). Despite this additional overhead, DADA yields worse performance than CUDA (even than CUDA without the curriculum). This is because (1) DADA does not consider class-wise augmentation and (2) it does not consider the impact of class imbalance.

CUDA without curriculum. To verify the impact of the curriculum itself, we run the following steps: (1) we conduct experiments with CUDA and record the augmentation strength of each class at the final epoch; (2) we re-train the network from scratch using the strength parameters obtained in (1).

F FURTHER ANALYSES

To obtain a better understanding, we conduct several additional analyses of our proposed method, CUDA.

F.1 FURTHER ANALYSIS ON LOL SCORE

In this section, we conduct experimental ablation studies to understand the performance gain of our proposed method, CUDA.

Suitability of the LoL score as a metric of class-wise difficulty. The advantage of the LoL score is that it measures difficulty in terms of the augmentation strength of each class, which is motivated by our main findings. To verify the suitability of the LoL score as a metric of class-wise difficulty, we compare CUDA with a variant in which the LoL score is replaced by the score of Sinha et al. (2022). As in our proposed method, we increase the strength parameter when the score of Sinha et al. (2022) is larger than the same threshold γ = 0.6. Table 12 summarizes the results: our LoL score shows a performance improvement over the score of Sinha et al. (2022). From these results, we conclude that the improvement comes from the fact that the LoL score is directly related to the augmentation strength.

Effect of random sampling for computing the LoL score. To implement the computation of the LoL score efficiently, we randomly select the instances for each class.
The reason for using random sampling to compute V_Correct is that we want to measure how well the model has learned the information of each class as a whole. To understand the effect of random sampling, we compare our random sampling method with sampling the instances that have the largest (or smallest) losses. Table 13 compares the performance of these sampling strategies. As the results show, if CUDA measures the degree of learning using only easy samples (those with small losses), it increases the augmentation strength too quickly, which degrades performance. Therefore, uniform random sampling is a better way to assess the degree of learning of each class without bias. Furthermore, computing the loss for all samples in order to sort them at the beginning of each epoch requires ×1.5 the computation of our method.

Numerical values of the LoL score dynamics. We provide the numerical values for Figure 4, i.e., the average LoL score (over every 20 epochs) for the classes with indices 1-10 and the classes with indices 91-100. These values support the explanation discussed in Section 4.

F.2 ANALYSIS OF THE CASE WITHOUT CLASS-WISE AUGMENTATION

To examine the validity of the class-wise augmentation in CUDA, we apply CUDA with the same DA strength for all classes. Instead of computing the LoL score for each class, we compute a single LoL score for the entire dataset by uniformly sampling instances from the training dataset regardless of class. Table 15 shows a significant performance degradation for CUDA without class-wise augmentation compared to CUDA. This is because, without class-wise augmentation, we cannot allocate an appropriate augmentation strength to each class.
1. What is the focus of the paper regarding class imbalance and data augmentation?
2. What are the strengths and weaknesses of the proposed approach, particularly in its simplicity and effectiveness versus its contribution and comparison to other works?
3. Do you have any questions or concerns regarding the methodology, hyperparameters, and interpretation of results?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any suggestions or recommendations for improving the paper, such as comparing with existing dataset-specific data augmentation search methods and providing more detailed explanations and interpretations of results?
Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper
This paper deals with class imbalance from the data augmentation perspective. It is based on and motivated by an analysis of the relationship between the degree of data augmentation and per-class performance.

Strengths And Weaknesses
Strengths
- Data augmentation is an interesting dimension along which to study class imbalance in general.
- The proposed method is simple and reasonably effective in the sense that it exploits the empirical analysis.

Weaknesses
- The contribution of "the first to suggest a class-wise augmentation method to find a proper augmentation strength for class imbalance problem" is overclaimed, as this is a special case of existing dataset-specific data augmentation methods such as (Cheung & Yeung, ICLR 2022).
- In the Introduction, the findings on data augmentation and class-wise performance are given without a proper and concise explanation. This is fine only when they are intuitive and need no explanation, but that seems not to be the case here.
- Figure 1 needs more interpretation in the caption, e.g., what is the metric for the grid charts?
- Method: a few hyper-parameters are introduced, and how to tune them is not clear. What is meant by "curriculum" of DA?
- Experiments: given that this paper is about data augmentation, it is useful and necessary to compare with existing dataset-specific data augmentation search methods such as (Cheung & Yeung, ICLR 2022). This comparison is missing.
- The performance gain on larger datasets (ImageNet, iNaturalist 2018) in Table 2 is smaller. This suggests that class-wise data augmentation becomes less useful as the dataset scale grows, which is a key issue for the experimental evaluation.
- Fig 5(d): this shows that most of the performance margin comes from the use of the curriculum. Given this, what is the performance for the case of w/o CUDA & w/ Curriculum?

Clarity, Quality, Novelty And Reproducibility
The paper is mostly clear and easy to understand, with some parts of the method and experiments needing more clarity. The novelty is somewhat limited given that dataset-specific data augmentation already exists. The reproducibility seems good since the method design is not complex. The overall quality is limited in its current form, considering the weaknesses listed above.
For example, Chawla et al. (2002); Ando & Huang (2017) proposed a method to generate interpolated minority samples. Recently (Kim et al., 2020; Chu et al., 2020; Park et al., 2022) suggested enriching the information of minority classes by transferring information gathered from majority classes to the minority ones. For example, Kim et al. (2020) generated a balanced training dataset by creating adversarial examples from the majority class to consider them as minority. Although many approaches have been proposed to utilize data augmentation methods to generate various information about minority samples, relatively few works have considered the influence of the degree of augmentation of different classes on class imbalance problems. In particular, few detailed observations have been conducted as to which classes should be augmented and how intensively. To this end, we first consider that controlling the strength of class-wise augmentation can provide another dimension to mitigate the class imbalance problem. In this paper, we use the number of augmentation operations and their magnitude to control the extent of the augmentation, which we refer to herein as its strength, e.g., a strength parameter of 2 means that two randomly sampled operations with a pre-defined magnitude index of 2 are used. Our key finding is that class-wise augmentation improves performance in the non-augmented classes while that for the augmented classes may not be significantly improved, and in some cases, performances may even decrease. As described in Figure 1, regardless of whether a given dataset is class imbalanced, conventional class imbalance methods show similar trends: when only the major classes are strongly augmented (e.g., strength 4), the performance of majority classes decreases, whereas that for the minority classes have better results. To explain this finding, we further find that strongly augmented classes get diversified feature representation, preventing the growth of the norm of a linear classifier for corresponding classes. As a result, the softmax outputs of the strongly augmented classes are reduced, and thus the accuracy of those classes decreases. It is described in Appendix A. This result motivates us to find the proper augmentation strength for each class to improve the performance for other classes while maintaining its own performance. Contribution. We propose a simple algorithm called CUrriculum of Data Augmentation (CUDA) to find the proper class-wise augmentation strength for long-tailed recognition. Based on our motivation, we have to increase the augmentation strength of majorities for the performance of minorities when the model successfully predicts the majorities. On the other hand, we have to lower the strength of majorities when the model makes wrong predictions about majorities. The proposed method consists of two modules, which compute a level-of-learning score for each class and leverage the score to determine the augmentation. Therefore, CUDA increases and decreases the augmentation strength of the class that was successfully and wrongly predicted by the trained model. To the best of our knowledge, this work is the first to suggest a class-wise augmentation method to find a proper augmentation strength for class imbalance problem. We empirically examine performance of CUDA on synthetically imbalanced datasets such as CIFAR100-LT (Cao et al., 2019), ImageNet-LT (Liu et al., 2019), and a real-world benchmark, iNaturalist 2018 (Van Horn et al., 2018). 
With the high compatibility of CUDA, we apply our framework to various long-tailed recognition methods and achieve better performance compared to the existing long-tailed recognition methods. Furthermore, we conduct an extensive exploratory analysis to obtain a better understanding of CUDA. The results of these analyses verify that CUDA exhibits two effects that mitigate class imbalance, including its balanced classifier and improved feature extractor. 2 RELATED WORKS Long-tailed Recognition (LTR). The datasets with class imbalances can lead DNNs to learn biases toward training data, and their performance may decrease significantly on the balanced test data. To improve the robustness of such models to imbalance, LTR methods have been evolving in two main directions: (1) reweighting (Cui et al., 2019; Cao et al., 2019; Park et al., 2021) methods that reweight the loss for each class by a factor inversely proportional to the number of data points, and (2) resampling methods (Kubat et al., 1997; Chawla et al., 2002; Ando & Huang, 2017) that balance the number of training samples for each class in the training set. However, studies along these lines commonly sacrifice performance on majority classes to enhance that on minority classes, because the overfitting problem occurs with limited information on minority classes as a result of increasing the weight of a small number of minority samples. Several methods have recently been developed to alleviate the overfitting issues in various categories: (1) two-stage training (Cao et al., 2019; Kang et al., 2020; Liu et al., 2019), (2) ensemble methods (Zhou et al., 2020a; Xiang et al., 2020; Wang et al., 2021; Cai et al., 2021), and (3) contrastive learning approach (Kang et al., 2021; Cui et al., 2021; Zhu et al., 2022; Li et al., 2022a;b). To re-balance the classifier layers after achieving a good representation on the imbalanced training dataset in an early phase, Cao et al. (2019) proposed deferred resampling (DRS) and reweighting (DRW) approaches. Kang et al. (2020) decoupled the learning procedure into representation learning and training linear classifier, achieved higher performance than previous balancing methods. Wang et al. (2021) and Cai et al. (2021) suggested efficient ensemble methods using multiple experts with a routing module and a shared architecture for experts to capture various representations. Liu et al. (2022) found that self-supervised representations are more robust to class imbalance than supervised representations, and some works have developed supervised contrastive learning methods (Khosla et al., 2020) for imbalanced datasets (Cui et al., 2021; Zhu et al., 2022; Li et al., 2022b). Another line of research has considered augmentation methods in terms of both input and feature spaces (Kim et al., 2020; Chu et al., 2020; Li et al., 2021). Recently, Park et al. (2022) mixed minority and majority images by using CutMix with different sampling strategies to enhance balancing and robustness simultaneously. These methods commonly focus on utilizing the rich context of majority samples to improve the diversity of minority samples. Zhou et al. (2022) proposed an augmentation-based contrastive learning method which boosts memorization of each samples for long-tailed learning. Moreover, these augmentation-based methods are relatively in easy to apply orthogonally with other LTR methods. Data Augmentation (DA). DA has been studied to mitigate overfitting which may occur due to a lack of data samples. 
Some works have proposed erasing random parts of images to enhance the generalization performance of neural networks (DeVries & Taylor, 2017; Zhong et al., 2020; Kumar Singh & Jae Lee, 2017; Choe & Shim, 2019). Recently, variants of MixUp (Zhang et al., 2018) have been proposed; this method combines two images with specific weights (Tokozume et al., 2018; Guo et al., 2019; Takahashi et al., 2018; DeVries & Taylor, 2017; Verma et al., 2019). Aggregating the two approaches, CutMix (Yun et al., 2019) erases a small rectangular part of an image and replaces it with a patch from another image. In another line of research, methods have been proposed to automatically configure augmentation operations (Cubuk et al., 2019; Lim et al., 2019; Li et al., 2020b; Hataya et al., 2020; Gudovskiy et al., 2021). In addition, Cubuk et al. (2020) randomly selected augmentation operations using given hyperparameters for the number of sampled augmentations and their magnitude. Recently, class-wise and per-sample auto-augmentation methods have also been proposed (Cheung & Yeung, 2021; Rommel et al., 2022).

3 CURRICULUM OF DATA AUGMENTATION FOR LONG-TAILED RECOGNITION

The core philosophy of CUDA is to “generate an augmented sample that becomes the most difficult sample without losing its original information.” In this section, we describe the design of CUDA in two parts: (1) a method to generate augmented samples based on a given strength parameter, and (2) a method to measure a Level-of-Learning (LoL) score for each class.

3.1 PROBLEM FORMULATION OF LONG-TAILED RECOGNITION

Suppose that the training dataset $\mathcal{D} = \{(x_i, y_i)\}_{i=1}^N$ is composed of images of size $d$, $x_i \in \mathbb{R}^d$, and their corresponding labels $y_i \in \{1, \ldots, C\}$. $\mathcal{D}_c \subset \mathcal{D}$ is the set of class $c$, i.e., $\mathcal{D}_c = \{(x, y) \mid y = c, (x, y) \in \mathcal{D}\}$. Without loss of generality, we assume $|\mathcal{D}_1| \ge |\mathcal{D}_2| \ge \cdots \ge |\mathcal{D}_C|$, where $|\mathcal{D}|$ denotes the cardinality of the set $\mathcal{D}$. We denote $N_{\max} := |\mathcal{D}_1|$ and $N_{\min} := |\mathcal{D}_C|$. LTR algorithms, $\mathcal{A}_{\mathrm{LTR}}(f_\theta, \mathcal{D})$, mainly focus on training the model $f_\theta$ with parameters $\theta$ when the class distributions of the training dataset $P_{\mathrm{train}}(y)$ and the test dataset $P_{\mathrm{test}}(y)$ are not identical. More precisely, $P_{\mathrm{train}}(y)$ is highly imbalanced while $P_{\mathrm{test}}(y)$ is balanced, i.e., uniform.

3.2 CURRICULUM OF DATA AUGMENTATION

In this section, we describe our proposed DA with a strength parameter and the method used to measure the LoL score. We then integrate the two methods in a single framework to propose CUDA.

DA with a strength parameter. Assume that there exist $K$ pre-defined augmentation operations. We utilize visual augmentation operations indexed by $k \in \{1, \ldots, K\}$, e.g., Gaussian blur, rotation, horizontal flip. Each augmentation operation $\mathcal{O}_k^{m_k(s)}: \mathbb{R}^d \rightarrow \mathbb{R}^d$ has its own pre-defined augmentation magnitude function $m_k(s)$, where the strength parameter $s \in \{0, \ldots, S\}$. These operations, along with their magnitude functions, are described in detail in Appendix D. Given an augmentation strength parameter $s$ and an input image $x$, we model a sequence of augmentation operations $\mathcal{O}(x; s)$ as follows:
$$\mathcal{O}(x; s) = \mathcal{O}_{k_s}^{m_{k_s}(s)} \circ \mathcal{O}_{k_{s-1}}^{m_{k_{s-1}}(s)} \circ \cdots \circ \mathcal{O}_{k_1}^{m_{k_1}(s)}(x), \qquad k_i \sim \mathrm{Cat}(K, \mathcal{U}(K)) \;\; \forall i \in \{1, \ldots, s\},$$
where $\mathrm{Cat}(\cdot)$ and $\mathcal{U}(\cdot)$ denote categorical and discrete uniform distributions, respectively. The sequential augmentation operation $\mathcal{O}(x; s)$ samples $s$ operations from a categorical distribution in which each operation has uniform probability.
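As a concrete illustration, below is a minimal sketch of this sequential augmentation; the operation set, the magnitude ranges, and the PIL-based signatures are illustrative assumptions rather than the authors' exact implementation.

```python
import random
from PIL import Image, ImageEnhance

# Hypothetical subset of the K operations in Appendix D: op(image, magnitude) -> image.
AUG_OPS = {
    "rotate":     lambda img, m: img.rotate(m),
    "brightness": lambda img, m: ImageEnhance.Brightness(img).enhance(m),
    "sharpness":  lambda img, m: ImageEnhance.Sharpness(img).enhance(m),
}
# Assumed (min, max) magnitude ranges, divided linearly into 30 levels (see Appendix D).
MAG_RANGE = {"rotate": (0.0, 30.0), "brightness": (0.1, 1.9), "sharpness": (0.1, 1.9)}

def magnitude(op_name: str, s: int, levels: int = 30) -> float:
    """m_k(s): linearly interpolated magnitude of operation k at strength s."""
    lo, hi = MAG_RANGE[op_name]
    return lo + (hi - lo) * s / levels

def sequential_augment(image: Image.Image, s: int) -> Image.Image:
    """O(x; s): compose s operations sampled uniformly with replacement (k_i ~ Cat(K, U(K)))."""
    for name in random.choices(list(AUG_OPS), k=s):
        image = AUG_OPS[name](image, magnitude(name, s))
    return image  # with s = 0 the image is returned unchanged
```

Note that a strength of 0 applies no operation, matching the convention that un-augmented samples correspond to $s = 0$.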
As depicted on the left side of Figure 2, suppose that the randomly sampled augmentations $k_1$, $k_2$, and $k_3$ are brightness, X-shift, and Y-shift, respectively. Then, $\mathcal{O}(x; 3)$ outputs an image whose brightness is raised by $m_{\text{bright}}(3)$ and which is shifted by $m_{\text{x-shift}}(3)$ on the x-axis and by $m_{\text{y-shift}}(3)$ on the y-axis.

Algorithm 1: CUrriculum of Data Augmentation
Input: LTR algorithm $\mathcal{A}_{\mathrm{LTR}}(f, \mathcal{D})$, training dataset $\mathcal{D} = \{(x_i, y_i)\}_{i=1}^N$, training epochs $E$, augmentation probability $p_{\mathrm{aug}}$, threshold $\gamma$, number-of-samples coefficient $T$.
Output: trained model $f_\theta$.
Initialize: $L_c^0 = 0$ for all $c \in \{1, \ldots, C\}$.
for $e \le E$ do
  Update $L_c^e = V_{\mathrm{LoL}}(\mathcal{D}_c, L_c^{e-1}, f_\theta, \gamma, T)$ for all $c$  // Alg. 2
  Generate $\mathcal{D}_{\mathrm{CUDA}} = \{(\bar{x}_i, y_i) \mid (x_i, y_i) \in \mathcal{D}\}$, where $\bar{x}_i = \mathcal{O}(x_i, L_{y_i}^e)$ with probability $p_{\mathrm{aug}}$ and $\bar{x}_i = x_i$ otherwise.
  Run the LTR algorithm on $\mathcal{D}_{\mathrm{CUDA}}$, i.e., $\mathcal{A}_{\mathrm{LTR}}(f_\theta, \mathcal{D}_{\mathrm{CUDA}})$.
end

Algorithm 2: $V_{\mathrm{LoL}}$: update the LoL score
Input: $\mathcal{D}_c$, $L$, $f_\theta$, $\gamma$, $T$.
Output: updated $L$.
Initialize: check = 1.
for $l \le L$ do
  /* $V_{\mathrm{Correct}}(\mathcal{D}_c, l, f_\theta, T)$ */
  Sample $\mathcal{D}'_c \subset \mathcal{D}_c$ such that $|\mathcal{D}'_c| = T(l+1)$
  Compute $v = \sum_{x \in \mathcal{D}'_c} \mathbb{1}\{f_\theta(\mathcal{O}(x; l)) = c\}$
  if $v \le \gamma T(l+1)$ then check $\leftarrow$ 0; break
end
if check = 1 then $L \leftarrow L + 1$ else $L \leftarrow L - 1$

Level-of-Learning (LoL). To control the strength of augmentation properly, we check whether the model can correctly predict augmented versions without losing the original information. To this end, we define the LoL for each class $c$ at epoch $e$, i.e., $L_c^e$, which is adaptively updated as training continues:
$$L_c^e = V_{\mathrm{LoL}}(\mathcal{D}_c, L_c^{e-1}, f_\theta, \gamma, T),$$
where
$$V_{\mathrm{LoL}}(\mathcal{D}_c, L_c^{e-1}, f_\theta, \gamma, T) = \begin{cases} L_c^{e-1} + 1 & \text{if } V_{\mathrm{Correct}}(\mathcal{D}_c, l, f_\theta, T) \ge \gamma T(l+1) \;\; \forall l \in \{0, \ldots, L_c^{e-1}\} \\ L_c^{e-1} - 1 & \text{otherwise}. \end{cases}$$
Here, $\gamma \in [0, 1]$ is a threshold hyperparameter and $T$ is a coefficient on the number of samples used to update the LoL score. $V_{\mathrm{Correct}}$ is a function that counts the examples correctly predicted by the model $f_\theta$ among randomly augmented samples with strength $l$:
$$V_{\mathrm{Correct}}(\mathcal{D}_c, l, f_\theta, T) = \sum_{x \in \mathcal{D}'_c} \mathbb{1}\{f_\theta(\mathcal{O}(x; l)) = c\}, \quad \mathcal{D}'_c \subset \mathcal{D}_c.$$
Note that $\mathcal{D}'_c$ is a subset of $\mathcal{D}_c$ randomly sampled with replacement, and its size is $T(l+1)$. The key philosophy of this criterion is twofold. (1) If the samples of class $c$ have been trained sufficiently at an augmentation strength of $L_c^e$, the model is ready to learn a more difficult version with augmentation strength $L_c^{e+1} \leftarrow L_c^e + 1$. In contrast, if the model predicts incorrectly, it should re-learn the easier samples with augmentation strength $L_c^{e+1} \leftarrow L_c^e - 1$. (2) As the strength parameter increases, the number of candidates for the sequential augmentation operation $\mathcal{O}(x; L)$ increases exponentially; for example, the number of additional candidates is $N^L(N-1)$ when $L$ increases to $L+1$. To control the LoL over this large space of sequential augmentation operations, we evaluate more random samples as the strength parameter gets larger. In our experiments, linearly increasing the number of evaluated samples with the strength was sufficient and added only a small computational cost. $V_{\mathrm{LoL}}$ is described in Figure 2 and Algorithm 2.

Curriculum of DA. By combining the two components, DA with a strength parameter and the LoL score, CUDA provides class-wise adaptive augmentation that enhances the performance of the other classes without losing a class's own information. As shown in Figure 2 and Algorithm 1, we measure the LoL score $L_c$ for all classes in the training dataset to determine the augmentation strength for every epoch. Based on $L_c$, we generate the augmented version $\mathcal{O}(x; L_c)$ for $x \in \mathcal{D}_c$ and train the model with the augmented samples.
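To make the update rule concrete, here is a minimal sketch of Algorithm 2, reusing `sequential_augment` from the earlier sketch; `model_predict` and the per-class sample list are illustrative assumptions, and clamping the score at zero is an added convenience since strengths are non-negative.

```python
import random

def v_correct(model_predict, class_samples, cls, level, T):
    """Count correct predictions on T*(level+1) samples augmented at strength `level`."""
    subset = random.choices(class_samples, k=T * (level + 1))   # sampled with replacement
    return sum(model_predict(sequential_augment(x, level)) == cls for x in subset)

def update_lol(model_predict, class_samples, cls, L, gamma, T):
    """Algorithm 2: raise L only if every strength l <= L passes the gamma * T * (l+1) check."""
    for l in range(L + 1):
        if v_correct(model_predict, class_samples, cls, l, T) <= gamma * T * (l + 1):
            return max(L - 1, 0)      # failed at some level: lower the strength (clamped at 0)
    return L + 1                      # passed every level: raise the strength
```

At each epoch, Algorithm 1 would then call `update_lol` once per class and apply `sequential_augment(x, L_c)` with probability $p_{\mathrm{aug}}$ when building the training batch.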
Additionally, we randomly use the original sample instead of the augmented sample with probability $p_{\mathrm{aug}}$ so that the trained model does not forget the original information. In our experiments, this operation improved performance robustly over a wide range of $p_{\mathrm{aug}}$ values. The results are provided in Section 4.3.

Advantage of CUDA design. Our proposed approach has three main advantages. (1) CUDA adaptively finds proper augmentation strengths for each class without the need for a validation set. (2) Following the spirit of existing curriculum learning methods (Hacohen & Weinshall, 2019; Zhou et al., 2020b; Wu et al., 2021), CUDA improves generalization by presenting easier examples earlier in training, which encourages the model to better learn difficult samples (i.e., those with high augmentation strength). (3) Moreover, owing to the universality of data augmentation, CUDA is easily compatible with other LTR algorithms, such as (Cao et al., 2019; Ren et al., 2020; Wang et al., 2021).

4 EXPERIMENTS

In this section, we present an empirical evaluation whose results demonstrate the superior performance of our proposed algorithm for class imbalance. We first describe the long-tailed classification benchmarks and implementations in detail (Section 4.1). We then report the experimental results on several synthetic (CIFAR-100-LT, ImageNet-LT) and real-world (iNaturalist 2018) long-tailed benchmark datasets in Section 4.2. Moreover, we conduct additional experiments to obtain a better understanding of CUDA, and this analysis is provided in Section 4.3.

4.1 EXPERIMENTAL SETUP

Datasets. We evaluate CUDA on the most commonly used long-tailed image classification tasks: CIFAR-100-LT (Cao et al., 2019), ImageNet-LT (Liu et al., 2019), and iNaturalist 2018 (Van Horn et al., 2018). CIFAR-100-LT and ImageNet-LT are made class-imbalanced by synthetically subsampling the training samples. CIFAR-100-LT is examined with various imbalance ratios {100, 50, 10}, where the imbalance ratio is defined as $N_{\max}/N_{\min}$. iNaturalist 2018 is a large-scale real-world dataset that exhibits natural long-tailed imbalance. We utilize the officially provided datasets.

Baselines. We compare CUDA with previous long-tailed learning algorithms, including cross-entropy loss (CE); two-stage approaches: CE-DRW (Cao et al., 2019) and cRT (Kang et al., 2020); balanced loss approaches: LDAM-DRW (Cao et al., 2019) and Balanced Softmax (BS; Ren et al. 2020); the ensemble method RIDE with three experts (Wang et al., 2021); resampling algorithms: Remix (Chou et al., 2020) and CMO (Park et al., 2022); and the contrastive learning-based approach BCL (Zhu et al., 2022). We integrate CUDA with the CE, CE-DRW, LDAM-DRW, BS, RIDE, and BCL algorithms. For longer training schedules, we compare CUDA with PaCo (Cui et al., 2021), BCL, and NCL (Li et al., 2022a) by combining CUDA with BCL and NCL. For a fair comparison of the computational cost, we train the network with the official one-stage implementation of RIDE (i.e., without distillation and routing).

Implementation. For the CIFAR-100-LT dataset, almost all implementations follow the general setting of Cao et al. (2019), whereas cRT (Kang et al., 2020), BCL, NCL, and RIDE follow the settings used in their original implementations. Following Cao et al. (2019), we use ResNet-32 (He et al., 2016) as the backbone network for CIFAR-100-LT. The network is trained with SGD with a momentum of 0.9 and a weight decay of $2 \times 10^{-4}$.
The initial learning rate is 0.1, and a linear learning rate warm-up is used in the first 5 epochs to reach the initial learning rate. During training over 200 epochs, the learning rate is decayed by a factor of 0.01 at the 160th and 180th epochs. For ImageNet-LT and iNaturalist 2018, ResNet-50 is used as the backbone network and is trained for 100 epochs; the learning rate is decayed by a factor of 0.1 at the 60th and 80th epochs. As with CIFAR, for cRT, RIDE, and BCL, we follow the original experimental settings of the officially released code. For the hyperparameter values of CUDA, we use $p_{\mathrm{aug}} = 0.5$ and $T = 10$ for all experiments. For $\gamma$, we set the value to 0.6 for CIFAR-100-LT and 0.4 for ImageNet-LT and iNaturalist 2018. The detailed implementations of the baselines are given in Appendix B.

4.2 EXPERIMENTAL RESULTS

In this section, we report the performance of the compared methods on CIFAR-100-LT, ImageNet-LT, and iNaturalist 2018. We include four categories of accuracy: all, many, med(ium), and few, which represent the average accuracy over all samples, over classes containing more than 100 samples, over classes with 20 to 100 samples, and over classes with fewer than 20 samples, respectively.

CIFAR-100-LT. In Table 1, we report the performance when CUDA is applied to various algorithms: CE, CE-DRW (Cao et al., 2019), LDAM-DRW (Cao et al., 2019), BS (Ren et al., 2020), RIDE (Wang et al., 2021) with 3 experts, RIDE+CMO (Park et al., 2022), and BCL (Zhu et al., 2022). Compared to the cases without CUDA, the balanced validation performance increases when we apply the proposed approach. Recently, some works (Cui et al., 2021; Alshammari et al., 2022; Zhu et al., 2022; Li et al., 2022a) have shown impressive performance with diverse augmentation strategies and longer training epochs. For a fair comparison with these methods, we examine CUDA under the same experimental setup as PaCo (Cui et al. 2021; 400 epochs with a batch size of 64). Table 3 shows that images augmented by CUDA enhance LTR performance compared to the other baselines. In particular, CUDA with NCL obtains the best performance over 400 epochs. As noted by Li et al. (2022a), the NCL algorithm uses six times as much memory as the vanilla architecture with three experts. Hereinafter, for the large-scale benchmarks, we focus on cases with similar network sizes.

ImageNet-LT and iNaturalist 2018. To evaluate the performance of CUDA on larger datasets, we conduct experiments on ImageNet-LT (Liu et al., 2019) and iNaturalist 2018 (Van Horn et al., 2018). Table 2 summarizes the performance of various LTR methods and the performance gain when they are integrated with CUDA. Our proposed method consistently improves performance regardless of the LTR method and target dataset, simply by adding class-wise data augmentation without complicated methodological modifications. Additionally, to evaluate the performance gain of CUDA on other architectures, we experiment with CUDA on ImageNet-LT with ResNet-10 (Liu et al., 2019) and ResNeXt-50 (Xie et al., 2017), as reported in Appendix C.

4.3 ANALYSIS

We design our analyses to answer the following questions. (1) How does CUDA perform? (2) Does CUDA perform better than other augmentation methods? (3) How does the LoL score change over training epochs when combined with various LTR methods? (4) Which part of CUDA is important for the improved performance? These analyses provide additional explanations for understanding CUDA. All experiments are conducted on CIFAR-100-LT with an imbalance ratio of 100.

How does CUDA mitigate the class imbalance problem?
To deeply understand CUDA, we observe two types of metrics: (1) the variance of the L1-norm of the linear classifier weights across classes, and (2) the feature alignment gain for each class (i.e., the change in cosine similarity with and without CUDA) on the validation dataset. The classifier weight norm is commonly used to measure how balanced the model is from a class-wise perspective (Kang et al., 2020; Alshammari et al., 2022). Feature alignment, especially the cosine similarity among features of samples belonging to the same class, measures the extent to which the extracted features are aligned (Oh et al., 2021). As shown in Figure 3, CUDA exerts two forces that alleviate imbalance. For all cases, CUDA reduces the variance of the weight norms (i.e., it balances the weight norms), and thus the trained model treats the minority classes in a more balanced manner. Note that because LDAM-DRW and RIDE use a cosine classifier (i.e., L2-normalized linear weights), their standard deviation scale is quite different from that of the other methods. Because LDAM-DRW, BS, and RIDE include balancing logic in their loss functions, they exhibit a smaller variance reduction compared to CE and CE-DRW. Second, as shown in the bottom row of Figure 3, CUDA obtains feature alignment gains for almost all classes. This shows that CUDA helps the network learn to extract more meaningful features.

Comparison with other augmentations. To verify the impact of CUDA, we examine other augmentation methods as follows. We compare five augmentation methods: AutoAugment (AA, Cubuk et al. 2019), Fast AutoAugment (FAA, Lim et al. 2019), DADA (Li et al., 2020b), RandAugment (RA, Cubuk et al. 2020), and the proposed CUDA. Because AA, FAA, and DADA provide policies searched on CIFAR, SVHN (for AA), and ImageNet, we use their released policies. Furthermore, RA suggests the parameters (n, m) = (1, 2) for CIFAR, and we follow this guideline. As shown in Table 4, even though the automated augmentation methods use additional computational resources for searching, CUDA outperforms these pre-searched augmentations. This shows that CUDA is computationally efficient.

Dynamics of the LoL score. We evaluate how the LoL scores vary across algorithms: CE, CE-DRW, LDAM-DRW, BS, and RIDE. Note that a lower class index (i.e., 0) denotes the most common class (i.e., 500 samples), while an index of 100 represents the rarest class (i.e., five samples). As described in Figure 4, as training progresses, the LoL scores of all algorithms increase. After the learning rate decay (i.e., epoch 160), all algorithms are able to classify the minority classes more easily than before. In particular, except for BS, the majority classes of most algorithms show a steep increase. The reason that BS exhibits a similar increase rate for the majority and minority classes is that it includes a module to balance the impact of majority and minority samples. Furthermore, we found that with CUDA applied, CE-DRW and BS reach similar final average accuracy but show different LoL score dynamics. From the observation that CE-DRW has a higher performance gain than BS for the many split and a lower gain for the few split, we conclude that the LoL score of one category of classes is highly correlated with the performance of the opposite category.

Parameter sensitivity. For further analysis, we conduct a sensitivity analysis of the hyperparameters in CUDA.
More precisely, we study three parameters: the augmentation probability $p_{\mathrm{aug}}$ (Figure 5a), the number of test samples $T$ (Figure 5b), and the LoL update threshold $\gamma$ (Figure 5c). We examine the sensitivity of each hyperparameter for CUDA with RIDE, with the remaining hyperparameters fixed to the default values in Section 4.1. All results show that the performance gain of CUDA decreases if the parameters are adjusted to make the augmentation too strong or too weak. For example, the augmentation strength of all classes increases steeply when $\gamma$ becomes small; when $\gamma$ becomes large, the strength cannot increase, and thus the performance of the model cannot improve. Moreover, as shown in Figure 5b, the performance of CUDA increases as $T$ increases. However, since a larger $T$ incurs more computational overhead, we set $T$ to 10 and obtain a cost-effective performance gain.

Impact of curriculum. In addition to studying the impact of CUDA, we examine its performance component-wise. In particular, we test the case where the class-wise augmentation strength is searched by a hyperparameter optimization algorithm. We check five cases overall: the baseline algorithm, hyperparameter optimization (HO), re-searched DADA for CIFAR-100-LT, CUDA without curriculum (i.e., re-training with the final augmentation strengths found by CUDA), and CUDA. We provide a detailed description of each method in Appendix E. As described in Figure 5d, CUDA finds better augmentation strengths than the hyperparameter search. This means that CUDA not only requires less search time but also finds better augmentation strengths. Moreover, comparing the performance with and without the curriculum shows that the curriculum itself helps the model achieve better generalization. Additionally, as shown in Figure 4, a lower augmentation strength at the beginning of training is more effective than a statically high augmentation strength. These results are consistent with previous studies on curriculum learning methods (Zhou et al., 2020b).

5 CONCLUSION

In this study, we proposed CUDA to address the class imbalance problem. The proposed approach is also compatible with existing methods. To design a proper augmentation for LTR, we first studied the impact of augmentation strength on LTR. We found that the augmentation strength of one type of class (e.g., the majority classes) can affect the performance of the other type (e.g., the minority classes). Based on this finding, we designed CUDA to adaptively find an appropriate augmentation strength without any further search phase, by measuring the LoL score at each epoch and determining the augmentation accordingly. To verify the superior performance of the proposed approach, we evaluated it with various methods on both synthetically generated and real-world benchmarks and obtained the best performance among the compared methods. Furthermore, our analyses validated that CUDA enhances balance and feature extraction ability, which consistently improves performance for both the majority and minority classes.

ACKNOWLEDGEMENT

This work was supported by the Institute of Information & communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No. 2019-0-00075, Artificial Intelligence Graduate School Program (KAIST), 10%) and the Institute of Information & communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No.
2022-0-00871, Development of AI Autonomy and Knowledge Enhancement for AI Agent Collaboration, 90%).

Appendix
CUDA: Curriculum of Data Augmentation for Long-tailed Recognition

Owing to the page limit of the main manuscript, we provide detailed information in this supplementary material as follows. (1) In Appendix A, we summarize the experimental setup of Figure 1 and further explain why strongly augmenting one group of classes improves the non-augmented group while degrading the augmented one. (2) In Appendix B, we describe our experimental setting in detail, including the dataset configuration, data preprocessing, and training implementation. (3) In Appendix C, we report ImageNet-LT performance with networks of different sizes and architectures, a training-time analysis, and accuracy in the balanced-dataset case. (4) In Appendix D, we present in detail the augmentation operations that CUDA utilizes. (5) In Appendix E, we describe the experimental setting of Figure 5d.

A DETAIL FOR FIGURE 1

A.1 EXPERIMENTAL SETTINGS

Major and minor group decomposition. To check the impact of augmentation on the majority and minority classes, we split the training dataset into two clusters. The majority cluster consists of the top 50 classes when sorted by the number of samples per class; the bottom 50 classes form the minority cluster. For simplicity, we use class indices 0 to 49 as the majority and 50 to 99 as the minority. For the balanced case, we use classes 0 to 49 as cluster 1 and the others as cluster 2.

Controlling augmentation strength. We set the augmentation strength as the number of augmentations and their magnitude, following the augmentation rule of CUDA. For example, a strength parameter of 4 for the majority classes means that their samples are augmented with four randomly sampled operations, each at its own pre-defined magnitude.

Training setting. For the heatmaps in Figure 1, we follow the CIFAR-100-LT training recipe for the CE case, e.g., ResNet-32 and a learning rate of 0.1. Further details, hyperparameters, and datasets are described in Section 4 and Appendix B.

[Figure 6: Analysis on Balanced CIFAR-100 — rows show train cosine similarity, test cosine similarity, and classifier weight norm per class index for the Without Augment, Partial Augment, and All Augment settings.]
[Figure 7: Analysis on CIFAR-100-LT (IR 100) — same panels as Figure 6.]

A.2 ANALYSIS

Analysis for Figure 1. To understand the reason for the phenomena in Figure 1, we conduct further analysis, shown in Figure 6 and Figure 7. Our experimental setups are as follows:
• Train the networks with three augmentation strategies (without, partial, and all augmentation), then measure the class-wise feature alignment and linear classifier weight norm for all networks
(Experiment 1).
• From the network trained without augmentation in Experiment 1, freeze the feature extractor and train the linear classifier layer while augmenting only some of the classes. Then, measure the class-wise L1-norm of the linear classifier weights (Experiment 2).

From Figure 6 and Figure 7, we make four observations from Experiment 1:
1. When we apply augmentation only to some classes (classes 0-49), the training-set feature alignment of the augmented classes is degraded compared to that of the non-augmented classes. This is because the augmented classes have more diversified training data than the non-augmented classes, which leads to more diversification in feature space. In the without-augmentation and all-augmentation cases, alignment is balanced across classes, since all classes have similar diversity. (See the first rows of Figures 6 and 7.)
2. However, all three augmentation strategies yield balanced class-wise feature alignment on the same test dataset. This tendency is observed on both the balanced and imbalanced datasets, and the result is consistent with Kang et al. (2020). Furthermore, the feature alignment values increase when we apply augmentation to some or all classes, compared to no augmentation. This shows that augmentation enhances the feature extraction ability, which is consistent with conventional studies. (See the second rows of Figures 6 and 7.)
3. When we apply augmentation only to some classes on the balanced dataset, the class-wise weight norm of the linear classifier is larger for the non-augmented classes. This leads to a performance improvement for the non-augmented classes and a reduction for the augmented classes, since the linear classifier tends to classify inputs into classes with larger weight values. In contrast, the class-wise weight norms are balanced in the “without augmentation” and “all augmentation” cases. (See the third row of Figure 6.)
4. On the imbalanced dataset, the class-wise weight norm of the linear classifier is larger for the majority classes when all classes have the same augmentation strength. These results are consistent with previous works (Kang et al., 2020; Alshammari et al., 2022). However, when we apply augmentation only to the majority classes, the class-wise weight norms become more balanced. This phenomenon mirrors the balanced case, in that partial augmentation reduces the linear classifier norm of the augmented classes. (See the third row of Figure 7.)

Our observations from Experiment 1 are highly consistent across the balanced and imbalanced datasets, and the results in Figure 1, Figure 6, and Figure 7 strongly motivate the design of CUDA. Moreover, our results for Experiment 2 explain these observations, as shown in Figure 8 and Figure 9. We observe that when feature alignment is degraded by augmentation, the corresponding weight norm is relatively small, as shown in Figure 8. This is because, for a class with lower feature alignment, the variation of the gradient for the linear classifier is larger than for a class with high feature alignment. As shown in Figure 9, from Experiment 2 we observe that $\|\Delta w\|$, the class-wise norm of the difference between the current and the initial linear classifier parameters, $\Delta w := w - w_0$, is smaller for the augmented classes than for the non-augmented ones.
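The two diagnostics used in this analysis (class-wise classifier weight norm and class-wise feature cosine similarity) can be computed with a few lines; the tensor shapes and variable names below are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def classwise_weight_norms(classifier_weight: torch.Tensor) -> torch.Tensor:
    """classifier_weight: (num_classes, feat_dim). Returns the per-class L1 norm ||w_c||_1."""
    return classifier_weight.abs().sum(dim=1)

def class_feature_alignment(features: torch.Tensor) -> torch.Tensor:
    """features: (n, feat_dim) for a single class. Mean pairwise cosine similarity."""
    f = F.normalize(features, dim=1)
    sim = f @ f.t()                         # (n, n) cosine-similarity matrix
    n = f.size(0)
    return (sim.sum() - n) / (n * (n - 1))  # exclude the diagonal of self-similarities

# The variance of classwise_weight_norms(...) corresponds to the weight-norm analysis,
# and the difference of class_feature_alignment(...) with vs. without augmentation (or
# with vs. without CUDA, as in Figure 3) gives the feature alignment gain per class.
```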
From our experimental analysis in Figures 6, 7, and 9, we conclude that augmentation reduces the consistency of feature alignment, which in turn makes the weight norm of the linear classifier decrease for the augmented classes.

B IMPLEMENTATION DETAIL IN SECTION 4

B.1 DATASET DESCRIPTION

CIFAR-100-LT. CIFAR-100-LT is a subset of CIFAR-100. Following Wang et al. (2021); Park et al. (2022); Zhu et al. (2022), we use the same long-tailed version for a fair comparison. The number of samples in the $k$th class is determined as follows: (1) compute the imbalance factor $N_{\max}/N_{\min}$, which reflects the degree of imbalance in the data; (2) $|\mathcal{D}_k|$ decays exponentially between $|\mathcal{D}_1| = N_{\max}$ and $|\mathcal{D}_{100}| = N_{\min}$, i.e., $|\mathcal{D}_k| = |\mathcal{D}_1| \times (N_{\max}/N_{\min})^{-k/100}$. The imbalance factors used in the experiments are 100, 50, and 10.

ImageNet-LT. ImageNet-LT (Liu et al., 2019) is a modified version of the large-scale real-world ImageNet dataset (Russakovsky et al., 2015). It is subsampled by following a Pareto distribution with power value α = 0.6 and consists of 115.8K images from 1,000 classes in total. The most common and rarest classes have 1,280 and 5 images, respectively.

iNaturalist 2018. iNaturalist (Van Horn et al., 2018) is a large-scale real-world dataset consisting of 437.5K images from 8,142 classes. It is naturally long-tailed, with an extreme class imbalance. In addition to long-tailed recognition, this dataset is also used to evaluate fine-grained classification.

B.2 DATA PREPROCESSING

For data preprocessing, we follow the default settings of Cao et al. (2019). For CIFAR-100-LT, each side of the image is padded with 4 pixels, and a 32 × 32 crop is randomly selected from the padded image or its horizontal flip. For ImageNet-LT and iNaturalist 2018, after resizing each image so that the shorter side has 256 pixels, a 224 × 224 crop is randomly sampled from the image or its horizontal flip. For BCL and NCL, which use AutoAugment (Cubuk et al., 2019) or RandAugment (Cubuk et al., 2020) as their default data augmentation, we apply these after random cropping, following the original papers (Zhu et al., 2022; Li et al., 2022a). We then apply CUDA after all default augmentation operations and finally normalize the image with the following mean and standard deviation values: CIFAR-100-LT ((0.4914, 0.4822, 0.4465), (0.2023, 0.1994, 0.2010)), ImageNet-LT ((0.485, 0.456, 0.406), (0.229, 0.224, 0.225)), and iNaturalist 2018 ((0.466, 0.471, 0.380), (0.195, 0.194, 0.192)).

B.3 DETAILED IMPLEMENTATION

Because some official code releases do not include the entire implementation, we re-implement the missing parts, reproducing the code based on the released partial code and the authors' responses.

RIDE. We follow the officially released code2. Among the various experimental configurations in the official code (e.g., one-stage RIDE, RIDE-EA, Distill-RIDE), we use one-stage training (i.e., one-stage RIDE) for all cases for a fair comparison (to use similar computational resources). We confirmed from the authors' response that CMO (Park et al., 2022) also uses this setup for RIDE + CMO.

CMO. We re-implement all CMO results from the official code3. However, the official CMO code does not contain RIDE + CMO; therefore, we re-implement it by injecting the CMO components used for BS (the weighted sampler and mixup parts) into the RIDE code.
Furthermore, for iNaturalist 2018, we train the model for 100 epochs for a fair comparison with the other methods (whereas the original RIDE + CMO is trained for 200 epochs on iNaturalist 2018).

BCL. The officially released code4 of BCL only covers ImageNet-LT and iNaturalist 2018. Whereas the official code applies a cosine classifier for ImageNet-LT and iNaturalist 2018, we apply an ordinary linear classifier for CIFAR-100-LT, following the authors' response. All hyperparameters are the same as in the original work (Zhu et al., 2022).

2https://github.com/frank-xwang/RIDE-LongTailRecognition
3https://github.com/naver-ai/cmo
4https://github.com/FlamieZhu/Balanced-Contrastive-Learning

B.4 GUIDELINE FOR HYPER-PARAMETER TUNING

Although we did not tune the hyper-parameters extensively, we provide a guideline for selecting them.

The number of samples for updating LoL (T). This value can be set according to the available computing resources (i.e., the largest T under the computing resource constraint), because performance improves as T increases: testing more samples yields a more reliable LoL score.

The acceptance threshold (γ). Our strategy for tuning γ is to select the largest value for which at least one class's LoL score increases within 20 epochs. This is because, for large-scale datasets, the network otherwise fails to pass the check even for the easier-to-learn majority classes. The detailed tuning strategy for γ is as follows.
• We initially set γ to 0.6.
• We decrease the threshold γ by 0.1 whenever no LoL score increases during the first 20 training epochs.
We conduct this search on CE with CIFAR-100-LT (IR 100) and use the same γ value for the other algorithms and the remaining IR settings. Likewise, we conduct this search rule on ImageNet-LT with CE and use the same value for the other large-scale dataset, i.e., iNaturalist 2018, with the remaining algorithms.

The augmentation probability (paug). While we did not tune this hyper-parameter, we offer a guideline based on Figure 5a. As shown there, performance is a concave function of paug, which makes it easy to find the optimal value. The concavity arises because the choice of paug trades off preserving the information of the original image against exploring diversified images.

Further sensitivity analysis on ImageNet-LT. In Section 4, we apply different values of γ to CIFAR-100-LT (0.6) and to the large-scale datasets (0.4; ImageNet-LT and iNaturalist 2018). In addition to Figure 5, we conduct a further sensitivity analysis for γ on ImageNet-LT to verify that CUDA works robustly with different values of γ on large-scale datasets. As shown in Table 5, CUDA is robust to the selection of γ not only on small datasets such as CIFAR-100-LT but also on large-scale datasets.

C FURTHER ANALYSES

Training Time Analysis. CUDA requires additional computation to compute the LoL score. We measure the additional training time when CUDA is added to various algorithms. As shown in Figure 11, using CUDA incurs additional training time; however, the overhead of searching the LoL score is not large. For example, BS with CUDA takes ×1.29 the training time to obtain adequate augmentation strengths.

Network Architecture Analysis.
We also present ResNet-10 (Liu et al., 2019) and ResNeXt-50 (Xie et al., 2017) experiments on the ImageNet-LT dataset in Figure 10. These results show that CUDA consistently improves performance regardless of the network size and the corresponding LTR method.

[Figure 10: Accuracy (%) on ImageNet-LT with and without CUDA for ResNet-10 and ResNeXt-50 backbones (CE, CD, LD, BS). Figure 11: Training time (min.) with and without CUDA for CE, LDAM, BS, RIDE, and BCL.]

What if CUDA is run on a balanced dataset? We examine the case where CUDA is applied to the balanced setting, i.e., an imbalance ratio of 1. As described in Table 6, CUDA obtains a 1.9% accuracy gain, which is lower than that of the other auto-augmentation methods. However, the other auto-augmentation methods spend more computation time searching for a good augmentation than CUDA. Furthermore, as described in Figure 4, CUDA outperforms the others when a class-imbalanced dataset is given.

D AUGMENTATION PRESET

D.1 DATA AUGMENTATION OPERATIONS USED IN CUDA.

There are numerous data augmentation operations for vision tasks. We use a total of 22 augmentations for CUDA, each with its own parameter set. Details of the operation set and parameters are described in Table 7. For the augmentation magnitude parameter $m_k(s)$, we divide each parameter range linearly into thirty values. For example, in the ShearX case, the minimum and maximum values are 0 and 0.3, respectively; therefore, $m_{\text{ShearX}}(s) = (0.3 - 0)/30 \cdot s$, and thus $m_{\text{ShearX}}(1) = 0.01$.

D.2 FURTHER ANALYSIS ON AUGMENTATION PRESET

To gain further intuition into the effect of the number of predefined augmentation operations, we conduct several exploratory experiments.

Validity of our main finding (Figure 1) under a few predefined augmentations. The observation in Figure 1 arises because the minority classes become relatively easy to learn once the majority classes have become difficult. Therefore, if the majority samples become sufficiently difficult to learn, the same phenomenon as in Figure 1 occurs regardless of the size of the augmentation preset. To verify that our main finding is valid regardless of the number of predefined augmentations, we conduct the experiment with ten augmentation operations (Mirror, ShearX, Invert, Smooth, ResizeCrop, Color, Brightness, Sharpness, Rotate, AutoContrast). Table 8 reports the performance of the configurations (0,0), (0,4), (4,0), and (4,4), where each pair denotes the augmentation strengths of (majority: top 50 classes, minority: bottom 50 classes). The results verify that the finding in Figure 1 holds even with a small number of predefined augmentation operations.

Effect of the number of predefined augmentations.
We further analyze the impact of the number of predefined augmentation operations ($K$ in Figure 2); we additionally experiment by replacing the augmentation preset in Appendix D with the following two augmentation presets: (1) 10 randomly sampled augmentations (Mirror, ShearX, Invert, Smooth, ResizeCrop, Color, Brightness, Sharpness, Rotate, AutoContrast) and (2) the RandAugment (Cubuk et al., 2020) preset, which consists of (AutoContrast, Equalize, Invert, Rotate, Posterize, Solarize, SolarizeAdd, Color, Contrast, Brightness, Sharpness, ShearX, ShearY, CutoutAbs, TranslateXabs, TranslateYabs). Table 9 demonstrates that the accuracy increases slightly as the size of the augmentation preset increases. However, the gap between the RandAugment preset (14 operations) and our original preset (22 operations) is small compared to the gap between the vanilla case (without CUDA) and the RandAugment case. These results support our belief that the impact of the number of predefined augmentations is small.

Table 7: Augmentation operations used in CUDA, with their parameters.
Operation | Parameter | Description
Flip | On/Off | Flip top and bottom
Mirror | On/Off | Flip left and right
Edge Enhancement | On/Off | Increase the contrast of the pixels around the targeted edges
Detail | On/Off | Utilize the convolutional kernel [[0, -1, 0], [-1, 10, -1], [0, -1, 0]]
Smooth | On/Off | Utilize the convolutional kernel [[1, 1, 1], [1, 5, 1], [1, 1, 1]]
AutoContrast | On/Off | Remove a specific percent of the lightest and darkest pixels
Equalize | On/Off | Apply a non-linear mapping to make a uniform distribution
Invert | On/Off | Negate the image
Gaussian Blur | [0, 2] | Blur the image using a Gaussian function
Resize Crop | [1, 1.3] | Resize and center random crop
Rotate | [0, 30] | Rotate the image
Posterize | [0, 4] | Reduce the number of bits for each channel
Solarize | [0, 256] | Invert all pixel values above a threshold
SolarizeAdd | [0, 110] | Add a value and then solarize
Color | [0.1, 1.9] | Colorize gray-scale values
Contrast | [0.1, 1.9] | Distance between the colors
Brightness | [0.1, 1.9] | Adjust image brightness
Sharpness | [0.1, 1.9] | Adjust image sharpness
Shear X | [0, 0.3] | Shear along the X-axis
Shear Y | [0, 0.3] | Shear along the Y-axis
Translate X | [0, 100] | Shift along the X-axis
Translate Y | [0, 100] | Shift along the Y-axis

Effect of randomly ordered data augmentation. CUDA applies the selected augmentations in a random order based on the DA strength. To study the impact of this random ordering, we compare CUDA with a variant that applies the augmentations in a fixed order. For example, when the operation indices (6, 3, 5) among the 22 augmentations are sampled, the fixed-order variant applies them as (3, 5, 6). Table 10 shows only small performance differences between the two methods; thus, we believe the effect of the augmentation order on the difficulty is negligible. This is because the effectiveness of CUDA is expected to remain high for any given order of augmentations, since the goal is simply to make the samples harder to learn, regardless of whether the order is fixed or random.

Comparison with random augmentation. To verify that the success of CUDA does not simply come from a richer dataset produced by DA, we compare CUDA with randomly sampled augmentation at every iteration. The comparison methods are Random 5 and Random 10, which apply five and ten randomly sampled augmentations at every iteration, respectively. As shown in Table 11, although Random 10 generates the most diversified images, the network trained with it shows the worst performance, even lower than the vanilla model. CUDA achieves the best performance among all methods.
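For reference, the linear magnitude schedule of D.1 can be written as a small helper; the dictionary below copies a few (min, max) ranges from Table 7, and the function name is an illustrative assumption.

```python
# A few (min, max) ranges from Table 7; each range is divided linearly into 30 levels.
MAG_RANGE = {"ShearX": (0.0, 0.3), "Rotate": (0.0, 30.0), "Brightness": (0.1, 1.9)}

def mag(op: str, s: int, levels: int = 30) -> float:
    """m_k(s): magnitude of operation `op` at strength s."""
    lo, hi = MAG_RANGE[op]
    return lo + (hi - lo) * s / levels

assert abs(mag("ShearX", 1) - 0.01) < 1e-12   # matches m_ShearX(1) = 0.01 from D.1
```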
E EXPERIMENTAL SETTING OF FIGURE 5D

To further analyze the impact of the curriculum, we compare CUDA with previous hyper-parameter search algorithms and auto-augmentation methods, especially DADA (Li et al., 2020b). We describe each setting in detail as follows.

Baseline. This is the case of training with standard data augmentation, consisting of random cropping and random horizontal flipping.

Hyper-parameter search. We use the strength-based augmentation module of CUDA to evaluate hyper-parameter search: the samples in each class use one of the K augmentation strengths, so we search the class-wise augmentation over the space $K^N$, where $N$ is the number of classes. We leverage the open-source hyper-parameter search library Ray (Liaw et al., 2018) to search the $K^N$ space efficiently. Among its search modules, we use HyperOptSearch, an implementation of the Tree-structured Parzen Estimator (Bergstra et al., 2013). Moreover, for fast search, we use the Asynchronous Successive Halving Algorithm (ASHA) (Li et al., 2020a). We run 1,000 trials for each algorithm, which takes almost 20 GPU hours (i.e., ×80 overhead compared to CUDA).

Re-searched DADA operations on imbalanced CIFAR. Because the official CIFAR policies released by Li et al. (2020b) were searched on the balanced CIFAR dataset, we re-search the augmentation policy for the imbalanced dataset. We use the official DADA code and replace the dataloader to re-search the operations. Searching the augmentation policy takes 48 minutes (×8.6 the overhead of CUDA). Despite this additional overhead, DADA yields worse performance than CUDA (even than CUDA without the curriculum). This is because (1) DADA does not consider class-wise augmentation and (2) it does not consider the impact of class imbalance.

CUDA without curriculum. To verify the impact of the curriculum itself, we perform the following steps: (1) run CUDA and record the data augmentation strength of each class at the final epoch; (2) re-train the network from scratch using the strength parameters obtained in (1).

F FURTHER ANALYSES

To gain a better understanding, we conduct several analyses of our proposed method, CUDA.

F.1 FURTHER ANALYSIS ON LOL SCORE

In this section, we conduct experimental ablation studies to understand the performance gain of our proposed method, CUDA.

Suitability of the LoL score as a metric for class-wise difficulty. The advantage of the LoL score is that it measures class-wise difficulty in terms of augmentation strength, which is directly motivated by our main findings. To verify the suitability of the LoL score as a metric for class-wise difficulty, we compare CUDA with the case where the LoL score is replaced by the score of Sinha et al. (2022). As in our proposed method, we increase the strength parameter when the score of Sinha et al. (2022) is larger than the same threshold γ = 0.6. Table 12 summarizes the results: our LoL score yields a performance improvement over the score of Sinha et al. (2022). From these results, we conclude that this improvement comes from the characteristic of the LoL score that it is directly tied to augmentation strength.

Effect of random sampling for computing the LoL score. To compute the LoL score efficiently, we randomly select the instances for each class.
The reason for using random sampling to compute $V_{\mathrm{Correct}}$ is that we want to measure how well the model has learned the overall information of each class. To understand the effect of random sampling, we compare our random-sampling method with sampling the instances that have the largest (or smallest) losses. Table 13 compares the performance of these sampling strategies. As the results show, if CUDA measures the degree of learning using only easy samples (those with small losses), it increases the augmentation strength too quickly, which degrades performance. Therefore, uniform random sampling is a better way to assess the degree of learning for each class without bias. Furthermore, computing the loss for all samples in order to sort them at the beginning of each epoch requires ×1.5 the computational overhead of our method.

Numerical values of LoL score dynamics. We provide the numerical values for Figure 4, i.e., the LoL scores averaged over every 20 epochs for the classes with indices 1-10 and the classes with indices 91-100. These values support the explanation discussed in Section 4.

F.2 ANALYSIS OF THE CASE WITHOUT CLASS-WISE AUGMENTATION

To examine the validity of CUDA's class-wise augmentation, we apply CUDA with the same DA strength for all classes. Instead of computing the LoL score class-wise, we compute a single LoL score for the entire dataset by uniformly sampling instances from the training dataset regardless of class. Table 15 shows a significant performance degradation of CUDA without class-wise augmentation compared to CUDA. This is because, without class-wise augmentation, we cannot allocate an appropriate augmentation strength to each class.
1. What is the main contribution of the paper regarding data augmentation for long-tail problems? 2. What are the strengths and weaknesses of the proposed method, particularly in its ability to estimate class difficulty and adjust augmentation strength? 3. Do you have any concerns about the analysis and comparisons made in the paper, especially regarding the effectiveness of the proposed curriculum and its relation to other methods? 4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper
This paper proposes a new data-augmentation strategy based on curriculum learning for long-tail problems. The key idea is to estimate the appropriate strength of data augmentation needed for each class during training. The proposed method was evaluated on widely used datasets and achieved favorable performance.

Strengths And Weaknesses
Strength
The idea of changing the strength of data augmentation for each class during training depending on its performance is reasonable and interesting. The idea is so simple that it can be easily integrated with other methods for long-tailed learning, as shown in the paper. The proposed method is evaluated on popular datasets for long-tailed learning and shows consistently good performance. The analysis provided in Section 4.3 helps readers understand the characteristics of the proposed method. The paper is mostly well written and easy to follow.

Weakness
In my view, the important contributions of the paper are two-fold: (1) the paper presented a new way to estimate the performance of each class during training, and (2) the paper presented the idea of changing the strength of augmentation according to the performance of each class during training. I think the paper would become stronger if it could provide a more in-depth analysis of which of the two parts is essential. For (1), [A1] proposed to use validation data to estimate the difficulty of each class. Possibly the authors can use this method in the LoL process instead of the proposed one. This would clarify whether it is the proposed LoL process that makes the overall performance better or whether an alternative method could also be used in the LoL process. For (2), even though the authors compare the proposed method with other augmentation methods, it is not clear whether the overall performance gain comes from the proposed curriculum or from the selected set of data augmentations and their strengths presented in Appendix D. The last paragraph of Section 4, “Impact of curriculum,” may be the one intended for this analysis, but I could not judge whether the proposed curriculum is actually effective since there is no detailed explanation of the other methods. For example, what is the “vanilla algorithm”? Which “hyperparameter optimization” method is used? Please elaborate on “the final augmentation strength of CUDA”. What happens if the kinds of augmentation and their strengths are randomly sampled from the sets presented in Appendix D at each step? I am afraid that the performance gains presented in Tables 1-3 may actually come from just a richer set of data augmentation. It is not clear what “the augmentation policies” in the 3rd line from the bottom of p.8 means. Is it the augmentation policies of the existing methods or of the proposed method? Even though the authors state that “CUDA is computationally efficient” in the last line, I doubt that this is necessarily the case because the proposed method needs to run multiple inferences to calculate V_Correct. [A1] Sinha+, Class-Difficulty Based Methods for Long-Tailed Visual Recognition, IJCV 2022.

Minor points. I needed to read several times and make some guesses to understand what each graph in Figure 1 shows. I think the 6 graphs shown in the top row represent the accuracies of the majority classes (class indices 0-49) while the 6 graphs in the bottom row show those of the minority classes (class indices 50-99). I think it is better to add a clearer and more detailed explanation in the caption or the figure itself.
Figure 3: It is necessary to clearly indicate in the caption that the bottom row shows the feature alignment gain, not the feature alignment itself. It is good that the dynamics of the LoL score are shown in Figure 4, and they show reasonable results. I wonder whether the LoL score actually correlates with accuracy. The paper might become stronger if an analysis of the correlation between the estimated LoL score and the validation or test accuracy were added.

Clarity, Quality, Novelty And Reproducibility
The paper is mostly clear except for the points I listed above. The quality of the paper in terms of its writing, technical soundness, and the support provided by the experiments is good. Even though the idea of using different strengths of data augmentation and the idea of curriculum learning are themselves well known, I think the design presented in this paper, which combines them to better handle long-tailed problems, has good novelty. Since the code is provided, I believe the results are reproducible, though I have not tried to do so myself.
ICLR
Title Collaborative Generated Hashing for Market Analysis and Fast Cold-start Recommendation
Abstract The cold-start and efficiency issues of Top-k recommendation are critical to large-scale recommender systems. Previous hybrid recommendation methods deal effectively with cold-start issues by extracting real-valued latent factors of cold-start items (users) from side information, but they still suffer from low efficiency in online recommendation because of the expensive similarity search in the real-valued latent space. This paper presents collaborative generated hashing (CGH), which improves efficiency by representing users and items as binary codes and applies to various settings: cold-start users, cold-start items, and warm-start ones. Specifically, CGH is designed to learn hash functions of users and items through the Minimum Description Length (MDL) principle; thus, it can deal with various recommendation settings. In addition, CGH enables a new marketing strategy by mining potential users through its generative step. To reconstruct effective users, the MDL principle is used to learn compact and informative binary codes from the content data. Extensive experiments on two public datasets show the advantages of CGH for recommendation in various settings over competing baselines and analyze the feasibility of its application to marketing.

1 INTRODUCTION

With the explosion of e-commerce, most customers are accustomed to receiving a variety of recommendations, such as movies, books, news, or hotels they might be interested in. Traditional recommender systems simply recommend items similar to what the user liked or rated in the past. Recommendations help users find desirable items and also create new revenue opportunities for vendors such as Amazon, Taobao, and eBay. One of the most popular recommendation methods, collaborative filtering, depends on a large amount of user-item interaction information to provide accurate recommendations. However, most new e-commerce vendors do not have enough interaction data, which leads to low recommendation accuracy, i.e., cold-start issues. Previous studies on cold-start issues generally modeled the problem as a combination of collaborative filtering and content filtering, known as hybrid recommender systems. Specifically, they learned real-valued latent factors by incorporating side information into the interaction data; examples include Collaborative Deep Learning (CDL) (Wang et al., 2015), Visual Bayesian Personalized Ranking (VBPR) (He & McAuley, 2016), Collaborative Topic modeling for Recommendation (CTR) (Wang & Blei, 2011), DropoutNet for addressing cold start (Volkovs et al., 2017), and ABCPRec for bridging consumer and producer roles for user-generated content recommendation (Tsukuda et al., 2019). All of the above hybrid recommender systems operate in a real-valued latent space, which leads to low efficiency for online recommendation as the scale of datasets increases. Recent studies show the promise of hashing-based methods for tackling this efficiency challenge by representing users and items with binary codes (Zhang et al., 2014; Zhou & Zha, 2012; Zhang et al., 2016; Liu et al., 2019), because the preference score can be computed efficiently from the Hamming distance via XOR operations (Wang et al., 2014). However, existing hashing-based recommendation methods are learning-based frameworks, which lead to NP-hard problems of optimizing discrete objectives.
Thus, many researchers learn binary codes by approximate techniques, such as the two-stage hashing learning method used in Preference Preserving Hashing (PPH) (Zhang et al., 2014) and Iterative Quantization (ITQ) (Zhou & Zha, 2012). To reduce information loss, two learning-based hashing frameworks, bit-wise learning and block-wise learning, were proposed for hashing-based recommendation (Zhang et al., 2016; Wang et al., 2019; Zhang et al., 2018; Zheng et al.). However, because learning-based hashing frameworks require binary outputs, the training procedure is expensive for large-scale recommendation, which motivates us to propose a generative approach to learning hash functions. In this paper, we propose collaborative generated hashing (CGH) to learn hash functions of users and items from content data under the Minimum Description Length (MDL) principle (Dai et al., 2017). In the marketing area, mining potential customers is crucial to e-commerce. CGH provides a strategy for discovering potential users through its generative step. To reconstruct effective users, uncorrelated and balanced constraints are imposed to learn compact and informative binary codes under the MDL principle. In particular, discovering potential customers is vital to the success of adding new items to a recommendation platform (Papies et al., 2017). Specifically, for a new item, we can generate a new potential user via the generative step (detailed in Section 2.1) and then search for the nearest users in the user set. By recommending a new product to potential users who might be interested in it but did not plan to buy it, further e-commerce strategies can be developed to attract those users. We organize the paper as follows: Section 2 introduces the main techniques of CGH. We first introduce the framework of CGH and compare it with the closely related competing baselines: CDL (Wang et al., 2015) and DropoutNet (Volkovs et al., 2017); we then formulate the generative step in Section 2.1 and the inference step in Section 2.2, respectively; we finally summarize the training objective and introduce the optimization in Section 2.3. In particular, we demonstrate the process of mining potential users for the marketing application in Section 2.1. Section 3 presents the experimental results for marketing analysis and recommendation accuracy in various settings. Section 4 concludes the paper. The main contributions of this paper are summarized as follows: (1) We propose Collaborative Generated Hashing (CGH) with the MDL principle to learn compact but informative hash codes, which applies to various recommendation settings. (2) We provide a marketing strategy based on discovering potential users through the generative step of CGH, which can be applied to boost e-commerce development. (3) We evaluate the effectiveness of the proposed CGH against state-of-the-art baselines and demonstrate its robustness and convergence properties on public datasets.

2 COLLABORATIVE GENERATED HASHING

The framework of the proposed CGH is shown in Fig. 1(c), where U, V, and R are the observed user content, item content, and rating matrix, respectively. B and D are the binary codes of users and items, respectively. CGH consists of the generative step, marked with dashed lines, and the inference step, denoted by solid lines. Once training is finished, we fix the model and make forward passes to obtain the binary codes B and D through the inference step, and then conduct recommendation.
For the marketing application, we create a new user via the generative step. Compared with the closely related baseline CDL (Wang et al., 2015), the proposed CGH learns binary codes instead of real latent vectors P and Q, exploiting the advantage of hashing for online recommendation; in addition, CGH optimizes an objective derived from the MDL principle, while CDL optimizes a joint objective of rating loss and item content reconstruction error. Compared with DropoutNet (Volkovs et al., 2017), CGH can be used as a marketing strategy by discovering potential users; in addition, CGH learns hash functions with a stacked denoising autoencoder, while DropoutNet obtains real latent factors with a standard neural network. In the following, we first formulate the generative process and demonstrate its application in the marketing area; we then formulate the inference step; we finally summarize the training objective and the optimization method. 2.1 MINING POTENTIAL USERS Given a sparse rating matrix R and item content data V ∈ R^{d_v}, where d_v is the dimension of the content vector and V is stacked from the bag-of-words vectors of item content in the item set, most previous studies focus on deterministic frameworks that learn item representations for item recommendation, such as CDL, CTR, and DropoutNet. In this paper, we explore a new strategy for item recommendation from a marketing perspective: mining potential users. We demonstrate the process of mining potential users for an item through the generative step in Fig. 2. After the inference step, the binary code of item j is available. By maximizing the similarity function δ(b_i, d_j) (defined below), the optimal binary code b_p is obtained. Then we generate the new user u_p through the generative step. Finally, we find potential users in the user set with a nearest-neighbor algorithm such as KNN. As a marketing strategy, this discovers potential users for both warm-start and cold-start items; from a marketing perspective it can thus be regarded as another kind of item recommendation. The generation process (also referred to as the decoding process) is denoted by dashed lines in Fig. 1(c). Fixing the binary codes b_i and d_j of user i and item j, the bag-of-words vector u_i of user i (v_j of item j) is generated via p(θ_u). The rating r_{ij} is generated from b_i and d_j. We use a simple Gaussian distribution to model the generation of u_i and v_j given b_i and d_j, as in Stochastic Generative Hashing (SGH) (Dai et al., 2017):

p(u_i \mid b_i) = \mathcal{N}(C_u b_i, \lambda_u^{-1} I), \quad p(v_j \mid d_j) = \mathcal{N}(C_v d_j, \lambda_v^{-1} I), (1)

where C_u = [c_{uk}]_{k=1}^{r}, c_{uk} ∈ R^{d_u}, is the codebook (Dai et al., 2017) with r codewords (and similarly for C_v), and d_u is the dimension of the bag-of-words vector of users. The prior over hash codes is modeled as a multivariate Bernoulli distribution, p(b_i) ∼ B(ρ_u) and p(d_j) ∼ B(ρ_v), so the prior probabilities are

p(b_i) = \prod_{k=1}^{r} \rho_u^{b_{ik}} (1 − \rho_u)^{1 − b_{ik}}, \quad p(d_j) = \prod_{k=1}^{r} \rho_v^{d_{jk}} (1 − \rho_v)^{1 − d_{jk}}. (2)

We formulate the rating with the similarity between the binary codes of users and items, as in matrix factorization (Koren et al., 2009), one of the most successful recommendation approaches. The rating is thus drawn from a normal distribution centered at the similarity value,

p(r_{ij} \mid b_i, d_j) ∼ \mathcal{N}(\delta(b_i, d_j), C_{ij}^{-1}), (3)

where δ(b_i, d_j) = 1 − (1/r) Hamdis(b_i, d_j) denotes the similarity between the binary codes b_i and d_j, and Hamdis(b_i, d_j) is the Hamming distance between the two binary vectors, which has been widely applied in hashing-based recommender systems (Wang & Blei, 2011; Lian et al., 2017; Zhang et al., 2018). C_{ij} is a precision parameter that serves as the confidence for r_{ij}, as in CTR (Wang & Blei, 2011) (C_{ij} = a if r_{ij} = 1 and C_{ij} = b otherwise), reflecting the fact that r_{ij} = 0 may mean either that user i is not interested in item j or that the user is not aware of it. With the generative model constructed, the joint probability of the observed ratings, content vectors, and binary codes is

p(R, U, V, B, D) = \prod_{i,j} p(r_{ij} \mid b_i, d_j)\, p(u_i \mid b_i)\, p(v_j \mid d_j)\, p(b_i)\, p(d_j). (4)
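To make the mining procedure of Fig. 2 concrete, the following minimal NumPy sketch generates a potential user for one item and searches the existing user set; it is an illustration under stated assumptions, not the authors' implementation. The names (mine_potential_users, user_content) are illustrative, plain Euclidean KNN is assumed for the nearest-neighbor search, and the fact that δ(b, d_j) is maximized by b_p = d_j follows directly from its definition above.

```python
import numpy as np

def mine_potential_users(d_j, C_u, user_content, k=10):
    """Mine k potential users for an item with binary code d_j (values in {0, 1}).

    C_u          : (d_u, r) user codebook from the generative step (eq. 1)
    user_content : (n_users, d_u) bag-of-words matrix of existing users
    Returns the indices of the k users closest to the generated user vector.
    """
    # delta(b, d_j) = 1 - Hamdis(b, d_j) / r is maximized by b_p = d_j
    b_p = d_j.copy()
    # Generative step: expected user content E[u | b_p] = C_u b_p (eq. 1)
    u_p = C_u @ b_p
    # Nearest-neighbour search over the user set (plain Euclidean KNN)
    dists = np.linalg.norm(user_content - u_p, axis=1)
    return np.argsort(dists)[:k]
```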
2.2 CONSTRAINTS ON BINARY LATENT VARIABLES In the inference process (also referred to as the encoding process), shown with solid lines in Fig. 1(c), the binary latent variables b_i (d_j) depend on the content vector u_i (v_j) and the rating R (shaded in Fig. 1). Inspired by recent work on generative hashing (Dai et al., 2017) and DropoutNet (Volkovs et al., 2017), we use a multivariate Bernoulli distribution with a linear parametrization to model the inference of b_i and d_j, i.e.,

q(b_i \mid \tilde{u}_i) = \mathcal{B}(\sigma(T_u^{\top} \tilde{u}_i)), \quad q(d_j \mid \tilde{v}_j) = \mathcal{B}(\sigma(T_v^{\top} \tilde{v}_j)), (5)

where \tilde{u}_i = [u_i, p_i] and \tilde{v}_j = [v_j, q_j]. Here p_i and q_j are the results of r-dimensional matrix factorization (Koren et al., 2009) of R, i.e., r_{ij} ≈ p_i^{\top} q_j, and T_u = [t_{uk}]_{k=1}^{r}, t_{uk} ∈ R^{d_u + r}, and T_v = [t_{vk}]_{k=1}^{r}, t_{vk} ∈ R^{d_v + r}, are the transformation matrices of the linear parametrization. Following SGH (Dai et al., 2017), the MAP solution of eq. (5) is readily given by

b_i = \arg\max_{b_i} q(b_i \mid \tilde{u}_i) = \frac{\mathrm{sign}(T_u^{\top} \tilde{u}_i) + 1}{2}, \quad d_j = \arg\max_{d_j} q(d_j \mid \tilde{v}_j) = \frac{\mathrm{sign}(T_v^{\top} \tilde{v}_j) + 1}{2}. (6)

With the linear projection followed by a sign function, we can easily obtain hash codes of users and items. However, hashing with a simple sign function suffers from large information loss (Zhang et al., 2016), which motivates us to add constraints on the parameters of the inference step. To derive compact and informative hash codes for users and items, we add balanced and uncorrelated constraints in the inference step. The balanced constraint maximizes the entropy of each binary bit (Zhou & Zha, 2012), and the uncorrelated constraint makes each bit independent of the others. We can then obtain compact and informative hash codes through the following constraints,

Balanced constraint: \sum_k b_{ik} = 0, \quad \sum_k d_{jk} = 0; \qquad Uncorrelated constraint: b_i b_i^{\top} = I_r, \quad d_j d_j^{\top} = I_r. (7)

From eq. (6), b_i and d_j depend only on the parameters T_u and T_v, respectively, so we add constraints on T_u and T_v directly. Eq. (7) is thus equivalent to the following constraints,

Balanced constraint: T_u^{\top} \mathbf{1} = \mathbf{0}, \quad T_v^{\top} \mathbf{1} = \mathbf{0}; \qquad Uncorrelated constraint: T_u^{\top} T_u = I_{d_u + r}, \quad T_v^{\top} T_v = I_{d_v + r}. (8)

By imposing the above constraints during training, compact and informative hash codes can be obtained through the inference process. Next we summarize the training objective and its optimization.
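Before turning to the training objective, the inference step of eq. (6) and the code similarity can be sketched as below. This is a minimal NumPy rendering under the assumption of {0, 1} codes, not the authors' implementation; function names are illustrative and entries of T^T x that are exactly zero are ignored for simplicity.

```python
import numpy as np

def hash_codes(T, x):
    """MAP binary code from the linear parametrization, eq. (6):
    b = (sign(T^T x) + 1) / 2, giving a code in {0, 1}^r.
    T : (d + r, r) projection matrix; x : (d + r,) concatenated content/MF vector."""
    return (np.sign(T.T @ x) + 1.0) / 2.0

def similarity(b, d):
    """delta(b, d) = 1 - Hamdis(b, d) / r for codes in {0, 1}^r."""
    r = b.shape[0]
    return 1.0 - np.count_nonzero(b != d) / r
```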
2.3 TRAINING OF CGH Since our goal is to reconstruct users, items, and ratings using the least information in the binary codes, we train CGH with the MDL principle, which finds the parameters that maximally compress the training data while keeping the information they carry. CGH thus aims to minimize the expected amount of information related to q:

L(q) = E_q[\log p(R, U, V, B, D) − \log q(B, D)] = E_q[\log p(R \mid B, D) + \log p(U \mid B) + \log p(V \mid D)] − KL(q(B \mid \tilde{U}) \,\|\, p(B)) − KL(q(D \mid \tilde{V}) \,\|\, p(D)). (9)

Maximizing the posterior probability is equivalent to maximizing L(q); considering only the variational distribution q(B, D), the objective becomes

L_{MAP}(Θ, Φ) = − \sum_{i,j} \frac{C_{ij}}{2} (r_{ij} − \delta(b_i, d_j))^2 − \frac{\lambda_u}{2} \sum_i \|u_i − C_u b_i\|^2 − \frac{\lambda_v}{2} \sum_j \|v_j − C_v d_j\|^2 − KL(q_{\phi_u} \,\|\, p_{\theta_u}) − KL(q_{\phi_v} \,\|\, p_{\theta_v}) − ∇(Θ, Φ), (10)

where Θ = {θ_u, θ_v}, Φ = {φ_u, φ_v}, and ∇(Θ, Φ) is a regularizer with parameters Θ and Φ. Training with the objective in eq. (10) yields binary codes, but some bits may be correlated. To minimize the reconstruction error, SGH had to use code lengths as long as r = 200. Our goal in this paper is to obtain compact and informative hash codes, so we impose the balanced and uncorrelated constraints of eq. (8) on the hash codes. Maximizing eq. (10) is then transformed into minimizing the following constrained objective of the proposed Collaborative Generated Hashing (CGH),

L_{CGH}(Θ, Φ) = \sum_{i,j} \frac{C_{ij}}{2} (r_{ij} − \delta(b_i, d_j))^2 + \frac{\lambda_u}{2} \sum_i \|u_i − C_u b_i\|^2 + \frac{\lambda_v}{2} \sum_j \|v_j − C_v d_j\|^2 + KL(q_{\phi_u} \,\|\, p_{\theta_u}) + KL(q_{\phi_v} \,\|\, p_{\theta_v}) + \alpha_u \|T_u^{\top} \mathbf{1}\|_2^2 + \alpha_v \|T_v^{\top} \mathbf{1}\|_2^2 + \beta_u \|T_u^{\top} T_u − I_{d_u+r}\|_2^2 + \beta_v \|T_v^{\top} T_v − I_{d_v+r}\|_2^2 + ∇(Θ, Φ). (11)

The objective of CGH in eq. (11) is a discrete optimization problem that is difficult to optimize directly, so during training the tanh function replaces the sign function in the inference step, and the continuous outputs are used as a relaxation of the hash codes. With this relaxation, we train all components jointly with back-propagation. After training, we fix the parameters and make forward passes to map the concatenated vectors in \tilde{U} and \tilde{V} to the binary codes B and D, respectively. Recommendation in the various settings is then conducted with B and D using the similarity score δ(b_i, d_j) = 1 − (1/r) Hamdis(b_i, d_j) as before. The training setting depends on the recommendation setting, i.e., warm-start, cold-start item, or cold-start user. L_{CGH}(Θ, Φ) minimizes the rating loss and the two content reconstruction errors with regularizers. (a) For warm-start recommendation, ratings for all users and items are available, so the objective is optimized simply by setting the content weights to 0 and learning the hashing functions with the observed ratings R. (b) For cold-start item recommendation, ratings for some items are missing, so the objective is optimized by setting the user content weight to 0 and learning the parameters with the observed ratings R and the item content V. (c) The training setting for cold-start user recommendation is analogous to the cold-start item setting.
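To show how the pieces of eq. (11) fit together, a rough NumPy sketch of the relaxed training objective is given below. It is a simplification under several stated assumptions: the KL terms and the generic regularizer ∇(Θ, Φ) are omitted, the similarity δ is evaluated directly on the relaxed codes, and the uncorrelated penalty uses an r × r identity for the Gram matrix T^T T; all function and variable names are illustrative, not from the paper's implementation.

```python
import numpy as np

def relaxed_codes(T, X):
    # tanh relaxation of eq. (6): (tanh(X T) + 1) / 2 lies in (0, 1)^r per row
    return (np.tanh(X @ T) + 1.0) / 2.0

def cgh_loss(R, C, U, V, B, D, C_u, C_v, T_u, T_v,
             lam_u=1.0, lam_v=1.0, alpha=1.0, beta=1.0):
    """Relaxed version of eq. (11); KL terms and the regularizer are omitted.
    R, C     : (n, m) ratings and confidence weights C_ij
    B, D     : (n, r) and (m, r) relaxed user / item codes
    C_u, C_v : (d_u, r), (d_v, r) codebooks; T_u, T_v : projection matrices."""
    r = B.shape[1]
    # delta(b, d) = 1 - Hamdis(b, d)/r; for {0,1} codes this equals
    # 1 - mean(|b - d|), reused here on the relaxed codes
    delta = 1.0 - np.abs(B[:, None, :] - D[None, :, :]).mean(axis=2)
    rating_loss = 0.5 * np.sum(C * (R - delta) ** 2)
    # Content reconstruction errors from the generative step (eq. 1)
    rec_u = 0.5 * lam_u * np.sum((U - B @ C_u.T) ** 2)
    rec_v = 0.5 * lam_v * np.sum((V - D @ C_v.T) ** 2)
    # Balanced and uncorrelated penalties on T_u, T_v (eq. 8)
    ones_u, ones_v = np.ones(T_u.shape[0]), np.ones(T_v.shape[0])
    bal = alpha * (np.sum((T_u.T @ ones_u) ** 2) + np.sum((T_v.T @ ones_v) ** 2))
    unc = beta * (np.sum((T_u.T @ T_u - np.eye(r)) ** 2) +
                  np.sum((T_v.T @ T_v - np.eye(r)) ** 2))
    return rating_loss + rec_u + rec_v + bal + unc
```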
3 EXPERIMENTS We validate the proposed CGH on two public datasets, CiteUlike1 and the RecSys 2017 Challenge dataset2, from the following two aspects. (1) Marketing analysis: to validate the effectiveness of CGH in the marketing area, we first define a metric to evaluate the accuracy of mining potential users, and then test the performance for warm-start items and cold-start items, respectively. (2) Recommendation performance: we test the performance of CGH for recommendation in various settings, including warm-start, cold-start item, and cold-start user, in terms of Accuracy@k (Yin et al., 2014). In the following, we first introduce the experimental settings, followed by the analysis of the experimental results from the above aspects. 3.1 EXPERIMENTAL SETTINGS We use two datasets to evaluate the power of finding potential users and the accuracy of recommendation in different settings. (1) The CiteUlike dataset contains 5,551 users, 16,980 articles, 204,986 observed user-article binary interaction pairs, and article abstracts as content. Similar to (Wang & Blei, 2011), we extract bag-of-words item vectors of dimension d_v = 8000 by ranking the TF-IDF values. (2) The RecSys 2017 Challenge dataset is the only publicly available dataset that contains both user and item content data, enabling both cold-start item and cold-start user recommendation. It contains 300M user-item interactions from 1.5M users to 1.3M items, with content data collected from the career-oriented social network XING (a European analog of LinkedIn). Like (Volkovs et al., 2017), we evaluate all methods on binary rating data, with 831 user features and 2,738 item features forming the dimensions of the user and item content. We randomly split the binary interactions (ratings) R into three disjoint parts: warm-start ratings R_w, cold-start user ratings R_u, and cold-start item ratings R_v; R_w is further split into a training set R_wt and a testing set R_we. Correspondingly, the user and item content datasets are split into three disjoint parts. The random selection is carried out 5 times independently, and we report the experimental results as average values.

1 http://www.citeulike.org/faq/data.adp
2 http://www.recsyschallenge.com/2017/

3.2 EVALUATION METRIC The ultimate goal of recommendation is to find the top-k items that users may be interested in. Accuracy@k has been widely adopted by previous ranking-based recommender systems (Koren, 2008; Chen et al., 2009), so we adopt this ranking-based metric to evaluate the quality of the recommended item ranking list. Metric for the marketing application. For this new application of the recommender system, there is not yet a metric to evaluate the marketing performance. We therefore define an evaluation metric similar to the ranking-based Accuracy@k used for the warm-start and cold-start recommendation in this paper. As illustrated in Fig. 2, we discover the k nearest potential users for an item j. The basic idea of the metric is to test whether a user who is really interested in an item appears in the list of k potential users. For each positive rating (r_{ij} = 1) in the testing dataset D_test: (1) we randomly choose 1000 negative users (users with a zero rating on item j) and find k potential users in the resulting 1001-user candidate set; (2) we check whether the positive user i (with r_{ij} = 1) appears in the k potential users list. If it does we record a 'hit', and otherwise a 'miss'. The metric, also denoted Accuracy@k, is formulated as

Accuracy@k = \frac{\#hit@k}{|D_{test}|}, (12)

where |D_test| is the size of the test set and #hit@k denotes the number of hits in the test set.
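The marketing metric of eq. (12) can be sketched as the hit-rate computation below. The sampling of 1000 negative users per positive pair follows the protocol above, while potential_users_fn is an assumed callable (for example, the KNN search over generated users sketched in Section 2.1) that returns the k candidate users for an item; names are illustrative.

```python
import numpy as np

def marketing_accuracy_at_k(test_pairs, R, potential_users_fn, k, n_neg=1000, seed=0):
    """Accuracy@k of eq. (12): fraction of positive (user i, item j) pairs for
    which user i appears among the k potential users mined from a 1001-user
    candidate set (1000 sampled negatives plus the positive user)."""
    rng = np.random.default_rng(seed)
    n_users = R.shape[0]
    hits = 0
    for i, j in test_pairs:                       # pairs with r_ij = 1
        negatives = set()
        while len(negatives) < n_neg:             # 1000 users with r_uj = 0
            u = int(rng.integers(n_users))
            if u != i and R[u, j] == 0:
                negatives.add(u)
        candidates = np.array(sorted(negatives) + [i])
        top_k = potential_users_fn(j, candidates, k)   # ranked k potential users
        hits += int(i in top_k)
    return hits / len(test_pairs)
```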
3.3 ACCURACY FOR MINING POTENTIAL USERS These experiments evaluate the performance of the marketing application in mining potential users for warm-start items on the test dataset R_we and for cold-start items on R_v. Specifically, we first train the model on the training dataset R_wt and the corresponding user and item content data. When training is complete, we fix the parameters and obtain the hash codes b_i and d_j by making forward passes. We then generate k potential users for the items in the test dataset via the procedure illustrated in Fig. 2 and evaluate the quality of the potential-user lists with Accuracy@k as defined in Section 3.2. The marketing analysis for warm-start items and cold-start items is reported in Fig. 3 (left), which shows how the accuracy varies with the number of potential users. The accuracy increases with the number of potential users for both the cold-start and warm-start settings, which is reasonable because mining more potential users yields a larger accuracy value as defined in Section 3.2. In particular, the proposed CGH is effective for cold-start items, which indicates that further e-commerce strategies can be developed for new items to attract those potential users. Moreover, from the marketing perspective, the gap between warm-start and cold-start recommendation is smaller than in traditional recommendation. Robust testing. We evaluate how the performance varies with the number of users in the test set who are really interested in the target item. The experimental results shown in Fig. 3 (center) indicate that the accuracy grows steadily with the size of the test set, which suggests that CGH is robust for the marketing application and is therefore practical in sparse and cold-start settings. Convergence of CGH. Fig. 3 (right) demonstrates the convergence of the proposed CGH: the reconstruction errors of ratings, user content, and item content, as well as the total error, converge as the number of samples seen by CGH increases, which further validates the correctness and effectiveness of the proposed CGH. 3.4 ACCURACY FOR RECOMMENDATION Accuracy for warm-start recommendation. Fig. 4 (left) shows the accuracy comparison for warm-start recommendation on the CiteUlike dataset, where collaborative generated embedding (CGE) denotes the real-valued version of CGH. The figure shows that the proposed CGH (CGE) has performance comparable with the other hybrid recommender systems. CGH is a hashing-based method, so recommendation is conducted with binary vectors, which is advantageous for online recommendation as introduced in Section 1, while the baselines are real-valued methods that conduct recommendation in a real latent space. Since real latent vectors intuitively carry more information than hash codes, a small gap between the real-valued hybrid recommenders and the hashing-based recommender is acceptable. In addition, there is still a small gap between the real-valued version CGE and DropoutNet, because CGH (CGE) includes the reconstruction error in its objective while DropoutNet does not. However, this reconstruction term is essential to the generative step of CGH, which makes it feasible to mine effective potential users; thus CGH (CGE) has the advantage in the marketing application. Accuracy for cold-start item recommendation. This experiment compares the competing hybrid recommender systems and CGH under the same cold-start item setting. We test the performance on the test dataset R_v introduced in Section 3.1. Specifically, in R_v each (cold-start) item has fewer than 5 positive ratings. We then select users with at least one positive rating as test users.
For each test user, we take his/her ratings on cold-start items as the test set and the remaining ratings as the training set. The goal is to test whether the held-out cold-start items can be accurately recommended to the right users. The experimental results for cold-start item recommendation are shown in Fig. 4 (center). CGH has performance comparable with the competing baselines and outperforms CTR. The results evaluated with another metric, MRR (detailed in Appendix A), are similar. Accuracy for cold-start user recommendation. We also test the performance in the cold-start user setting on the test dataset R_u introduced in Section 3.1. Specifically, in R_u each (cold-start) user has fewer than 5 positive ratings. We then select items with at least one positive rating as test items. For each test item, we take the ratings from cold-start users as the test set and the remaining ratings as the training set. The goal is to test whether the test items can be accurately recommended to the held-out users. Because only DropoutNet among the baselines applies to cold-start user recommendation, we compare CGH with DropoutNet only. The experimental results shown in Fig. 4 (right) indicate that the proposed CGH performs similarly to DropoutNet, while additionally offering the marketing application. 4 CONCLUSION In this paper, a generative recommendation framework called collaborative generated hashing (CGH) is proposed to address the cold-start and efficiency issues of recommendation. The main contributions of this paper are: (1) we develop a collaborative generated hashing framework based on the Minimum Description Length (MDL) principle, together with uncorrelated and balanced constraints on the inference process, to derive compact and informative hash codes, which is significant for the accuracy of recommendation and marketing; (2) we propose a marketing strategy based on CGH; specifically, we design a procedure to discover the k potential users through the generative step; (3) we evaluate the proposed scheme on two public datasets, and the experimental results show the effectiveness of the proposed CGH for both warm-start and cold-start recommendation. A. MRR RESULTS FOR RECOMMENDATION We evaluate the accuracy in terms of the MRR metric (Yin et al., 2014), shown in Table 1 for warm-start recommendation. The proposed CGH performs almost as well as the best of the real-valued competing baselines. Table 1 summarizes the MRR results for the four algorithms, with the best and second-best results marked. The performance of CGH is very close to the best result, which is consistent with the Accuracy@k results reported in Fig. 4.
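For reference, the MRR metric used in this appendix can be computed as in the short sketch below; names are illustrative, and the ranking per user is assumed to come from sorting items by the similarity δ(b_i, d_j).

```python
import numpy as np

def mean_reciprocal_rank(ranked_items, positives):
    """MRR over test users.
    ranked_items : dict user -> list of item ids ranked by delta(b_i, d_j)
    positives    : dict user -> set of held-out positive item ids"""
    scores = []
    for user, ranking in ranked_items.items():
        rr = 0.0
        for rank, item in enumerate(ranking, start=1):
            if item in positives.get(user, set()):
                rr = 1.0 / rank        # reciprocal rank of the first hit
                break
        scores.append(rr)
    return float(np.mean(scores))
```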
1. What are the proposed contributions of the paper regarding hashing schemes for recommendation settings? 2. How does the reviewer assess the significance and novelty of the three contributions claimed by the authors? 3. What are the concerns regarding the experimental results and their presentation? 4. How does the reviewer evaluate the choice of datasets and evaluation metrics used in the study? 5. Are there any questions or suggestions regarding the potential applications of the proposed method in marketing strategies?
Review
Review The paper proposes new hashing schemes to learn hash codes that describe users/items for the purpose of recommendation. It is claimed that doing so leads to three contributions: (1) the new hash codes themselves, which can apply to various recommendation settings; (2) the ability to better discover potential users in e-commerce settings; (3) state-of-the-art performance on several datasets. In terms of (1), the paper contrasts with Collaborative Deep Learning and DropoutNet. The main difference compared to CDL is that the hashing-based method learns binary codes. Compared to DropoutNet the main difference is the use of a stacked autoencoder, rather than a different neural network architecture. This latter contribution is perhaps a little thin (really it's just a technical detail); the additional contribution of being able to be used as a marketing strategy I didn't really follow. The "mining potential users" contribution (contribution 2) seemed a little ad-hoc to me. It ultimately seems like a variant of KNN, and it seems like something similar could be attempted for other methods. Training etc. looks fine, though I didn't fully check the details. The experiments seem not totally convincing. Most critically, the method does not seem to exhibit state-of-the-art performance as claimed, but is somewhat lower (in terms of accuracy) than other baselines. It might be better in terms of speed, but this doesn't seem to be thoroughly evaluated. The choice of accuracy as the only evaluation metric also seems unusual. The selection of datasets is also quite limited. It's claimed that these are the only datasets with user and item content, but why are both needed to run an experiment? Can't this method work with either (in which case many other datasets would be appropriate)? Overall the actual results seem mixed, and thus the paper hinges on its statement that it has the "advantage of applications in marketing area". However, this latter contribution seems handwavy. In order to be accepted, I'd need to see -- More clearly stated and demonstrable contributions -- More compelling experiments, in terms of datasets, evaluation measures, and actual performance
ICLR
1. What is the main contribution of the paper, and how does it relate to existing works in the field? 2. How effective is the proposed method compared to other methods in improving accuracy for warm-start and cold-start recommendations? 3. What are some concerns regarding the novelty of the proposed approach, and how does it differ from other methods in the field? 4. How well does the paper introduce related work and baselines, and what improvements can be made in this regard? 5. Are there any issues with the experiments conducted in the paper, and how might they be improved? 6. Are there any areas where the paper's writing could be improved, such as clarity or concision?
Review
Review This paper introduces a collaborative generated hashing (CGH) method to learn hash functions of users and items from content data. The approach first provides a strategy to discover potential users by the generative step, and performs inference by adding balanced and uncorrelated constraints. The experiments demonstrate some effectiveness in improving accuracy for both warm-start and cold-start recommendations. This paper should be rejected because (1) the method only combines existing techniques, such as Stochastic Generative Hashing (Eq. 1 and Eq. 6), and lacks novelty; (2) it lacks an introduction to related work and baselines; (3) the experimental results cannot support the claim, i.e. the effectiveness of CGH in the marketing area; and (4) the paper writing is awful and very hard to follow. Main argument Almost every essential part of the proposed method is from existing methods: (I) Eq. 1 and Eq. 6 are proposed by Stochastic Generative Hashing [1]; (II) Eq. 2 and Eq. 5 are a multivariate Bernoulli distribution; (III) Eq. 3 is a normal distribution; (IV) Eq. 7 is proposed by [2]; (V) the loss function Eq. 9 follows the Minimum Description Length principle [1]. The proposed method CGH is a combination of these techniques, and compared with these methods there are few novel aspects. This paper omits the related work part and gives a rough introduction to two baselines (CDL and DropoutNet) in a confusing way in Section 2. A concise and precise introduction to other methods would help the reader to better understand the related work and the advantages and disadvantages of the proposed method. The experiments do not provide convincing evidence of the correctness of the proposed method, especially in Section 3.3. In Section 3.3, Figure 3 shows the performance on Accuracy@k without any baseline. The results do not demonstrate the validity of the method and therefore cannot support the authors' claim. Things to improve the paper that did not impact the score: 1) page 1, 4th line in the 3rd paragraph, 'efficient' -> efficiently 2) page 3, 1st sentence in Section 2.1 3) page 3, hard to find the definition of the similarity function 4) page 4, 2nd line 'similar for' -> 'similar to' 5) page 6, 3rd paragraph 'From Fig. 2, we discover the k nearest potential users for an item j'. What do you mean? Reference [1] Bo Dai, Ruiqi Guo, Sanjiv Kumar, Niao He, and Le Song. Stochastic generative hashing. In Proceedings of the 34th International Conference on Machine Learning, Volume 70, pp. 913–922. JMLR.org, 2017. [2] Ke Zhou and Hongyuan Zha. Learning binary codes for collaborative filtering. In Proceedings of KDD'12, pp. 498–506. ACM, 2012.
ICLR
Title Collaborative Generated Hashing for Market Analysis and Fast Cold-start Recommendation Abstract Cold-start and efficiency issues of the Top-k recommendation are critical to largescale recommender systems. Previous hybrid recommendation methods are effective to deal with the cold-start issues by extracting real latent factors of cold-start items(users) from side information, but they still suffer low efficiency in online recommendation caused by the expensive similarity search in real latent space. This paper presents a collaborative generated hashing (CGH) to improve the efficiency by denoting users and items as binary codes, which applies to various settings: cold-start users, cold-start items and warm-start ones. Specifically, CGH is designed to learn hash functions of users and items through the Minimum Description Length (MDL) principle; thus, it can deal with various recommendation settings. In addition, CGH initiates a new marketing strategy through mining potential users by a generative step. To reconstruct effective users, the MDL principle is used to learn compact and informative binary codes from the content data. Extensive experiments on two public datasets show the advantages for recommendations in various settings over competing baselines and analyze the feasibility of the application in marketing. 1 INTRODUCTION With the explosion of e-commerce, most customers are accustomed to receiving a variety of recommendations, such as movies, books, news, or hotels they might be interested in. Traditional recommender systems just recommended items that are similar to what they liked or rated in the previous. Recommendations help users find their desirable items, and also creates new revenue opportunities for vendors, such as Amazon, Taobao, eBay, etc. Among them, one of the most popular recommendation methods, collaborative filtering is dependent on a large amount of user-item interactive information to provide an accurate recommendation. However, most of new e-commerce vendors do not have enough interactive data, which leads to low recommendation accuracy, i.e., cold-start issues. Previous studies on cold-start issues generally modeled as a combination of collaborative filtering and content filtering, known as hybrid recommender systems. Specifically, they learned real latent factors by incorporating the side information into the interactive data. Such as Collaborative Deep Learning (CDL) (Wang et al., 2015), Visual Bayesian Personalized Ranking (VBPR) (He & McAuley, 2016), Collaborative Topic modeling for Recommedation (CTR) (Wang & Blei, 2011), and the DropoutNet for addressing cold start (DropoutNet)(Volkovs et al., 2017), ABCPRec for Bridging Consumer and Producer Roles for User-Generated Content Recommendation (ABCPRec)(Tsukuda et al., 2019). All of the above hybrid recommender systems were modeled in real latent space, which leads to low efficiency for the online recommendation with the increasing scale of datasets. Recent studies show the promising of hashing based methods to tackle the efficiency challenge by representing users and items with binary codes (Zhang et al., 2014; Zhou & Zha, 2012; Zhang et al., 2016; Liu et al., 2019), because the preference score can be represented by the Hamming distance calculated via XOR operation efficient (Wang et al., 2014). However, the existing hashing based recommendations are learning-based frameworks, which leads to NP-hard problems of optimizing discrete objectives. 
Thus many scholars learned binary codes by some approximate techniques, such as the two-stage hashing learning method utilized in Preference Preserving Hashing(PPH) (Zhang et al., 2014) and the Iterative Quantization(ITQ) (Zhou & Zha, 2012). To reduce information loss, two learning-based hashing frameworks: bit-wise learning and block-wise learning were respectively proposed in hashing based recommendation frameworks (Zhang et al., 2016; Wang et al., 2019; Zhang et al., 2018; Zheng et al.). However, due to the requirement of binary outputs for learning-based hashing frameworks, the training procedure is expensive for large-scale recommendation, which motivates us to propose a generative approach to learn hash functions. In this paper, we propose the collaborative generated hashing(CGH) to learn hash functions of users and items from content data with the principle of Minimum Description Length (MDL) (Dai et al., 2017). In marketing area, mining potential customers is crucial to the e-commerce. CGH provides a strategy to discover potential users by the generative step. To reconstruct effective users, uncorrelated and balanced limits are imposed to learn compact and informative binary codes with the principle of the MDL. Especially, discovering potential customers is vital to the success of adding new items for a recommendation platform (Papies et al., 2017). Specifically, for a new item, we can generate a new potential user by the generative step (detailed in Section 2.1), and then search the nearest potential users in the user set. By recommending a new product to the potential users who might be interested in but didn’t plan to buy, further e-commerce strategies can be developed to attract those potential users. We organize the paper as follows: Section 2 introduce the main techniques of CGH. We first introduce the framework of CGH and compare it with the closely related competing baselines: CDL (Wang et al., 2015) and DropoutNet (Volkovs et al., 2017); we then formulate the generative step in Section 2.1 and the inference step in Section 2.2, respectively; we finally summarize the training objective and introduce the optimization in Section 2.3. Particularly, we demonstrate the process of mining potential users for the marketing application in Section 2.1. Section 3 presents the experimental results for marketing analysis and recommendation accuracy in various settings. Section 4 concludes the paper. The main contributions of this paper are summarized as follows: (1) We propose the Collaborative Generated Hashing (CGH) with the principle of MDL to learn compact but informative hash codes, which applies to various settings for recommendation. (2) We provides a marketing strategy by discovering potential users by the generative step of CGH, which can be applied to boost the e-commence development. (3) We evaluate the effectiveness of the proposed CGH compared with the state-of-the-art baselines, and demonstrate its robustness and convergence properties on the public datasets. 2 COLLABORATIVE GENERATED HASHING The framework of the proposed CGH is shown in Fig. 1(c), where U , V and R are respectively observed user content, item content and rating matrix. B and D are binary codes of users and items, respectively. CGH consists of the generative step marked as dashed lines and the inference step denoted by solid lines. Once training is finished, we fix the model and make forward passes to obtain binary codes B and D through the inference step, and then conduct recommendation. 
For the marketing application, we create a new user via the generative step. Compared with the closely related baseline CDL (Wang et al., 2015), the proposed CGH aims to learn binary codes instead of real latent vectors P and Q because of the advantage of hashing for online recommendation; in addition, CGH optimizes an objective based on the MDL principle, while CDL optimized a joint objective of rating loss and item content reconstruction error. Compared with DropoutNet (Volkovs et al., 2017), CGH can be used as a marketing strategy by discovering potential users; in addition, CGH learns hash functions with a stacked denoising autoencoder, while DropoutNet obtained real latent factors with a standard neural network. In the following, we start by formulating the generative process and demonstrating the application in the marketing area; we then formulate the inference step; we finally summarize the training objective and the optimization method. 2.1 MINING POTENTIAL USERS Given a sparse rating matrix R and item content data V ∈ R^{d_v}, where d_v is the dimension of the content vector and V is stacked by the bag-of-words vectors of item content in the item set V, most previous studies focused on deterministic frameworks that learn representations of items for item recommendation, such as CDL, CTR, and DropoutNet. In this paper, we explore a new strategy for item recommendation from a marketing perspective: mining potential users. We demonstrate the process of mining potential users for an item through the generative step in Fig. 2. After the inference step, the binary code of item j is available. By maximizing the similarity function δ(b_i, d_j) (defined in Eq. (3) below), the optimal binary code b_p is obtained. Then we generate the new user u_p through the generative step. Finally, we find potential users from the user set with a nearest-neighbor algorithm such as kNN. As a marketing strategy, this procedure can discover potential users for both warm-start items and cold-start items. Thus, from a marketing perspective, it can be regarded as another kind of item recommendation. The generation process (also referred to as the decoding process) is denoted by dashed lines in Fig. 1(c). Fixing the binary codes b_i and d_j of user i and item j, the bag-of-words vector u_i of user i (v_j of item j) is generated via p_{θ_u} (p_{θ_v}), and the rating r_{ij} is generated from b_i and d_j. We use a simple Gaussian distribution to model the generation of u_i and v_j given b_i and d_j, as in Stochastic Generative Hashing (SGH) (Dai et al., 2017): p(u_i | b_i) = N(C_u b_i, λ_u^{-1} I), p(v_j | d_j) = N(C_v d_j, λ_v^{-1} I), (1) where C_u = [c_{uk}]_{k=1}^{r}, c_{uk} ∈ R^{d_u}, is the codebook (Dai et al., 2017) with r codewords (and similarly for C_v), and d_u is the dimension of the bag-of-words vector of users. The prior is modeled as a multivariate Bernoulli distribution on the hash codes, p(b_i) ∼ B(ρ_u) and p(d_j) ∼ B(ρ_v); thus the prior probabilities are p(b_i) = ∏_{k=1}^{r} ρ_u^{b_{ik}} (1 − ρ_u)^{1 − b_{ik}}, p(d_j) = ∏_{k=1}^{r} ρ_v^{d_{jk}} (1 − ρ_v)^{1 − d_{jk}}. (2) We formulate the rating with the similarity between the binary codes of users and items, as in the most successful recommender systems based on matrix factorization (Koren et al., 2009). The rating is thus drawn from a normal distribution centered at the similarity value, p(r_{ij} | b_i, d_j) ∼ N(δ(b_i, d_j), C_{ij}^{-1}), (3) where δ(b_i, d_j) = 1 − (1/r) Hamdis(b_i, d_j) denotes the similarity between the binary codes b_i and d_j.
Hamdis(b_i, d_j) represents the Hamming distance between the two binary vectors, which has been widely applied in hashing-based recommender systems (Wang & Blei, 2011; Lian et al., 2017; Zhang et al., 2018). C_{ij} is a precision parameter that serves as the confidence for r_{ij}, similar to that in CTR (Wang & Blei, 2011) (C_{ij} = a if r_{ij} = 1 and C_{ij} = b otherwise), due to the fact that r_{ij} = 0 means user i is either not interested in item j or not aware of it. With the generative model constructed, the joint probability of the observed ratings, content vectors, and binary codes is given by p(R, U, V, B, D) = ∏_{i,j} p(r_{ij} | b_i, d_j) p(u_i | b_i) p(v_j | d_j) p(b_i) p(d_j). (4) 2.2 CONSTRAINTS ON BINARY LATENT VARIABLES The inference process (also referred to as the encoding process) is shown with solid lines in Fig. 1(c): the binary latent variables b_i (d_j) depend on the content vector u_i (v_j) and the rating R (shaded in Fig. 1). Inspired by recent work on generative hashing (Dai et al., 2017) and DropoutNet (Volkovs et al., 2017), we use a multivariate Bernoulli distribution with linear parametrization to model the inference process of b_i and d_j, i.e., q(b_i | ũ_i) = B(σ(T_u^T ũ_i)), q(d_j | ṽ_j) = B(σ(T_v^T ṽ_j)), (5) where ũ_i = [u_i, p_i] and ṽ_j = [v_j, q_j]. Here p_i and q_j are the results of an r-dimensional matrix factorization (Koren et al., 2009) of R, i.e., r_{ij} ≈ p_i^T q_j, and T_u = [t_{uk}]_{k=1}^{r}, t_{uk} ∈ R^{d_u + r}, and T_v = [t_{vk}]_{k=1}^{r}, t_{vk} ∈ R^{d_v + r}, are the transformation matrices of the linear parametrization. Following SGH (Dai et al., 2017), the MAP solution of Eq. (5) is readily given by b_i = argmax_{b_i} q(b_i | ũ_i) = (sign(T_u^T ũ_i) + 1)/2, d_j = argmax_{d_j} q(d_j | ṽ_j) = (sign(T_v^T ṽ_j) + 1)/2. (6) With a linear projection followed by a sign function, we can easily obtain the hash codes of users and items. However, hashing with a simple sign function suffers from large information loss (Zhang et al., 2016), which motivates us to add constraints on the parameters in the inference step. To derive compact and informative hash codes for users and items, we add balanced and uncorrelated constraints in the inference step. The balanced constraint is proposed to maximize the entropy of each binary bit (Zhou & Zha, 2012), and the uncorrelated constraint makes each bit independent of the others. We can then obtain compact and informative hash codes with the following constraints: Balanced constraint: ∑_k b_{ik} = 0, ∑_k d_{jk} = 0; Uncorrelated constraint: b_i b_i^T = I_r, d_j d_j^T = I_r. (7) From Eq. (6), b_i and d_j depend only on the parameters T_u and T_v, respectively, so we add constraints on T_u and T_v directly. Eq. (7) is thus equivalent to the following constraints: Balanced constraint: T_u^T 1 = 0, T_v^T 1 = 0; Uncorrelated constraint: T_u^T T_u = I, T_v^T T_v = I. (8) By imposing the above constraints in the training step, compact and informative hash codes can be obtained through the inference process. Next, we summarize the training objective and its optimization.
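Before moving to the training objective, here is a concrete illustration of the inference step in Eq. (6) and soft versions of the constraints in Eq. (8). The sketch is illustrative only: the variable names and shapes are assumptions, and the uncorrelated penalty is written against an r × r identity for dimensional consistency.

```python
import numpy as np

def infer_codes(X_tilde, T):
    """MAP hash codes from Eq. (6): b = (sign(T^T x_tilde) + 1) / 2.

    X_tilde: (n, d + r) concatenated content vectors and MF latent vectors.
    T:       (d + r, r) linear transformation matrix.
    Returns an (n, r) matrix of {0, 1} codes.
    """
    return (np.sign(X_tilde @ T) + 1.0) / 2.0

def constraint_penalties(T):
    """Soft penalties for the balanced and uncorrelated constraints of Eq. (8)."""
    r = T.shape[1]
    balanced = np.sum((T.T @ np.ones(T.shape[0])) ** 2)    # ||T^T 1||_2^2
    uncorrelated = np.sum((T.T @ T - np.eye(r)) ** 2)      # ||T^T T - I||_F^2
    return balanced, uncorrelated
```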
2.3 TRAINING OF CGH Since our goal is to reconstruct users, items, and ratings using the least information carried by the binary codes, we train CGH with the MDL principle, which finds the parameters that maximally compress the training data while keeping the information they carry; thus CGH aims to minimize the expected amount of information related to q: L(q) = E_q[log p(R, U, V, B, D) − log q(B, D)] = E_q[log p(R | B, D) + log p(U | B) + log p(V | D)] − KL(q(B | Ũ) || p(B)) − KL(q(D | Ṽ) || p(D)). (9) Maximizing the posterior probability is equivalent to maximizing L(q). Considering only the variational distribution q(B, D), the objective becomes L_MAP(Θ, Φ) = −∑_{i,j} (C_{ij}/2) (r_{ij} − δ(b_i, d_j))^2 − (λ_u/2) ∑_i ||u_i − C_u b_i||^2 − (λ_v/2) ∑_j ||v_j − C_v d_j||^2 − KL(q_{φ_u} || p_{θ_u}) − KL(q_{φ_v} || p_{θ_v}) − ∇(Θ, Φ), (10) where Θ = {θ_u, θ_v}, Φ = {φ_u, φ_v}, and ∇(Θ, Φ) is a regularizer on the parameters Θ and Φ. By optimizing the objective in Eq. (10) we obtain binary codes, but some bits may be correlated. To minimize the reconstruction error, SGH had to use a code length as large as r = 200. Our goal in this paper is to obtain compact and informative hash codes, so we impose the balanced and uncorrelated constraints on the hash codes via Eq. (8). Maximizing Eq. (10) is then transformed into minimizing the following constrained objective function of the proposed Collaborative Generated Hashing (CGH): L_CGH(Θ, Φ) = ∑_{i,j} (C_{ij}/2) (r_{ij} − δ(b_i, d_j))^2 + (λ_u/2) ∑_i ||u_i − C_u b_i||^2 + (λ_v/2) ∑_j ||v_j − C_v d_j||^2 + KL(q_{φ_u} || p_{θ_u}) + KL(q_{φ_v} || p_{θ_v}) + α_u ||T_u^T 1||_2^2 + α_v ||T_v^T 1||_2^2 + β_u ||T_u^T T_u − I||_2^2 + β_v ||T_v^T T_v − I||_2^2 + ∇(Θ, Φ). (11) The objective of CGH in Eq. (11) is a discrete optimization problem, which is difficult to optimize directly, so in the training stage the tanh function is used in place of the sign function in the inference step, and the continuous outputs serve as a relaxation of the hash codes. With this relaxation, we train all components jointly with back-propagation. After training, we fix the parameters and make forward passes to map the concatenated vectors in Ũ and Ṽ to binary codes B and D, respectively. Recommendation in the various settings is then conducted using B and D with the similarity score estimated as before, δ(b_i, d_j) = 1 − (1/r) Hamdis(b_i, d_j). The training settings depend on the recommendation setting, i.e., warm-start, cold-start item, and cold-start user. L_CGH(Θ, Φ) minimizes the rating loss and the two content reconstruction errors with regularizers. (a) For warm-start recommendation, ratings for all users and items are available, and the above objective is trivially optimized by setting the content weights to 0 and learning the hash functions with the observed ratings R. (b) For cold-start item recommendation, ratings for some items are missing, and the objective is optimized by setting the user content weight to 0 and learning the parameters with the observed ratings R and item content V. (c) The training setting for cold-start user recommendation is analogous to the cold-start item case.
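To make the relaxed training objective in Eq. (11) more concrete, here is a minimal sketch of one loss evaluation with the tanh relaxation of the sign function. It is a sketch under assumptions (a PyTorch implementation, dense rating/confidence matrices, and the KL and regularizer terms omitted for brevity); it is not the authors' code.

```python
import torch

def cgh_loss(R, C, U, V, U_tilde, V_tilde, Tu, Tv, Cu, Cv,
             lam_u=1.0, lam_v=1.0, alpha=1.0, beta=1.0):
    """Relaxed Eq. (11): rating loss + content reconstruction + constraint penalties.

    R, C:             (n_users, n_items) ratings and confidence weights C_ij.
    U, V:             (n_users, d_u), (n_items, d_v) bag-of-words content.
    U_tilde, V_tilde: concatenated content + MF latent vectors.
    Tu, Tv:           (d_u + r, r), (d_v + r, r) hash parameters.
    Cu, Cv:           (d_u, r), (d_v, r) codebooks.
    """
    r = Tu.shape[1]
    # tanh relaxation of the sign-based inference in Eq. (6); codes lie in (0, 1)
    B = (torch.tanh(U_tilde @ Tu) + 1) / 2
    D = (torch.tanh(V_tilde @ Tv) + 1) / 2
    # relaxed Hamming distance and similarity delta(b, d) = 1 - Hamdis / r
    ham = ((B.unsqueeze(1) - D.unsqueeze(0)) ** 2).sum(-1)        # (n_users, n_items)
    delta = 1 - ham / r
    rating_loss = (C / 2 * (R - delta) ** 2).sum()
    content_loss = (lam_u / 2) * ((U - B @ Cu.T) ** 2).sum() \
                 + (lam_v / 2) * ((V - D @ Cv.T) ** 2).sum()
    balanced = alpha * (((Tu.T @ torch.ones(Tu.shape[0])) ** 2).sum()
                        + ((Tv.T @ torch.ones(Tv.shape[0])) ** 2).sum())
    uncorrelated = beta * (((Tu.T @ Tu - torch.eye(r)) ** 2).sum()
                           + ((Tv.T @ Tv - torch.eye(r)) ** 2).sum())
    return rating_loss + content_loss + balanced + uncorrelated
```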
3 EXPERIMENTS We validate the proposed CGH on two public datasets, CiteULike and the RecSys 2017 Challenge dataset, from the following two aspects. (1) Marketing analysis: to validate the effectiveness of CGH in the marketing area, we first define a metric to evaluate the accuracy of mining potential users, and we then test the performance for warm-start and cold-start items, respectively. (2) Recommendation performance: we test the performance of CGH for recommendation in various settings, including warm-start, cold-start item, and cold-start user, in terms of Accuracy@k (Yin et al., 2014). In the following, we first introduce the experimental settings, followed by the analysis of the experimental results from the above aspects. 3.1 EXPERIMENTAL SETTINGS We evaluate the ability to find potential users and the accuracy of recommendation in different settings on the following datasets. (1) The CiteULike dataset (http://www.citeulike.org/faq/data.adp) contains 5,551 users, 16,980 articles, 204,986 observed user-article binary interaction pairs, and article abstracts. Similar to (Wang & Blei, 2011), we extract bag-of-words item vectors of dimension d_v = 8000 by ranking TF-IDF values. (2) The RecSys 2017 Challenge dataset (http://www.recsyschallenge.com/2017/) is the only publicly available dataset that contains both user and item content data, enabling both cold-start item and cold-start user recommendation. It contains 300M user-item interactions from 1.5M users and 1.3M items, together with content data collected from the career-oriented social network XING (a European analog of LinkedIn). Like (Volkovs et al., 2017), we evaluate all methods on binary rating data, with user content of dimension d_u = 831 and item content of dimension d_v = 2738. We randomly split the binary interactions (ratings) R into three disjoint parts: warm-start ratings R_w, cold-start user ratings R_u, and cold-start item ratings R_v; R_w is further split into a training set R_wt and a test set R_we. Correspondingly, the user and item content datasets are split into three disjoint parts. The random split is carried out 5 times independently, and we report the average of the experimental results. 3.2 EVALUATION METRIC The ultimate goal of recommendation is to find the top-k items a user may be interested in. Accuracy@k has been widely adopted by many previous ranking-based recommender systems (Koren, 2008; Chen et al., 2009). We therefore adopt the ranking-based evaluation metric Accuracy@k to evaluate the quality of the recommended item ranking list. Metric for the marketing application. For this new application of recommender systems, there is not yet a metric to evaluate the marketing performance. Thus, in this paper, we define an evaluation metric similar to the ranking-based Accuracy@k used for warm-start and cold-start recommendation. Following Fig. 2, we discover the k nearest potential users for an item j. The basic idea of the metric is to test whether a user who is really interested in an item appears in the list of k potential users. For each positive rating (r_{ij} = 1) in the test set D_test: (1) we randomly choose 1000 negative users (users k with r_{kj} = 0) and find k potential users within the resulting set of 1001 users; (2) we check whether the positive user i (with r_{ij} = 1) appears in the list of k potential users. If the answer is 'yes' we have a 'hit', and a 'miss' otherwise. The metric, also denoted Accuracy@k, is formulated as Accuracy@k = #hit@k / |D_test|, (12) where |D_test| is the size of the test set and #hit@k denotes the number of hits in the test set.
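A minimal sketch of this marketing Accuracy@k protocol could look as follows; it is illustrative only, and the data structures, the sampling details, and the find_potential_users callable are assumptions rather than the authors' evaluation code.

```python
import random

def marketing_accuracy_at_k(test_pairs, all_users, rated_by, find_potential_users,
                            k, n_negatives=1000):
    """Eq. (12): fraction of positive (user, item) pairs whose user appears among
    the k potential users mined for that item.

    test_pairs: list of (user, item) pairs with r_ij = 1.
    rated_by:   dict item -> set of users with a positive rating for that item.
    find_potential_users(item, candidates, k): returns k candidate users for the item.
    """
    hits = 0
    for user, item in test_pairs:
        negatives = random.sample(
            [u for u in all_users if u not in rated_by[item]], n_negatives)
        candidates = negatives + [user]          # the 1001-user candidate set
        if user in find_potential_users(item, candidates, k):
            hits += 1
    return hits / len(test_pairs)
```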
3.3 ACCURACY FOR MINING POTENTIAL USERS These experiments evaluate the performance of the marketing application in mining potential users for warm-start items on the test set R_we and for cold-start items on R_v. Specifically, we first train the model on the training set R_wt and the corresponding user and item content data. When training is complete, we fix the parameters and obtain the hash codes b_i and d_j by making forward passes. We then generate k potential users for the items in the test set following the procedure illustrated in Fig. 2, and evaluate the quality of the potential-user list with the Accuracy@k metric defined in Section 3.2. The marketing analysis for warm-start and cold-start items is reported in Fig. 3 (left), which shows how the accuracy varies with the number of potential users. It indicates that the accuracy increases with the number of potential users for both the cold-start and warm-start settings. This is reasonable because mining more potential users naturally yields a larger value of the accuracy defined in Section 3.2. In particular, the proposed CGH is effective for cold-start items, which indicates that further e-commerce strategies can be developed for new items to attract the discovered potential users. Moreover, from the marketing perspective, the gap between warm-start and cold-start recommendation is smaller than in traditional recommendation. Robustness testing. We evaluate how the performance varies with the number of users in the test set who are really interested in the target item. The experimental results shown in Fig. 3 (center) indicate that the accuracy grows steadily with the size of the test set, which shows that CGH is robust for the marketing application. Thus, it is practical in sparse and cold-start settings. Convergence of CGH. Fig. 3 (right) demonstrates the convergence of the proposed CGH: the reconstruction errors of ratings, user content, and item content, as well as the total error, converge as the number of samples seen by CGH grows, which further validates the correctness and effectiveness of the proposed CGH. 3.4 ACCURACY FOR RECOMMENDATION Accuracy for warm-start recommendation. Fig. 4 (left) shows the accuracy comparison for warm-start recommendation on the CiteULike dataset, where collaborative generated embedding (CGE) denotes the real-valued version of CGH. The figure shows that the proposed CGH (CGE) has performance comparable to the other hybrid recommender systems. The proposed CGH is a hashing-based method, so recommendation is conducted with binary vectors, which has the advantage for online recommendation introduced in Section 1, while the baselines are real-valued methods that conduct recommendation in a real latent space. Since real latent vectors intuitively carry more information than hash codes, a small gap between the real-valued hybrid baselines and the hashing-based method is acceptable. In addition, there is still a small gap between the real-valued CGE and DropoutNet, because the reconstruction error is considered in CGH (CGE) while DropoutNet does not consider it. However, the reconstruction term is essential to the generative step of CGH, which makes it feasible to mine effective potential users; thus CGH (CGE) has the advantage in the marketing application. Accuracy for cold-start item recommendation. This experiment compares the accuracy of the competing hybrid recommender systems and CGH under the same cold-start item setting. We test the performance on the test set R_v introduced in Section 3.1. Specifically, in R_v each (cold-start) item has fewer than 5 positive ratings. We then select users with at least one positive rating as test users.
For each test user, we first take his/her ratings related to cold-start items as the test set and the remaining ratings as the training set. Our goal is to test whether the marked-off cold-start items can be accurately recommended to the right users. The experimental results for cold-start item recommendation are shown in Fig. 4 (center). We conclude that CGH has performance comparable to the competing baselines and performs better than CTR. The results evaluated with another metric, MRR (detailed in Appendix A), are similar. Accuracy for cold-start user recommendation. We also test the performance in the cold-start user setting on the test set R_u introduced in Section 3.1. Specifically, in R_u each (cold-start) user has fewer than 5 positive ratings. We then select items with at least one positive rating as test items. For each test item, we first take the ratings related to cold-start users as the test set and the remaining ratings as the training set. Our goal is to test whether the test item can be accurately recommended to the marked-off users. Since only DropoutNet is applicable to cold-start user recommendation, we only compare the performance of CGH with DropoutNet. The experimental results for cold-start user recommendation, shown in Fig. 4 (right), indicate that our proposed CGH has performance similar to DropoutNet. Besides, CGH has the additional advantage of being applicable in the marketing area. 4 CONCLUSION In this paper, a generative recommendation framework called Collaborative Generated Hashing (CGH) is proposed to address the cold-start and efficiency issues of recommendation. The main contributions put forward in this paper are: (1) we develop a collaborative generated hashing framework with the Minimum Description Length (MDL) principle, together with uncorrelated and balanced constraints on the inference process, to derive compact and informative hash codes, which is significant for the accuracy of recommendation and marketing; (2) we propose a marketing strategy based on the proposed CGH; specifically, we design a framework to discover the k potential users through the generative step; (3) we evaluate the proposed scheme on two public datasets, and the experimental results show the effectiveness of the proposed CGH for both warm-start and cold-start recommendation. A. MRR RESULTS FOR RECOMMENDATION We evaluate the accuracy in terms of the MRR metric (Yin et al., 2014), shown in Table 1 for warm-start recommendation. Our proposed CGH performs almost as well as the best of the real-valued competing baselines. Table 1 summarizes the MRR results for the four algorithms, with the best and second-best results marked. We find that the performance of CGH is very close to the best result, which is consistent with the Accuracy@k outcome reported in Fig. 4.
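For reference, MRR can be computed from ranked recommendation lists roughly as follows; this is an illustrative sketch, not the evaluation code used in the paper, and the data structures are assumptions.

```python
def mean_reciprocal_rank(ranked_items_per_user, relevant_items_per_user):
    """MRR: average over users of 1 / rank of the first relevant item in the ranking."""
    reciprocal_ranks = []
    for user, ranking in ranked_items_per_user.items():
        relevant = relevant_items_per_user.get(user, set())
        rr = 0.0
        for rank, item in enumerate(ranking, start=1):
            if item in relevant:
                rr = 1.0 / rank
                break
        reciprocal_ranks.append(rr)
    return sum(reciprocal_ranks) / len(reciprocal_ranks)
```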
1. What is the main contribution of the paper regarding efficient user and item recommendations? 2. What are the strengths and weaknesses of the proposed approach in terms of computational efficiency and accuracy? 3. How does the method compare to other hybrid models in terms of efficiency and accuracy? 4. What are the limitations of the generative step for candidate selection? 5. How does the method differ from classical learning of latent variables? 6. Are there any quantitative evidences provided to support the main claim of improved computational efficiency? 7. How does the proposed approach compare to approximate nearest-neighbors search methods? 8. What are the suggestions for improving the work? 9. What are the typos and errors in the text and derivations? 10. Can you provide more organic connections between the problem being solved and its potential application?
Review
Review The work considers the problem of efficient user and item recommendations in the warm- and cold-start settings. It aims at improving the computational efficiency of best-candidate selection in these settings by utilizing a binary code representation. The transformation from an actual to a binary code representation is learned in a hybrid manner using both collaborative and content information. In order to keep such representations compact yet expressive enough, the authors impose a set of constraints that ensure balanced and uncorrelated transformations. Once binary codes are learned, inference can be made by virtue of efficient Hamming distance computations. Moreover, the search for candidate entities can be performed via the generative step that projects binary codes onto the actual feature space, where kNN-based techniques can be further utilized. A major drawback of the work is that it does not provide any quantitative evidence to support the main claim – that the proposed approach is at least more computationally efficient, since it underperforms competing methods in terms of accuracy. Essentially, the work answers the question of whether it is possible to utilize hashing techniques based on binary codes; however, the question of the practicality and efficiency of this approach remains open. I would therefore suggest rejecting the work. One of the weak points of other methods noted by the authors is "the expensive similarity search in real latent space". The authors aim to resolve that problem by learning hashing functions based on a "compact and informative" binary code representation. However, while the overall problem formulation is clearly described and the learning objective is well explained, no further evidence supporting the initial claims is provided. Moreover, the overall logic seems contradictory: 1. Binary codes allow efficient preference estimation via the XOR operation. 2. Learning binary codes is a difficult discrete optimization task. 3. Hence, we employ a special MDL principle for solving a constrained optimization problem and employ a relaxation of the hash codes to move away from discrete optimization. After relaxation, the hash codes are no longer binary. Do you still enforce a binary representation by thresholding or some other method? If yes, more words explaining this should be added to the text. If no, then how is it different from classical learning of latent variables? Essentially, relaxed non-binary hash codes are similar to latent vectors. The lack of description raises concerns about the overall efficiency, and the authors, unfortunately, provide no evidence of improved computational performance. Given that the proposed approach underperforms competing methods in terms of recommendation accuracy, more effort should be made to demonstrate its competitive advantages in terms of the time required for training and for generating predictions. Another contradictory part is the generative step for candidate selection. The idea of using inference to generate the most pertinent user vector for a selected item hash code is novel and interesting. However, it requires searching for neighbors in the real feature space, which can be very inefficient depending on the structure of the features. I'm not convinced that it is better than searching for neighbors directly in the latent space, which can be done in the majority of hybrid models. Moreover, there exist various approximate nearest-neighbor search methods, e.g., Annoy, NMSLib, Faiss, etc., which allow trading off accuracy and efficiency.
Considering that hash codes also lose some information (which is observed in the experimental results), it seems necessary to have a comparison with these approximate methods as well. It should also be noted that in some cases you don't even have to run the similarity search. Many hybrid models learn latent representations of features directly, and cold-start entities are straightforwardly described via a combination of the corresponding latent vectors of their features (e.g., Factorization Machines [Rendle 2010]). Hence, the affinity between a cold item and some user can be quickly estimated via the inner product of their latent vectors. Suggestions on improving the work: It is worth mentioning that recent studies raise certain concerns about the superiority of modern neural network-based approaches over simpler (and properly tuned) linear baselines; see the work by [Dacrema, Cremonesi, Jannach 2019] on "A Worrying Analysis of Recent Neural Recommendation Approaches". The DropoutNet method in your experiments is very similar to CDL in terms of accuracy. The latter, however, underperforms even simple kNN models, as shown by the work mentioned above. They haven't tested it in the cold-start regime, though. Still, I'd strongly recommend adding to your experiments a comparison with simpler hybrid models, e.g., Factorization Machines. Also note that there are even stronger baselines published recently, e.g., HybridSVD by [Frolov, Oseledets 2019]. Additional remarks: 1) Figure 3 and the related text seem to focus on rather obvious things. Indeed, by increasing the number of entities to compare against, you increase the chances of a hit. This part of the text basically states that the method works, which can already be seen from the other results. 2) A lot of attention is given to the "marketing application". It's ok to have it in the introduction and make a connection to the real-world problem; however, further mentions of it in the text feel excessive. In the experiments section you describe a standard evaluation procedure for the cold start; there is no need to refer to the marketing application again, as you do not provide any new metric. It would feel much more organic if you had the results of A/B testing on real users. Otherwise, I'd suggest focusing more on the problem that you're solving, not on a possible application. 3) The text is in a very unpolished state. It reads more like a draft version. There are many typos and errors both in the text and in the derivations. References: Rendle, Steffen. "Factorization machines." In 2010 IEEE International Conference on Data Mining, pp. 995-1000. IEEE, 2010. Frolov, Evgeny, and Ivan Oseledets. "HybridSVD: when collaborative information is not enough." In Proceedings of the 13th ACM Conference on Recommender Systems, pp. 331-339. ACM, 2019. Dacrema, Maurizio Ferrari, Paolo Cremonesi, and Dietmar Jannach. "Are we really making much progress? A worrying analysis of recent neural recommendation approaches." In Proceedings of the 13th ACM Conference on Recommender Systems, pp. 101-109. ACM, 2019.
ICLR
Title Can standard training with clean images outperform adversarial one in robust accuracy? Abstract Deep neural networks have achieved great success in almost every field. Unfortunately, they are very vulnerable to adversarial attacks. Many researchers have devoted themselves to making networks robust. The most effective approach is adversarial training, where malicious examples are generated and fed to train the network. However, this incurs a large computational load. In this work, we ask: "Can standard training with clean images outperform adversarial one in robust accuracy?" Surprisingly, the answer is YES. This success stems from two innovations. The first is a novel loss function that combines the traditional cross-entropy with a feature smoothing loss that encourages the features in an intermediate layer to be uniform. The collaboration between these terms lays the groundwork for our second innovation, namely Active Defense. When a clean or adversarial image is fed into the network, the defender first adds some random noise and then drives the example toward a new, smoother one by promoting feature smoothing. At that point, it can be classified correctly with high probability. Thus the perturbations carefully generated by the attacker can be diminished. While there is an inevitable drop in clean accuracy, it is still comparable with other methods. The great benefit is that the robust accuracy outperforms most of the existing methods and is quite resilient to increases in the perturbation budget. Moreover, adaptive attackers also fail to generate effective adversarial examples, as the induced perturbations outweigh the initial ones imposed by an adversary. 1 INTRODUCTION The seminal work of (Goodfellow et al., 2015) pointed out a surprising weakness of modern deep neural networks: although they can perform on par with human beings, their reliability is far from satisfactory. Almost imperceptible added perturbations are enough to mislead the network into outputting a wrong class label with high confidence. This dramatically undermines the deployment of networks in safety-critical applications: autonomous driving, image-based ID verification, and medical image analysis. Since then, researchers have heavily investigated this risk exposure and proposed different defense strategies. One direction is preprocessing techniques such as bit-depth reduction (Xu et al., 2018), JPEG compression, total variance minimization, image quilting (Guo et al., 2018), and Defense-GAN (Samangouei et al., 2018). The idea is to mitigate the effect of the added noise and save the network to some extent. Unfortunately, (Athalye et al., 2018) showed that most of these approaches rely on obfuscated gradients and can be defeated. The other line of research adopts various adversarial training techniques, where malicious examples are generated and fed to the network. A simple rationale behind this is that if the network has this knowledge, it will become wiser at test time. While there are different mechanisms, such as Mixup inference (Pang et al., 2020), feature scattering (Zhang & Wang, 2019), feature denoising (Xie et al., 2019), geometry-aware instance reweighting (Zhang et al., 2021), and channel-wise activation suppressing (Bai et al., 2021), they all share the same philosophy. While people are astonished by the fact that imperceptibly added perturbations can fool the network, theoretical works such as (Tsipras et al., 2019; Schmidt et al., 2018) showed that it is not entirely unexpected.
Unfortunately, there is no solution that does not require awareness of attack models. Ideally, all defenses should be ignorant of this. However, this knowledge is essential to the adversarial training method, which remains the most effective, although at the cost of a large computational load. Now the big question arises: "Can standard training with clean images outperform adversarial one in robust accuracy?" Here "clean images" means there is no manipulation of the inputs, not even adding random noise as in (Jin & Rinard, 2020), although that work uses noise for manifold regularization rather than adversarial training. At first glance, it seems hopeless, as a widely accepted principle in the adversarial learning community is that a network can be clever only if it has been exposed to deceptions before. On the other hand, networks are supposed to generalize well after standard training. How can they perform so badly under adversarial attacks? As a possible answer, (Ilyas et al., 2019) investigated the cause of adversarial examples and concluded that neural networks tend to exploit predictive yet brittle features to make classifications. These features are incomprehensible to humans and can thus be modified by adversarial attackers to mislead the networks, but (Ilyas et al., 2019) did not show how to teach the network to disregard these non-robust features and discover the robust ones for making final decisions. From this perspective, since it is difficult to tell the network to learn robust features, what if we add some hints to the loss function and let the network become robust in an implicit way? More specifically, in addition to the classical cross-entropy loss, we use a feature smoothing term that encourages the features in an intermediate layer to be uniform, as shown in the left of Figure 1. It sounds counterintuitive, as this term constrains the feature space and may lead to wrong classifications. However, due to the high capacity of networks, a very high standard accuracy can still be achieved with this additional term. When training completes, given an input, whether clean or crafted, extra perturbations can always be created by adding some random noise followed by the promotion of feature smoothing, at the cost of reduced accuracy. As long as these intentional perturbations outweigh the adversary's and the reduction in accuracy is affordable, the model will become robust. We call this procedure Active Defense, as shown in the right of Figure 1. We find experimentally that a clean example from CIFAR-10/CIFAR-100 can be perturbed with l∞ = 25/255 to 32/255, three to four times the l∞ = 8/255 usually adopted by an adversary, yet still be classified correctly with a high success rate. This fact provides adequate room for Active Defense to eliminate the effects of attacks. Unlike other state-of-the-art methods, our approach is independent of any attack model, and its performance is much more stable under attacks with different budgets. The contributions of this work are summarized as follows: • We propose a novel training scheme with an extra feature smoothing loss term that only takes clean images as inputs, fundamentally different from all existing adversarial training methods that need supplementary crafted data. • We present Active Defense, which adds a second round of perturbations through random noise and feature smoothing. It modifies malicious examples in a way that is friendly to the network. This deviates from conventional passive defenses that keep the input intact.
2 RELATED WORKS Due to the adversarial threats to deep learning applications, many works aim to improve robustness. Most of them adopt adversarial training. Among them, only a few pay attention to the features in intermediate layers, as listed below. Feature denoising (Xie et al., 2019) found that small perturbations in pixel space can lead to very substantial noise in the feature maps of the network, and proposed various filters to denoise them. (Zhang & Wang, 2019) proposed to generate adversarial images for training through feature scattering in the latent space. In essence, perturbed images are produced collaboratively via optimal transport distance minimization; (Zhang & Wang, 2019) used the feature maps as a guide for making new examples. Compared with these two, we force the intermediate feature map to be uniform through an additional loss term within the standard training framework, without any modification of the network as in (Xie et al., 2019) or any other manipulation of features as in (Zhang & Wang, 2019). Regarding Active Defense, we have not seen any similar work. Perhaps the most related one is (Yang et al., 2019), which used sophisticated matrix completion techniques to reconstruct randomly masked images. Our motivation is very different, as we try to exploit the deep network itself to enhance robustness without borrowing any third-party algorithms. 3 BACKGROUND In the classification problem, given a training data set of image-label pairs D = {(x_i, y_i)}_{i=1}^{n} with y_i ∈ {1, 2, ..., M}, the goal is to find an output probability vector F(x) of length M, indexed by j, ideally such that y = argmax_j F_j(x). Of course, there is always a mismatch between these two terms. The key is to find a suitable loss function L such that the empirical risk minimization (ERM) of (1/n) ∑_{i=1}^{n} L(F(x_i), y_i) can be carried out. Note that F(x_i) is a vector while y_i is a scalar label, so the first step is to transform y_i into a vector through a vector function G(y_i). People usually adopt the one-hot coding H(y_i) of length M, with all elements 0 except H_{y_i}(y_i) = 1. The two probability distribution vectors F(x_i) and H(y_i) can then be compared with cross-entropy. An adversary crafts an adversarial example x_adv that is close to x, with ||x_adv − x||_p ≤ ε, but is misclassified as some other class. In this paper, we only consider attacks with p = ∞. The most commonly used strategy is the iterative projected gradient descent method (PGD), x_adv^{t+1} = P(x_adv^{t} + β × sign(∇_x L(x_adv^{t}, G(y)))), (1) where β is the step size and P projects the generated example onto the feasible region. Note that the loss L used in an adversarial attack may differ from the L used in training; one may choose the traditional cross-entropy loss with one-hot coding or the CW loss (Carlini & Wagner, 2017) to implement Equation 1. Recently, a budget-aware, step-size-free variant of PGD was proposed by (Croce & Hein, 2020), and since then an ensemble of diverse parameter-free attacks called AutoAttack has become the de facto routine for robust accuracy evaluation. 4 METHOD In general, our method is very simple. In training, for the feature map F^l of a particular layer l with dimensions W × H × C, we use the loss L = L_ce + max(L_{F^l}, δ), (2) with L_{F^l} = (1/(W × H × C)) ∑_{i=1}^{W} ∑_{j=1}^{H} ∑_{k=1}^{C} |F^l_{i,j,k} − mean(F^l)|. (3) Here L has two terms: L_ce is the cross-entropy loss, and L_{F^l} is our novel feature smoothing loss. It is quite similar to the L1 norm of the particular feature cube and encourages the cube to be uniform. To avoid overfitting to the feature smoothing loss, we use max(L_{F^l}, δ), which disables the derivative of L_{F^l} when it drops below δ. In summary, we have two parameters: the feature layer l and the smoothing upper bound δ.
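The loss in Eqs. (2)-(3) can be written in a few lines; the sketch below is illustrative only (the PyTorch framework, tensor layout, and per-batch averaging are assumptions, not the authors' code), with δ set to the 0.02 used in the experiments.

```python
import torch
import torch.nn.functional as F

def feature_smoothing_loss(feat):
    """Eq. (3): mean absolute deviation of the feature map from its per-example mean.

    feat: intermediate feature map of shape (batch, C, H, W).
    """
    mean = feat.mean(dim=(1, 2, 3), keepdim=True)
    return (feat - mean).abs().mean()

def training_loss(logits, labels, feat, delta=0.02):
    """Eq. (2): cross-entropy plus the clamped feature smoothing term max(L_Fl, delta)."""
    l_ce = F.cross_entropy(logits, labels)
    l_feat = feature_smoothing_loss(feat)
    return l_ce + torch.clamp(l_feat, min=delta)   # gradient of l_feat vanishes below delta
```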
Although this loss function sounds straightforward for our purpose of making the intermediate feature map smooth, there is a novel insight from the perspective of the trade-off between the two terms. In order to achieve a low L_{F^l}, L_ce will increase. In other words, after training, the network somehow understands that feature smoothing is not very harmful: it only causes accuracy to drop to some extent. There is a huge implication in terms of robust accuracy. If we only use L_ce, the network has no way to deal with a crafted example except to be fooled. In our case, however, L_{F^l} gives us a dissipation channel for malicious perturbations that we can exploit via feature smoothing. Hopefully, this will remove most of the perturbations generated by an adversary, which we elaborate on in our Active Defense design. The other concern may relate to the feature space constraints. In fact, due to the high capacity of networks, there is almost no difference in standard training accuracy between our loss and the plain cross-entropy loss. From the discussion above, our Active Defense is very intuitive. It consists of the four steps depicted in the right of Figure 1. In Step 1, we add some random noise to the input, which can somewhat reduce the effect of adversarial disruptions; more importantly, this ensures a sufficient number of feature smoothing iterations. Otherwise, the attacker could bypass this step and our Active Defense would fail. Steps 2 and 3 are just forward/backward passes related to L_{F^l}. In Step 4, the noisy image gets smoothed via gradient descent on L_{F^l} and is fed into the network for another round of feature smoothing. The overall procedure of the proposed approach is given in Algorithm 1. There are only three parameters: σ for the uniform noise, β for the update step size, and δ̃ for the upper bound of the feature loss at test time, which is usually lower than the training bound δ in Equation 2, since we pursue extra feature smoothing to deal with adversarial attacks. The example updated through Active Defense is fed into the network for the final class decision. This algorithm can be run several times, which we denote as the outer loop (outerloop) in the following sections.
Algorithm 1 Active Defense Algorithm
1: procedure ACTIVEDEFENSE(t, L_{F^l}) ▷ t is a test example.
2: t = t + uniform(−σ, σ) ▷ Step 1; σ is the parameter of the uniform distribution.
3: l = L_{F^l}(t) ▷ Step 2
4: while l > δ̃ do ▷ δ̃ is the upper bound of the feature loss at test time.
5: d = ∂l/∂t ▷ Step 3
6: t = t − β × d ▷ Step 4; β is the step size.
7: l = L_{F^l}(t) ▷ Step 2
8: end while
9: return t
10: end procedure
5 EXPERIMENTS To evaluate the performance of our approach, we run it on two datasets, CIFAR-10 and CIFAR-100 (Krizhevsky, 2009), and compare it with other state-of-the-art adversarial training methods. The network we choose is WideResNet-28-10 (Zagoruyko & Komodakis, 2016) with three groups. Naturally, the feature maps F^l (l = 0, 1, 2) denote the outputs of the three groups, respectively, and we choose l = 0, as it is closest to the input and has the strongest backward-pass derivative. We evaluate the robust accuracy with perturbation budgets ε of both 8/255 and 16/255. It needs to be emphasized that our algorithm is independent of ε.
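A compact Python rendering of Algorithm 1 might look as follows. This is a sketch under assumptions: PyTorch, a feature_loss(x) callable that returns L_{F^l} for an input batch, the hyperparameters reported in Section 5.2, and an added iteration cap as a safeguard; it is not the authors' released code. The enhanced defense discussed later simply runs this procedure several times (the outer loop) and aggregates the resulting class probabilities.

```python
import torch

def active_defense(x, feature_loss, sigma=0.075, beta=80.0, delta_test=0.0124,
                   max_iters=500):
    """Algorithm 1: add uniform noise, then smooth x until L_Fl(x) <= delta_test."""
    x = x + torch.empty_like(x).uniform_(-sigma, sigma)       # Step 1
    x = x.detach().requires_grad_(True)
    for _ in range(max_iters):                                # cap on the while-loop
        loss = feature_loss(x)                                # Step 2
        if loss.item() <= delta_test:
            break
        grad, = torch.autograd.grad(loss, x)                  # Step 3
        x = (x - beta * grad).detach().requires_grad_(True)   # Step 4
    return x.detach()
```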
5.1 STANDARD TRAINING Our approach adopts standard training with clean images using an additional feature loss term that encourages the intermediate features to be smooth. In all our experiments, the smoothing upper bound δ in Equation 2 is set to 0.02. Figures 2 and 3 show the training and test accuracy and the variation of the feature loss per training epoch. As expected, both settings achieve almost 100% training accuracy, and there is only a tiny drop in test accuracy with versus without the feature smoothing term: 95.02% vs. 95.83% for CIFAR-10, and 78.3% vs. 79.86% for CIFAR-100. This is because the classifiers have a large number of parameters and are powerful enough. There is an interesting observation about the feature loss: even when we do not impose any constraint, it still decreases slowly; when we do enforce this term, it decreases very quickly at the beginning. This strongly hints that feature smoothing benefits the generalization ability of networks. The feature loss with and without the feature smoothing term is dramatically different (reported as training/test): 0.0130/0.0135 vs. 0.0725/0.0713 for CIFAR-10, and 0.01258/0.013 vs. 0.0901/0.0909 for CIFAR-100. This fact indicates the effect of the additional loss term. Also, although we set δ to 0.02, the feature loss is lower than that number, especially for CIFAR-100, which is further evidence that classification and feature smoothing are cooperative to some extent. Our Active Defense design takes advantage of exactly this harmony. 5.2 ACTIVE DEFENSE Equation 2 has two terms, L_{F^l} and L_ce. What is the consequence if we make the image even smoother, i.e., decrease L_{F^l}? This essentially removes some brittle features; however, since we train the network with L_{F^l}, the network somehow understands that the remaining features are still useful for image classification. In other words, the drop in classification accuracy should be small. This is key to the success of our Active Defense. To counter adversarial perturbations, we intentionally add some noise and smooth it out as in Algorithm 1. In all our experiments, we choose σ = 0.075 for the uniform noise and β = 80 for the update step size, while the upper bound of the feature loss δ̃ is 0.0124 for CIFAR-10 and 0.0118 for CIFAR-100. Experimental results with ε = 8/255 in Figures 4 and 5 show that Active Defense can effectively recover the semantically significant structures destroyed by the adversarial attacks and by our intentionally added noise, which leads to correct final classification. This evidence highlights the role played by our Active Defense. Usually, a single Active Defense pass takes about 200-400 iterations of the while-loop body in Algorithm 1 for ε = 8/255 and up to around 500 for ε = 16/255. In practice, this while-loop body can be implemented in parallel. We evaluate the robust accuracy of our approach using AutoAttack and compare the performance with other state-of-the-art methods in Tables 1 and 2. Since AutoAttack comprises a set of attack methods, it wins if any of them succeeds. This poses a challenge for our approach because our solution is stochastic: an attack may not be strong on average over 10k images, but for a particular example it still has some chance to win. This chance increases especially when the whole sequence of attacks is applied one after another, namely clean, APGD-CE, targeted APGD-DLR, FAB-T, and Square.
For example, in Table 1, outerloop = 1 for CIFAR-10 achieves its lowest per-attack accuracy of 73.10% under APGD-CE, but AA only gets 62.31%, more than a ten-percent drop. However, for the other comparison methods, this gap is small. To further improve the stability and narrow this gap, we run the defense ten times, i.e., outerloop = 10, and aggregate the output probabilities of the runs. In Table 1, we only consider WideResNet-28-10, and ours with outerloop = 10 is the best. One may still wonder what happens if we apply our Active Defense to a model classically trained with only the cross-entropy loss. We also list these results in the rows labeled Standard (using Active Defense). It turns out that it is hard to exit the while loop in Algorithm 1, so we set δ̃ to 0.0562 for CIFAR-10 and 0.0880 for CIFAR-100. The accuracy is very low. This fact justifies the necessity of our feature smoothing loss term. Table 2 lists the AA accuracy for all model architectures, and ours is still among the best. Note that when ε = 16/255, we rerun the whole AA evaluation, including the clean setting, with a slight change in clean accuracy due to the randomness of our method, and our AA accuracy significantly outperforms all others. 5.3 ADAPTIVE ADVERSARIES While it is never easy to propose adaptive attacks, we try our best to defeat our own defense scheme. Since APGD-CE appears to be the strongest attack in Table 1, we adapt it to generate possibly stronger adversarial examples. Specifically, the loss function of APGD-CE is modified to L = L_ce + λ × max(L_{F^l}, δ). (4) In other words, we study how the feature loss term influences the efficiency of the attack, with λ spaced uniformly (on a log scale) from 0.1 to 100, both positive and negative. A positive λ means more distortion in feature space, which pressures our smoothing process in Active Defense; for a negative λ, the number of iterations of the while loop in Algorithm 1 should be small, which could result in more effective attacks. However, these efforts are useless, as shown by the robust accuracy in Table 3, which is almost the same as for λ = 0. 5.4 DISCUSSION All of the above attacks are one-shot in nature: attackers may take an arbitrarily long time and obtain many candidate examples, but can submit only one. Since our method introduces random noise in the Active Defense phase, the reported results are average success rates over the 10K test examples. One may come up with a simple attack that sends the same example many times, so that there is always a chance of defeating the defense. But this brute-force attack can be countered by an enhanced version of our defense, namely running Algorithm 1 many times and aggregating the probabilities of the runs. This will, however, increase the computational load. On the other hand, this makes sense, since there is no free lunch. Also, very fortunately, our defense capacity can be scaled up much more conveniently than retraining the model from scratch as in conventional adversarial training methods. Another very nice advantage is that our enhancement is definitively free from robust overfitting (Rice et al., 2020), as there is no attack model engaged at all. [Figure: example images with predicted labels for clean, perturbed (Ptb.), and after-defense (Aft.) inputs under the APGD-CE and APGD-T attacks.] 6 CONCLUSION Adversarial learning is of great interest to the deep learning community. Most of the previous works focus on the efficient generation of malicious examples.
However, in this paper, we shed some light on a different question: is it possible for a network to be robust without being taught with malicious examples? We propose a standard training scheme with an additional feature smoothing loss term, which differs from all existing methods in that no adversarial inputs are involved. The standard cross-entropy loss and the feature smoothing loss can collaborate to some extent during training. At test time, we adopt Active Defense to smooth out the feature maps of adversarial inputs. The experimental results demonstrate that this simple method can greatly enhance the robustness of networks. In future work, we will carry out theoretical analysis and explore other forms of cooperative loss that might be more beneficial than the feature smoothing one.
1. What is the novel defense mechanism proposed in the paper against adversarial examples? 2. What are the strengths and weaknesses of the proposed approach compared to SOTA adversarial training based defenses? 3. How does the reviewer assess the writing quality and clarity of certain sections in the paper? 4. Are there any suggestions regarding improving the practicality and scalability of the defense, particularly regarding test time smoothening and forward-backward runs? 5. Do you have any questions about the justification and motivation of introducing the additional loss, specifically regarding the concept of "Losses being comfortable with each other"? 6. How does the reviewer suggest improving the figures (4 and 5) and their corresponding explanations in the paper? 7. Any minor suggestions for improving the paper, such as changing "s" to "l" in Algorithm 1?
Summary Of The Paper Review
Summary Of The Paper The paper introduces a new empirical defense against adversarial examples. The new defense is competitive to SOTA adversarial training based defenses, but does not use adversarial training. Specifically, the paper proposes adding an additional loss to the standard classification loss which “smoothens” the feature maps of a specific layer in the model. At test time, random noise is added to each input, then the input is smoothened out (in an attempt to remove the adversary), by minimizing the Smoothening loss introduced earlier with respect to the input. The introduced defense is rigorously evaluated using SOTA attacks and shows competitive robust accuracies to adversarially-trained models on CIFAR-10 and CIFAR-100. Review Strengths: The introduced defense is sound and novel The defense does not depend on adversarial training, so it is quite fast to train in practice compared to adversarial training. The defense is evaluated using standard white-box and blackbox attacks and compares to SOTA robustness benchmarks. Weaknesses: The new defense requires smoothening the input at test time, which requires multiple (200 - 400) forward-backward runs through the network, which might be impractical in some scenarios. The writing quality can be improved and it is hard to follow some sections on the paper. The submission does not include code. Detailed questions/suggestions: As I understand, at test time, the defense smoothens the input by adding noise and doing 200-400 iterations through the network to minimize the smoothing loss. Does this have to be sequential? Or could these be done in parallel (batches)? This matters for assessing the practicality of the defenses. That being said, it would also be helpful to add an inference time table to make this clearer for the reader. The paper mentions: “What is the consequence if we make the image smoother further, i.e., to decrease L_{F^l} ? The natural effect will be the increase of cross-entropy loss L_{ce} , but the drop in classification accuracy should be small since L_{F^l} and L_{ce} are comfortable with each other.” in an attempt to justify or motivate the introduced loss? But I didn’t really get what this means. What does it mean that two losses are “comfortable with each other”? I think this is too hand-wavy and I encourage the authors to fix this. I am curious how the defense scales to ImageNet. I am not sure what Fig 4 and 5 are adding to the paper. What should the reader be looking at in these figures? I think adding takeaways of these figs in text or caption should be helpful. Small changes: In Alg 1, “s” should be “l”.
ICLR
Title Can standard training with clean images outperform adversarial one in robust accuracy? Abstract The deep learning network has achieved great success in almost every field. Unfortunately, it is very vulnerable to adversarial attacks. A lot of researchers have devoted themselves to making the network robust. The most effective one is adversarial training, where malicious examples are generated and fed to train the network. However, this will incur a big computation load. In this work, we ask: “Can standard training with clean images outperform adversarial one in robust accuracy?” Surprisingly, the answer is YES. This success stems from two innovations. The first is a novel loss function that combines the traditional cross-entropy with the feature smoothing loss that encourages the features in an intermediate layer to be uniform. The collaboration between these terms sets up the grounds for our second innovation, namely Active Defense. When a clean or adversarial image feeds into the network, the defender first adds some random noise, then induces this example to a new smoother one via promotion of feature smoothing. At that point, it can be classified correctly with high probability. Thus the perturbations carefully generated by the attacker can be diminished. While there is an inevitable clean accuracy drop, it is still comparable with others. The great benefit is the robust accuracy outperforms most of the existing methods and is quite resilient to the increase of perturbation budget. Moreover, adaptive attackers also fail to generate effective adversarial examples as the induced perturbations overweight the initial ones imposed by an adversary. 1 INTRODUCTION The seminal work of (Goodfellow et al., 2015) pointed out a surprising weakness of modern deep neural networks: although they can perform on par with human beings, their reliability is far from satisfaction. Almost imperceptibly added perturbations will be enough to mislead the network to output a wrong class label with high confidence. It will dramatically undermine the deployment of networks in some safety-critical applications: autonomous driving, image-based ID verification, and medical image analysis. Since then, researchers have heavily investigated this risk exposure and proposed different defense strategies. One direction is some prepossessing techniques such as bit-depth reduction (Xu et al., 2018), JPEG compression, total variance minimization, image quilting (Guo et al., 2018), and Defense-GAN (Samangouei et al., 2018). The idea is to mitigate the effect of added noise and save the network to some extent. Unfortunately, (Athalye et al., 2018) showed that most of these approaches are based on obfuscated gradients and can be defeated. The other line of research adopts various adversarial training techniques where malicious examples are generated and fed to the network. A simple rationale behind this is if the network has this knowledge, it will become wise in test time. While there are different mechanisms such as Mixup inference (Pang et al., 2020), feature scattering (Zhang & Wang, 2019), feature denoising (Xie et al., 2019), geometry-aware instance reweighting (Zhang et al., 2021), and channel-wise activation suppressing (Bai et al., 2021), they all share the same philosophy. While people are astonished by the fact that imperceptibly added perturbations can fool the network, some theoretical works such as (Tsipras et al., 2019; Schmidt et al., 2018) showed that it is not entirely unexpected. 
Unfortunately, there are no solutions without the awareness of attack models. Ideally, all defenses should be ignorant of this. However, this knowledge is essential to the adversarial training method that remains most effective, although at the cost of a large computation load. Now the big question arises: “Can standard training with clean images outperform adversarial one in robust accuracy?” Here “clean images” means there is no manipulation of inputs even by adding some random noise such as (Jin & Rinard, 2020), although it is for manifold regularization rather than adversarial training. At first glance, it seems hopeless, as a widely accepted principle in the adversarial learning community is that a network can be clever only if it has been exposed to deceptions before. However, on the other hand, the networks are supposed to generalize well after standard training. How can it perform so badly for adversarial attacks? As a possible answer to this, (Ilyas et al., 2019) investigated the cause of adversarial examples and concluded that neural networks tend to exploit predictive yet brittle features to make classifications. These features are incomprehensible to humans and thus can be modified by adversarial attackers to mislead the networks, but (Ilyas et al., 2019) did not show how to teach the network to disregard these non-robust features and discover the robust ones to make final decisions. From this perspective, as it is difficult to tell the network to learn robust features, what if we add some hints in the loss function and let the network become robust in an implicit way? More specifically, in addition to the classical cross-entropy loss, we use a feature smoothing term that encourages the features in an intermediate layer to be uniform, as shown in the left of Figure 1. It sounds counterintuitive as this term will constrain the space of features that may lead to a wrong classification. However, due to the high capacity of networks, a very high standard accuracy can still be achieved with this additional term. When training completes, given an input, whether clean or crafted, extra perturbations can always be created by some added random noise followed by the promotion of feature smoothing at the cost of reduced accuracy. So long as these intentional perturbations overweight the adversary’s and the reduction in accuracy is affordable, the model will become robust. We call this procedure Active Defense, as shown in the right of Figure 1. We find experimentally, a clean example from CIFAR10/CIFAR-100 can be perturbed with l∞ = 25/255 ∼ 32/255, three to four times of l∞ = 8/255 usually adopted by an adversary, yet be classified with a high success rate. This fact sets up the adequate space for Active Defense for eliminating the effects of attacks. Our approach is independent of any attack models compared with other state-of-the-art methods, and its performance is much more stable under attacks with different budgets. The contributions of this work are summarized as follows: • We propose a novel training scheme with an extra feature smoothing loss term that only takes clean images as inputs, fundamentally different from all existing adversarial training methods that need supplementary crafted data. •We present Active Defense that adds the second round of perturbations through random noise and feature smoothing. It modifies the malicious examples in a way that is friendly to the network. This deviates from conventional passive ones that keep the input intact. 
2 RELATED WORKS

Due to adversarial threats to deep learning applications, many works aim to improve robustness, and most of them adopt adversarial training. Among them, only a few attend to the features in intermediate layers, as listed below. Feature denoising (Xie et al., 2019) found that small perturbations in pixel space can lead to substantial noise in the feature maps of the network and proposed various filters to denoise them. (Zhang & Wang, 2019) proposed to generate adversarial images for training through feature scattering in the latent space; in essence, perturbed images are produced collaboratively via optimal transport distance minimization, using the feature maps as a guide to making new examples. Compared with these two, we force the intermediate feature map to be uniform through an additional loss term within the standard training framework, without any modification of the network as in (Xie et al., 2019) or any other manipulation of features as in (Zhang & Wang, 2019). Regarding Active Defense, we have not seen any similar work. Perhaps the most related one is (Yang et al., 2019), which used sophisticated matrix completion techniques to reconstruct randomly masked images. Our motivation is very different, as we try to exploit the deep network itself to enhance robustness without borrowing any third-party algorithms.

3 BACKGROUND

In the classification problem, given a training set of image-label pairs $D = \{(x_i, y_i)\}_{i=1}^{n}$ where $y_i \in \{1, 2, \dots, M\}$, the goal is to find an output probability vector $F(x)$ of length $M$ indexed by $j$, ideally such that $y = \arg\max_j F_j(x)$. Of course, there is always a mismatch between these two terms. The key is to find a suitable loss function $L$ such that the empirical risk $\frac{1}{n}\sum_{i=1}^{n} L(F(x_i), y_i)$ can be minimized. Note that $F(x_i)$ is a vector while $y_i$ is a scalar label, so the very first step is to transform $y_i$ into a vector through a function $G(y_i)$. One usually adopts the one-hot coding $H(y_i)$ of length $M$ with all elements being 0 except $H_{y_i}(y_i) = 1$. The two probability vectors $F(x_i)$ and $H(y_i)$ can then be compared with cross-entropy. An adversary crafts an adversarial example $x_{adv}$ that is closest to $x$ with $\|x_{adv} - x\|_p \le \varepsilon$ but misclassified as some other class. In this paper, we only consider attacks with $p = \infty$. The most commonly used strategy is the iterative projected gradient descent method (PGD)
$$x_{adv}^{t+1} = P\left(x_{adv}^{t} + \beta \times \mathrm{sign}\left(\nabla_x L(x_{adv}^{t}, G(y))\right)\right), \quad (1)$$
where $\beta$ is the step size and $P$ projects the generated example onto the feasible region. Please note that the $L$ in the adversarial attack may differ from the $L$ used in training; one may choose the traditional cross-entropy loss with one-hot coding or the CW loss (Carlini & Wagner, 2017) to implement Equation 1. Recently, a budget-aware, step-size-free variant of PGD was proposed by (Croce & Hein, 2020), and since then an ensemble of diverse parameter-free attacks called AutoAttack has become the de facto routine for robust accuracy evaluation.

4 METHOD

In general, our method is very simple. In training, for the feature map $F^l$ of a particular layer $l$ with dimensions $W \times H \times C$, we use the loss
$$L = L_{ce} + \max(L_{F^l}, \delta), \quad (2)$$
$$L_{F^l} = \frac{1}{W \times H \times C} \sum_{i=1}^{W} \sum_{j=1}^{H} \sum_{k=1}^{C} \left| F^l_{i,j,k} - \mathrm{mean}(F^l) \right|. \quad (3)$$
Here $L$ has two terms: $L_{ce}$ is the cross-entropy loss, and $L_{F^l}$ is our novel feature smoothing loss.
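As a concrete illustration, the following PyTorch-style sketch implements the loss of Equations 2-3. It is a minimal sketch under the assumption that the backbone exposes the chosen intermediate feature map (e.g., via a forward hook); the function names `feature_smoothing_loss` and `training_loss` are ours for illustration, not the authors' released code.

```python
import torch
import torch.nn.functional as F

def feature_smoothing_loss(feat):
    """Eq. (3): mean absolute deviation of the feature map F^l from its mean.

    feat: tensor of shape (B, C, H, W). The per-example mean is taken over
    the whole C x H x W cube; here the loss is also averaged over the batch,
    which is our assumption for mini-batch training.
    """
    mean = feat.mean(dim=(1, 2, 3), keepdim=True)   # mean(F^l) per example
    return (feat - mean).abs().mean()

def training_loss(logits, targets, feat, delta=0.02):
    """Eq. (2): cross-entropy plus the clamped feature smoothing term.

    torch.clamp(l_fs, min=delta) equals max(L_{F^l}, delta): once the
    smoothing loss drops below delta it becomes a constant, so no gradient
    flows from it, which avoids over-fitting to feature smoothing.
    """
    l_ce = F.cross_entropy(logits, targets)
    l_fs = feature_smoothing_loss(feat)
    return l_ce + torch.clamp(l_fs, min=delta)
```

In practice, one would capture the output of the first residual group of WideResNet-28-10 (l = 0 in the paper) during the forward pass and hand it to `training_loss` together with the logits.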
The feature smoothing loss is quite similar to the L1 norm of the particular feature cube and encourages the cube to be uniform. In order to avoid over-fitting to the feature smoothing loss, we use max(L_{F^l}, δ), which disables the derivative of L_{F^l} when it drops below δ. In summary, we have two parameters, the feature layer l and the smoothing upper bound δ. Although this loss function is a straightforward way to make the intermediate feature map smooth, there is a novel insight from the perspective of the trade-off between the two terms: in order to obtain a low L_{F^l}, L_{ce} must increase. In other words, after training, the network learns that feature smoothing is not very harmful and only causes accuracy to drop to some extent. This has a huge implication for robust accuracy. If we only use L_{ce}, the network has no way to deal with a crafted example except to be fooled. In our case, however, L_{F^l} gives us a dissipation channel for malicious perturbations that we can exploit via feature smoothing. Hopefully, this will remove most of the perturbations generated by an adversary, which we elaborate on in our Active Defense design. Another concern may be the constraint on the feature space. In practice, due to the high capacity of networks, there is almost no difference in standard training accuracy between our loss and plain cross-entropy.

From the discussion above, our Active Defense is very intuitive. It consists of four steps, depicted in the right of Figure 1. In Step 1, we add some random noise to the input, which can reduce the effect of adversarial disruptions; more importantly, this ensures sufficient iterations of feature smoothing, since otherwise the attacker could bypass it and our Active Defense would fail. Steps 2 and 3 are just forward/backward passes related to L_{F^l}. In Step 4, the noisy image gets smoothed via gradient descent on L_{F^l} and is fed into the network for another round of feature smoothing. The overall procedure is given in Algorithm 1. There are only three parameters: σ for the uniform noise, β for the updating step size, and δ̃ for the upper bound of the feature loss at test time, which is usually lower than the training δ in Equation 2, as we pursue extra feature smoothing to deal with adversarial attacks. The example updated through Active Defense is fed into the network for the final class decision. This algorithm can be run several times, which we denote as outerloop in the following sections.

Algorithm 1 Active Defense Algorithm
1: procedure ACTIVE DEFENSE(t, L_{F^l})    ▷ t is a test example
2:   t = t + uniform(−σ, σ)    ▷ Step 1, σ is the parameter of the uniform distribution
3:   l = L_{F^l}(t)    ▷ Step 2
4:   while l > δ̃ do    ▷ δ̃ is the upper bound of the feature loss in test
5:     d = ∂l/∂t    ▷ Step 3
6:     t = t − β × d    ▷ Step 4, β is the step size
7:     l = L_{F^l}(t)    ▷ Step 2
8:   end while
9:   return t
10: end procedure

5 EXPERIMENTS

To evaluate the performance of our approach, we run it on two datasets, CIFAR-10 and CIFAR-100 (Krizhevsky, 2009), and compare it with other state-of-the-art adversarial training methods. The network we choose is WideResNet-28-10 (Zagoruyko & Komodakis, 2016) with three groups. Naturally, feature maps F^l (l = 0, 1, 2) denote the outputs of the three groups respectively, and we choose l = 0, as it is closest to the input and has the strongest backward-pass derivative. We evaluate the robust accuracy with perturbation budget ε of both 8/255 and 16/255. It needs to be emphasized that our algorithm is independent of ε.
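Below is a minimal sketch of Algorithm 1 in the same PyTorch style. It assumes the `feature_smoothing_loss` helper from the previous sketch and a `backbone_features(x)` function that returns the intermediate feature map F^l for an input; these names and the `max_iters` safeguard are our illustrative assumptions, not part of the original algorithm.

```python
import torch

def active_defense(x, backbone_features, sigma=0.075, beta=80.0,
                   delta_test=0.0124, max_iters=500):
    """Algorithm 1 (sketch): add uniform noise, then run gradient descent on
    the feature smoothing loss with respect to the input until it drops
    below the test-time bound delta_test.

    Defaults mirror the CIFAR-10 hyper-parameters reported in Section 5.2.
    Shown for a single example or a small batch; max_iters is an added
    safeguard so the loop always terminates.
    """
    # Step 1: add uniform noise in [-sigma, sigma].
    t = (x + torch.empty_like(x).uniform_(-sigma, sigma)).detach()

    for _ in range(max_iters):
        t.requires_grad_(True)
        # Step 2: evaluate the feature smoothing loss on the current input.
        loss = feature_smoothing_loss(backbone_features(t))
        if loss.item() <= delta_test:
            break
        # Step 3: gradient of the feature loss with respect to the input.
        (grad,) = torch.autograd.grad(loss, t)
        # Step 4: one gradient-descent step on the input.
        t = (t - beta * grad).detach()
    return t.detach()
```

Note that the loop needs autograd enabled even at test time, since Steps 3-4 back-propagate the feature loss to the input.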
5.1 STANDARD TRAINING

Our approach adopts standard training with clean images, using an additional feature loss term that encourages the intermediate features to be smooth. In all our experiments, the smoothing upper bound δ in Equation 2 is set to 0.02. Figures 2 and 3 show the training and test accuracy and the feature loss per training epoch. As expected, while both settings reach almost 100% training accuracy, there is only a tiny drop in test accuracy with versus without the feature smoothing term: 95.02% vs. 95.83% for CIFAR-10, and 78.3% vs. 79.86% for CIFAR-100. This is because the classifiers have a large number of parameters and are powerful enough. An interesting observation about the feature loss: even when we do not impose any constraint, it still decreases slowly; when we do enforce this term, it decreases very quickly at the beginning. This strongly hints that feature smoothing benefits the generalization ability of networks. The feature losses with and without the feature smoothing term are dramatically different (reported as training/test): 0.0130/0.0135 vs. 0.0725/0.0713 for CIFAR-10, and 0.01258/0.013 vs. 0.0901/0.0909 for CIFAR-100. This demonstrates the effect of the additional loss term. Also, although we set δ to 0.02, the final feature loss is lower than that number, especially for CIFAR-100, which is further evidence that classification and feature smoothing are cooperative to some extent. Our Active Defense design exactly takes advantage of this harmony.

5.2 ACTIVE DEFENSE

Equation 2 has two terms, L_{F^l} and L_{ce}. What is the consequence if we make the image even smoother, i.e., decrease L_{F^l} further? This essentially removes some brittle features; however, since we train the network with L_{F^l}, the network learns that the remaining features are still useful for image classification. In other words, the drop in classification accuracy should be small. This is key to the success of our Active Defense. To counter adversarial perturbations, we intentionally add some noise and smooth it out as in Algorithm 1. In all our experiments, we choose σ = 0.075 for the uniform noise and β = 80 for the updating step size, while the upper bound of the feature loss δ̃ is 0.0124 for CIFAR-10 and 0.0118 for CIFAR-100. Experimental results with ε = 8/255 in Figures 4 and 5 show that Active Defense can effectively recover the semantically significant structures destroyed by the adversarial attacks and by our intentionally added noise, which leads to correct final classification. This evidence highlights the role played by our Active Defense. Usually, a single Active Defense pass takes about 200-400 iterations of the while-loop body in Algorithm 1 for ε = 8/255 and up to around 500 for ε = 16/255. In practice, this while-loop body can be executed in parallel. We evaluate the robust accuracy of our approach using AutoAttack and compare the performance with other state-of-the-art methods in Tables 1 and 2. Since AutoAttack comprises a set of attack methods, it wins if any of them succeeds. This poses a challenge for our approach because our solution is stochastic: an individual attack may not be strong on average over 10k images, but for a particular example it still has some chance to win, and this chance increases when the attacks are applied in sequence, namely clean, APGD-CE, APGD-T (DLR), FAB-T, and Square.
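For evaluation, the trained classifier and the Active Defense can be wrapped into a single model whose forward pass runs Algorithm 1 one or more times (the outerloop of Table 1) and averages the resulting class probabilities. The sketch below is an illustrative assumption built on the `active_defense` helper above, not the authors' released code.

```python
import torch
import torch.nn as nn

class ActivelyDefendedModel(nn.Module):
    """Classifier wrapped with test-time Active Defense.

    Each of the `outerloop` passes re-samples the Step-1 noise, so averaging
    the softmax outputs stabilizes the stochastic defense (the outerloop = 10
    setting in Table 1). Do not call this under torch.no_grad(): the inner
    Active Defense loop needs gradients with respect to the input.
    """
    def __init__(self, classifier, backbone_features, outerloop=10, **ad_kwargs):
        super().__init__()
        self.classifier = classifier                  # e.g., trained WideResNet-28-10
        self.backbone_features = backbone_features    # returns F^l for layer l = 0
        self.outerloop = outerloop
        self.ad_kwargs = ad_kwargs                    # sigma, beta, delta_test, ...

    def forward(self, x):
        probs = 0.0
        for _ in range(self.outerloop):
            x_def = active_defense(x, self.backbone_features, **self.ad_kwargs)
            probs = probs + torch.softmax(self.classifier(x_def), dim=1)
        return probs / self.outerloop                 # averaged class probabilities
```

Such a wrapper can be handed to an attack suite like an ordinary model; the randomness and the input-space optimization happen inside the forward pass, which is exactly what the adaptive attacks of Section 5.3 have to contend with.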
For example, in Table 1, outerloop = 1 for CIFAR-10 achieves its lowest single-attack accuracy of 73.10% under APGD-CE, but under the full AA ensemble it only gets 62.31%, more than a ten percent drop. For the other comparison methods, however, this gap is small. To further improve stability and narrow this gap, we run the defense ten times, i.e., outerloop = 10, and aggregate the output probabilities of the runs. In Table 1, we only consider WideResNet-28-10, and ours with outerloop = 10 is the best. One may still wonder what happens if we apply our Active Defense to a model trained classically with only the cross-entropy loss. We list these results in the rows labeled Standard (using Active Defense). It turns out that it is then hard to exit the while loop in Algorithm 1, so we set δ̃ to 0.0562 for CIFAR-10 and 0.0880 for CIFAR-100. The resulting accuracy is very low, which justifies the necessity of our feature smoothing loss term. Table 2 lists the AA accuracy across all model architectures, and ours is still among the best. Note that for ε = 16/255 we rerun the whole AA evaluation, including the clean pass, with a small change in clean accuracy due to the randomness of our method, and our AA accuracy significantly outperforms all others.

5.3 ADAPTIVE ADVERSARIES

While it is never easy to propose adaptive attacks, we try our best to defeat our own defense scheme. Since APGD-CE appears to be the strongest attack in Table 1, we adapt it to generate possibly stronger adversarial examples. Specifically, the loss function of APGD-CE is modified to
$$L = L_{ce} + \lambda \times \max(L_{F^l}, \delta). \quad (4)$$
In other words, we try to understand how the feature loss term influences the efficiency of attacks, with λ spaced uniformly (on a log scale) from 0.1 to 100, both positive and negative. A positive λ means more distortion in feature space, which pressures our smoothing process in Active Defense, while for a negative λ the number of while-loop iterations in Algorithm 1 should be small, which could result in more effective attacks. However, these efforts are fruitless: the robust accuracies in Table 3 are almost the same as for λ = 0.

5.4 DISCUSSION

All the above attacks are one-shot in nature: the attacker may take an arbitrarily long time and generate many candidates, but can only submit one. Since our method introduces random noise in the Active Defense phase, the results report the average success rate over the 10K test examples. One may devise a simple attack that submits the same example many times, since there is always some chance of success. This brute-force attack can be countered by an enhanced version of our defense, namely running Algorithm 1 many times and aggregating the output probabilities of the runs. However, this increases the computational load; on the other hand, this is to be expected, since there is no free lunch. Fortunately, our defense capacity can be scaled up much more conveniently than retraining the model from scratch, as conventional adversarial training methods would require. Another very nice advantage is that our enhancement is definitively free from robust overfitting (Rice et al., 2020), as there is no attack model engaged at all.

[Figures 4-5 image labels: Clean Pear, Ptb. Porcupine, Aft. Pear; APGD-CE Porcupine, APGD-T Baby, Aft. Pear; Clean Lobster, APGD-CE Caterpillar, APGD-T Caterpillar.]

6 CONCLUSION

Adversarial learning is of great interest to the deep learning community. Most previous works focus on the efficient generation of malicious examples.
However, in this paper, we shed some light on a different question: is it possible for a network to be robust without ever being shown malicious examples? We propose a standard training scheme with an additional feature smoothing loss term, which differs from all existing methods in that no adversarial input is involved. The standard cross-entropy and the feature smoothing loss can collaborate to some extent during training. At test time, we adopt Active Defense to smooth the feature maps of adversarial inputs. The experimental results demonstrate that this simple method can greatly enhance the robustness of networks. In future work, we will pursue theoretical analysis and explore other forms of cooperative loss that might be more beneficial than feature smoothing.
1. What is the focus and contribution of the paper regarding adversarial attacks?
2. What are the strengths and weaknesses of the proposed defense mechanism?
3. Do you have any concerns or questions about the effectiveness of the noise addition step in the active defense?
4. How does the feature smoothing loss contribute to the enhancement of adversarial robustness?
5. Are there any limitations or potential drawbacks to the proposed method?
6. Can you provide further intuition on why the interaction between feature smoothing during training and active defense enhances adversarial robustness?
7. How does the proposed method compare to other state-of-the-art defenses against adversarial attacks?
8. What is the role of the L1/absolute value penalty in equation 3, and would the method still be effective with an L2/square penalty?
Summary Of The Paper Review
Summary Of The Paper
This paper proposes a defense against adversarial attacks that does not involve using attacks in the defense process. The defense consists of two parts: first, a feature smoothing loss is added to the main objective function during training. Then, when defending against attacks, noise is added to the input followed by additional perturbations to the input designed to decrease the feature smoothing loss. The proposed method achieves favorable accuracies relative to baseline defenses.

Review
Strengths
The paper develops an effective and simple adversarial defense that requires only adding a simple regularization term during training, and additional processing of inputs during testing.
The proposed defense appears to outperform baseline defenses on strong attacks. The accuracies achieved by the defense appear competitive.

Weaknesses
The proposed defense is novel, but some ideas are similar to ones previously proposed in the literature. In particular, the idea of feature smoothing is related to prior work as noted in section 2. Moreover, the idea of adding noise to inputs as a defense has been explored in the literature on randomized smoothing.
The proposed defense is not theoretically motivated. This is not strictly necessary given the strong empirical results; however, it would be helpful to provide further intuition on why the interaction between feature smoothing during training and during active defense may enhance adversarial robustness.
It is not clear to what extent the noise (step 1 of active defense) by itself enhances robustness vs. the feature smoothing step of the active defense. The authors demonstrate that standard training by itself with active defense is ineffective, indicating the importance of the feature smoothing loss during training. It would be helpful to conduct a similar ablation experiment to evaluate the importance of the noise vs. feature smoothing steps in the active defense.
It is possible that the high performance of the proposed method is due to obfuscated gradients since for the attacker, optimizing inputs through the active defense may be more difficult. Adding additional experiments to demonstrate that obfuscated gradients do not occur would strengthen the paper (the authors may want to consider the experiments in Athalye et al., 2018).
The authors do not state whether the proposed method is state-of-the-art. To the best of their knowledge, do the authors achieve state-of-the-art robust accuracy on CIFAR-10 at ϵ = 8/255?
What is the role of the L1/absolute value penalty in eqn 3? Would the method still be effective with an L2/square penalty?

Minor comments
Some typos: "AutoAttak" in section 3, "It needs to emphasize here" -> "It needs to be emphasized here" in section 5
1. What is the main contribution of the paper regarding robust models?
2. What are the weaknesses of the paper's approach, particularly concerning its novelty and potential effectiveness?
3. How does the reviewer assess the clarity and convincing nature of the paper's motivation and performance gains?
4. Can you explain the feature smoothing method and its purpose in both the training and testing phases?
5. How does the reviewer evaluate the proposed method's ability to provide true robustness, especially considering its relationship to existing works like [1] and [2]?
Summary Of The Paper Review
Summary Of The Paper
This paper aims to provide an approach to train a robust model, where the model is trained without adversarial examples. To this end, the authors introduce an inference-time defense strategy. However, this idea is not novel. Moreover, the inference-time defense strategy may give false robustness.

Review
The motivation of the current version is unclear, and the robustness gain is not convincing. The feature smoothing method is proposed for training a robust model, but it is also used in the test phase; the motivation for applying this strategy in the test phase is unclear. Intuitively, removing the inference-time smoothing operation (the backpropagation process) will degrade the accuracy of the proposed method. The proposed method actually 'attacks' the generation process of adversarial examples, so this strategy may give false robustness. This kind of approach is not novel [1]. Besides adaptive attacks, the criteria suggested by [2] are also necessary to verify the robustness.

[1] Fighting Gradients with Gradients: Dynamic Defenses against Adversarial Attacks. arXiv
[2] Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples. ICML 2018
ICLR
Title Can standard training with clean images outperform adversarial one in robust accuracy? Abstract The deep learning network has achieved great success in almost every field. Unfortunately, it is very vulnerable to adversarial attacks. A lot of researchers have devoted themselves to making the network robust. The most effective one is adversarial training, where malicious examples are generated and fed to train the network. However, this will incur a big computation load. In this work, we ask: “Can standard training with clean images outperform adversarial one in robust accuracy?” Surprisingly, the answer is YES. This success stems from two innovations. The first is a novel loss function that combines the traditional cross-entropy with the feature smoothing loss that encourages the features in an intermediate layer to be uniform. The collaboration between these terms sets up the grounds for our second innovation, namely Active Defense. When a clean or adversarial image feeds into the network, the defender first adds some random noise, then induces this example to a new smoother one via promotion of feature smoothing. At that point, it can be classified correctly with high probability. Thus the perturbations carefully generated by the attacker can be diminished. While there is an inevitable clean accuracy drop, it is still comparable with others. The great benefit is the robust accuracy outperforms most of the existing methods and is quite resilient to the increase of perturbation budget. Moreover, adaptive attackers also fail to generate effective adversarial examples as the induced perturbations overweight the initial ones imposed by an adversary. 1 INTRODUCTION The seminal work of (Goodfellow et al., 2015) pointed out a surprising weakness of modern deep neural networks: although they can perform on par with human beings, their reliability is far from satisfaction. Almost imperceptibly added perturbations will be enough to mislead the network to output a wrong class label with high confidence. It will dramatically undermine the deployment of networks in some safety-critical applications: autonomous driving, image-based ID verification, and medical image analysis. Since then, researchers have heavily investigated this risk exposure and proposed different defense strategies. One direction is some prepossessing techniques such as bit-depth reduction (Xu et al., 2018), JPEG compression, total variance minimization, image quilting (Guo et al., 2018), and Defense-GAN (Samangouei et al., 2018). The idea is to mitigate the effect of added noise and save the network to some extent. Unfortunately, (Athalye et al., 2018) showed that most of these approaches are based on obfuscated gradients and can be defeated. The other line of research adopts various adversarial training techniques where malicious examples are generated and fed to the network. A simple rationale behind this is if the network has this knowledge, it will become wise in test time. While there are different mechanisms such as Mixup inference (Pang et al., 2020), feature scattering (Zhang & Wang, 2019), feature denoising (Xie et al., 2019), geometry-aware instance reweighting (Zhang et al., 2021), and channel-wise activation suppressing (Bai et al., 2021), they all share the same philosophy. While people are astonished by the fact that imperceptibly added perturbations can fool the network, some theoretical works such as (Tsipras et al., 2019; Schmidt et al., 2018) showed that it is not entirely unexpected. 
Unfortunately, there are no solutions without the awareness of attack models. Ideally, all defenses should be ignorant of this. However, this knowledge is essential to the adversarial training method that remains most effective, although at the cost of a large computation load. Now the big question arises: “Can standard training with clean images outperform adversarial one in robust accuracy?” Here “clean images” means there is no manipulation of inputs even by adding some random noise such as (Jin & Rinard, 2020), although it is for manifold regularization rather than adversarial training. At first glance, it seems hopeless, as a widely accepted principle in the adversarial learning community is that a network can be clever only if it has been exposed to deceptions before. However, on the other hand, the networks are supposed to generalize well after standard training. How can it perform so badly for adversarial attacks? As a possible answer to this, (Ilyas et al., 2019) investigated the cause of adversarial examples and concluded that neural networks tend to exploit predictive yet brittle features to make classifications. These features are incomprehensible to humans and thus can be modified by adversarial attackers to mislead the networks, but (Ilyas et al., 2019) did not show how to teach the network to disregard these non-robust features and discover the robust ones to make final decisions. From this perspective, as it is difficult to tell the network to learn robust features, what if we add some hints in the loss function and let the network become robust in an implicit way? More specifically, in addition to the classical cross-entropy loss, we use a feature smoothing term that encourages the features in an intermediate layer to be uniform, as shown in the left of Figure 1. It sounds counterintuitive as this term will constrain the space of features that may lead to a wrong classification. However, due to the high capacity of networks, a very high standard accuracy can still be achieved with this additional term. When training completes, given an input, whether clean or crafted, extra perturbations can always be created by some added random noise followed by the promotion of feature smoothing at the cost of reduced accuracy. So long as these intentional perturbations overweight the adversary’s and the reduction in accuracy is affordable, the model will become robust. We call this procedure Active Defense, as shown in the right of Figure 1. We find experimentally, a clean example from CIFAR10/CIFAR-100 can be perturbed with l∞ = 25/255 ∼ 32/255, three to four times of l∞ = 8/255 usually adopted by an adversary, yet be classified with a high success rate. This fact sets up the adequate space for Active Defense for eliminating the effects of attacks. Our approach is independent of any attack models compared with other state-of-the-art methods, and its performance is much more stable under attacks with different budgets. The contributions of this work are summarized as follows: • We propose a novel training scheme with an extra feature smoothing loss term that only takes clean images as inputs, fundamentally different from all existing adversarial training methods that need supplementary crafted data. •We present Active Defense that adds the second round of perturbations through random noise and feature smoothing. It modifies the malicious examples in a way that is friendly to the network. This deviates from conventional passive ones that keep the input intact. 
2 RELATED WORKS Due to adversarial threats to deep learning applications, there are many works to improve the robustness. Most of them adopt adversarial training. Among them, only a few take care of the features in intermediate layers as listed in the following. Feature denoising in (Xie et al., 2019) found that small perturbations in pixel space can lead to very substantial noise in the feature maps of the network and proposed various filters to denoise. (Zhang & Wang, 2019) proposed to generate adversarial images for training through feature scattering in the latent space. In essence, perturbed images are produced collaboratively via optimal transport distance minimization. (Zhang & Wang, 2019) used the feature maps as a guide to making new examples. Compared with these two, we are trying to force the intermediate feature map to be uniform through an additional loss term within the standard training framework without any modification of the network as in (Xie et al., 2019) or any other manipulations of features as in (Zhang & Wang, 2019). Regarding Active Defense, we have not seen any similar work. Perhaps the most related one is (Yang et al., 2019), which used sophisticated matrix completion techniques to reconstruct the random masked images. Our motivation is very different, as we try to exploit the deep network itself to enhance robustness without borrowing any third-party algorithms. 3 BACKGROUND In the classification problem, given the training data set of image-label pairs D = {(xi, yi)}ni=1 where yi ∈ {1, 2, ...,M}, the goal is to find an output probability vector F (x) of length M indexed by j, ideally such that y = argmaxjFj(x). Of course, there is always a mismatch between these two terms. The key thing here is to find a suitable loss function such that the empirical risk minimization (ERM) of 1n n∑ i=1 L(F (xi), yi) can be implemented with loss function L. Note that F (xi) is a vector, while yi is a scalar of the label, the very first thing is to transform yi into a vector through a vector function G(yi). People usually adopt the one-hot coding H(yi) of length M with all elements being 0 except Hyi(yi) = 1. The two probability distribution vectors F (xi) and H(yi) can be compared with cross-entropy. An adversary crafts an adversarial example xadv which is closest to x with ‖xadv − x‖p ≤ ε but misclassified as some other class. In this paper, we only consider attacks with p = ∞. The most commonly used strategy is the iterative projected gradient descent method(PGD) xt+1adv = P (x t adv + β × sign(∇xL(xtadv, G(y)))), (1) where β is the step size and P projects the generated example to the feasible region. Please note that L in adversarial attack may be different from L in training. People may choose traditional crossentropy loss with one-hot coding or CW loss (Carlini & Wagner, 2017) to implement Equation 1. Currently, a budget-aware step size-free variant of PGD has been proposed by (Croce & Hein, 2020), and since that, an ensemble of diverse parameter-free attacks called AutoAttack has become the de facto routine for robust accuracy evaluation. 4 METHOD In general, our method is very simple. In training, for feature map F l of a particular layer l with W ×H × C ,we use the loss L = Lce +max (LF l , δ) (2) LF l = 1 W ×H × C W∑ i=1 H∑ j=1 C∑ k=1 ∣∣F li,j,k −mean(F l)∣∣. (3) Here L has two terms. Lce is for cross-entropy loss, and LF l is our novel feature smoothing loss function. 
It is quite similar to the L1 norm of the particular feature cube and encourages the cube to be uniform. In order to avoid overfitting to the feature smoothing loss, we use max (LF l , δ) which disables the derivative of LF l when it drops below δ. In summary, we have two parameters, the feature layer l and the smoothing upper bound δ. Although this loss function sounds straightforward for our purpose to make the intermediate feature map smooth, there is a novel insight from the perspective of the trade-off between these two terms. In order to get low LF l , Lce will increase. In other words, after training, the network somehow understands that feature smoothing is not very annoying, and it only causes accuracy to drop to some extent. There is a huge implication in terms of robust accuracy. If we only use Lce, the network has no idea to deal with the crafted example except to be fooled. However, in our case, LF l gives us a dissipation channel of malicious perturbations that we can take via feature smoothing. Hopefully, this will remove most of the perturbations generated by an adversary, which we will elaborate on in our Active Defense design. The other concern may relate to the feature space constraints. Actually, due to the high capacity of networks, there is almost no difference in standard training accuracy between ours and cross-entropy loss. From the discussion above, our Active Defense is very intuitive. It consists of four steps depicted in the right of Figure 1. In Step 1, we add some random noise to an input which can somehow reduce the effect of adversarial disruptions; more importantly, this will ensure sufficient iterations of feature smoothing. Otherwise, the attacker can bypass this and our Active Defense will fail. Steps 2 and 3 are just forward/backward passes related to LF l . In Step 4, the noisy image get smoothed via gradient descent for LF l , and feeds into the network for another round of feature smoothing. The overall procedure of the proposed approach is in Algorithm 1. There are only three parameters, i.e., σ for uniform noise, β for the updating step size, and δ̃ for the upper bound of feature loss in test that is usually lower than δ for training in Equation 2, as we pursue extra feature smoothing to deal with adversarial attacks. The updated example through Active Defense will feed into the network for final class decision. This algorithm can run a few times which we denoted as outerloop in the following sections. Algorithm 1 Active Defense Algorithm 1: procedure ACTIVE DEFENSE(t, LF l ) . t is a test example. 2: t = t+ uniform(−σ, σ) . Step 1, σ is the parameter of the uniform distribution. 3: l = LF l (t) . Step 2 4: while l > δ̃ do . δ̃ is the upper bound of feature loss in test. 5: d = ∂l∂t . Step 3 6: t = t− β × d . Step 4, β is the step size. 7: l = LF l (t) . Step 2 8: end while 9: return t 10: end procedure 5 EXPERIMENTS To evaluate the performance of our approach, we run it on two datasets, CIFAR-10 and CIFAR100 (Krizhevsky, 2009), and compare it with other state-of-the-art adversarial training methods. The network we choose is WideResNet-28-10 (Zagoruyko & Komodakis, 2016) with three groups. Naturally, feature maps F l (l = 0, 1, 2) are used to denote the outputs of three groups respectively, and we choose l = 0, as it is closest to the input with the strongest backward pass derivative. We evaluate the robust accuracy with perturbation budget ε as both 8/255 and 16/255. It needs to be emphasized here that our algorithm is independent of ε. 
5.1 STANDARD TRAINING Our approach adopts standard training with clean images using an additional feature loss term which boosts the intermediate features to be smooth. In all our experiments, the smoothing upper bound δ in Equation 2 is set to 0.02. Figures 2 and 3 demonstrate the training and test accuracy and the feature loss variation per training epoch. As we expected, while they achieve almost 100% training accuracy, there is only a tiny drop in test accuracy with and without feature smoothing terms: 95.02% vs 95.83% for CIFAR-10, and 78.3% vs 79.86% for CIFAR-100. This is due to the fact that the classifiers have a large number of parameters and are powerful enough. Here is an interesting observation of feature loss: even when we do not impose any constraints, it still goes down slowly. While we do enforce this term, it goes down very quickly at the beginning. This strongly hints feature smoothing benefits the generalization ability of networks. The feature loss with and without feature smoothing terms are dramatically different, listed in the pattern of training/test, 0.0130/0.0135 vs 0.0725/0.0713 for CIFAR-10, and 0.01258/0.013 vs 0.0901/0.0909 for CIFAR-100. This fact indicates the effects of this additional loss term. Also, although we set the δ to be 0.02, the feature loss is lower than that number, especially for CIFAR-100, which is also evidence that classification and feature smoothing are cooperative to some extent. Our Active Defense design exactly takes advantage of this harmony. 5.2 ACTIVE DEFENSE Equation 2 takes two terms, LF l and Lce. What is the consequence if we make the image smoother further, i.e., to decrease LF l?. This essentially will remove some brittle features, however, since we train the network with LF l , the network somehow understands that the remaining features are still useful for image classification. In other words, the drop in classification accuracy should be small. It is key to our success of Active Defense. To counter adversarial perturbations, we intentionally add some noise and smooth it out as in Algorithm 1. In all our experiments, we choose σ = 0.075 for uniform noise and β = 80 for the updating step size, while for the upper bound of feature loss δ̃, 0.0124 for CIFAR-10 and 0.0118 for CIFAR-100. Some experimental results with ε = 8/255 in Figures 4 and 5 show Active Defense can effectively recover the semantically significant structures destroyed by the adversarial attacks and our intentionally added noise, which leads to correct final classification. This evidence highlights the role played by our Active Defense. Usually, a single Active Defense takes about 200-400 iterations of while loop body in Algorithm 1 for ε = 8/255 and up to around 500 for ε = 16/255. In practice, this while loop body can be implemented in parallel. We evaluate the robust accuracy of our approach using AutoAttack and compare the performance with other state-of-the-art methods in Tables 1 and 2. As AutoAttack comprises a set of attack methods, it will win if any of them succeeds. It poses some challenges for our approach since our solution is stochastic. As one attack model may not be so strong on an average of 10k images, but for a particular example, it still has some chance to win. This chance will increase especially when given a row of ones applied in sequence, namely, clean, APGDce, APGDTdlr , FAB T and Square. 
For example, in Table 1 with outerloop = 1 on CIFAR-10, the lowest single-attack accuracy is 73.10% (against APGD-CE), yet the full AA accuracy is only 62.31%, a drop of more than ten percentage points; for the other comparison methods this gap is small. To further improve stability and narrow this gap, we run the defense ten times, i.e., outerloop = 10, and aggregate the output probabilities of the runs. In Table 1 we only consider WideResNet-28-10, and ours with outerloop = 10 is the best. One may still wonder what happens if Active Defense is applied to a network trained classically with only the cross-entropy loss. We also list these results in the rows labelled Standard (using Active Defense). It turns out that it is then hard to exit the while loop in Algorithm 1, so we set δ̃ to 0.0562 for CIFAR-10 and 0.0880 for CIFAR-100; the resulting accuracy is very low, which justifies the necessity of our feature smoothing loss term. Table 2 lists the AA accuracy for all model architectures, and ours is still among the best. Note that for ε = 16/255 we rerun the whole AA suite including the clean evaluation, with a slight change in clean accuracy due to the randomness of our method, and our AA accuracy significantly outperforms all others. 5.3 ADAPTIVE ADVERSARIES While it is never easy to design adaptive attacks, we try our best to defeat our own defense scheme. Since APGD-CE appears to be the strongest attack in Table 1, we adapt it to generate possibly stronger adversarial examples. Specifically, the loss function of APGD-CE is modified to L = L_ce + λ × max(L_{F_l}, δ). (4) In other words, we study how the feature loss term influences the effectiveness of the attack, with λ spaced uniformly (on a log scale) from 0.1 to 100, both positive and negative. A positive λ means more distortion in feature space, which puts pressure on the smoothing process of Active Defense, while a negative λ should keep the number of while-loop iterations in Algorithm 1 small, which could make the attack more effective. However, these efforts are fruitless: the robust accuracy in Table 3 is almost the same as for λ = 0. 5.4 DISCUSSION All of the attacks above are one-shot in nature: an attacker may take an arbitrarily long time and generate many candidate examples, but can only submit one. Since our method introduces random noise in the Active Defense phase, the reported results are average success rates over the 10K test examples. One may devise a simple attack that submits the same example many times, in which case there is always some chance of defeating the defense. This brute-force attack can be countered by an enhanced version of our defense, namely running Algorithm 1 many times and aggregating the probabilities of the runs. This increases the computational load, but there is no free lunch. Fortunately, our defense capacity can be scaled up far more conveniently than retraining the model from scratch, as conventional adversarial training methods would require. (Figures 4 and 5: panel labels showing the clean, perturbed (APGD-CE, APGD-T), and after-defense predictions for example images.) Another very nice advantage is that our enhancement is definitively free from robust overfitting (Rice et al., 2020), as there is no attack model engaged at all. 6 CONCLUSION Adversarial learning is of great interest to the deep learning community. Most previous works focus on the efficient generation of malicious examples.
However, in this paper we shed some light on a different question: is it possible for a network to be robust without ever being taught with malicious examples? We propose standard training with an additional feature smoothing loss term, which differs from all existing approaches in that no adversarial inputs are involved. The standard cross-entropy loss and the feature smoothing loss can collaborate to some extent during training. At test time, we adopt Active Defense to distill the feature maps of adversarial inputs. The experimental results demonstrate that this simple method can greatly enhance the robustness of networks. In future work, we will pursue theoretical analysis and explore other forms of cooperative loss that might be more beneficial than the feature smoothing one.
1. What is the novel defense strategy proposed by the paper?
2. What are the strengths and weaknesses of the proposed method?
3. How effective is the proposed defense against adaptive attacks?
4. Are there any concerns regarding the implementation and presentation of the algorithm?
5. What additional experiments or improvements could enhance the paper's contributions?
Summary Of The Paper Review
Summary Of The Paper The paper proposes a new defense strategy involving both randomization and smoothing. To my knowledge, no other paper has proposed precisely the same defense.
Review The most significant concern I have is that the “adaptive attack” is not, in my view, a sufficient attempt at breaking the proposed method. The authors should perform an adaptive PGD attack where each gradient is computed as follows: draw multiple samples of random noise, add each of them individually to the current adversarial example iterate, run Algorithm 1 for each of the noised examples, plug the resulting images into the model to obtain a cross-entropy (also try CW) loss value, and finally backpropagate through the entirety of Algorithm 1 to compute a gradient with respect to the adversarial example iterate.
ImageNet experiments would be greatly appreciated, as ImageNet data is far higher dimensional, more realistic, and typically has very different properties than low-dimensional data.
There are several issues with the writing of Algorithm 1: what is the lower-case ell? Also, s is only computed and never used or output.
Just a note about formatting: the authors should be careful about the difference between the \citep and \citet natbib commands. Additionally, there are numerous grammatical errors and misspellings which should be corrected before publication.
ICLR
Title Local Permutation Equivariance For Graph Neural Networks Abstract In this work we develop a new method, named locally permutation-equivariant graph neural networks, which provides a framework for building graph neural networks that operate on local node neighbourhoods, through sub-graphs, while using permutation equivariant update functions. Message passing neural networks have been shown to be limited in their expressive power, and recent approaches to overcome this either lack scalability or require structural information to be encoded into the feature space. The general framework presented here overcomes the scalability issues associated with global permutation equivariance by operating on sub-graphs through restricted representations. In addition, we prove that there is no loss of expressivity by using restricted representations. Furthermore, the proposed framework only requires a choice of k-hops for creating sub-graphs and a choice of representation space to be used for each layer, which makes the method easily applicable across a range of graph based domains. We experimentally validate the method on a range of graph benchmark classification tasks, demonstrating either state-of-the-art results or very competitive results on all benchmarks. Further, we demonstrate that the use of local update functions offers a significant improvement in GPU memory over global methods. 1 INTRODUCTION Many forms of data are naturally structured as graphs, such as molecules, bioinformatics, social, or financial networks, and it is therefore of interest to have algorithms which operate over graphs. Machine learning on graphs has received much interest in recent years, with the general framework of a message passing network providing both a useful inductive bias and scalability across a range of domains (Gilmer et al., 2017). However, Xu et al. (2019) show that a model based on a message passing framework with permutation invariant aggregation functions is limited in expressive power. Therefore, there exist many non-isomorphic graphs that a model of this form cannot distinguish between. Figure 5 demonstrates two non-isomorphic graphs that a message passing framework with max pooling would not be able to distinguish. More expressive graph networks exist, and a common measure of expressivity is the Weisfeiler-Lehman (WL) test. One such method of building more expressive networks is based directly on the WL test. Here a neural network architecture is built based on variants of the WL test. Bouritsas et al. (2020) use a permutation invariant local update function, but incorporate permutation equivariant structural information into the feature space. Morris et al. (2019a) build models based on different WL variants that consider local and global connections. Bodnar et al. (2021b) introduce a WL test on simplicial complexes and incorporate this into a message passing scheme. Bodnar et al. (2021a) extend work on simplicial complexes to cell complexes, which subsume simplicial complexes. On the other hand, rather than trying to incorporate techniques from WL tests directly into networks, other models make use of permutation symmetries to build permutation equivariant graph neural networks (Maron et al., 2018). Such a model can be built for k-order feature spaces, and it was shown by Maron et al. (2019) that such models can distinguish between non-isomorphic graphs as well as the k-WL test.
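As a small, self-contained illustration of the max-pooling limitation mentioned above (Figure 5), the snippet below builds two different node neighbourhoods whose max-aggregated messages coincide; the feature values are invented purely for illustration.

import torch

# Two different 1-hop neighbourhoods: the centre node sees neighbour features
# {1, 1, 2} in one graph and {1, 2, 2} in the other.
neigh_a = torch.tensor([1.0, 1.0, 2.0])
neigh_b = torch.tensor([1.0, 2.0, 2.0])

# Max pooling produces the same aggregated message for both, so a message
# passing update built on it assigns the two centre nodes identical features.
print(neigh_a.max().item(), neigh_b.max().item())   # 2.0 2.0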
Natural graph networks are a different class of graph neural network, where the constraint placed upon the linear layer is that of naturality (de Haan et al., 2020). The naturality constraint says that for each isomorphism class a map must be chosen that is equivariant to automorphisms. In general the task of learning on graphs consists of utilising many graphs of different sizes. Current methods for utilising permutation equivariant graph neural networks require that the graph be represented as an adjacency tensor, which limits their scalability. Furthermore, global natural graph networks also perform computations on entire graph features, which leads to a large computational complexity for large graphs. Local gauge symmetries have been considered to build models with local equivariance (Cohen et al., 2019). This approach improves the scalability of models by utilising local update functions; however, for graphs we do not have a single local symmetry. In the majority of graph neural networks this is currently overcome by utilising some form of message passing, but, in general, these works use a permutation invariant aggregation function, leading to good scalability but poor expressivity. Local natural graph networks attempt to overcome the limited expressivity by placing a local naturality constraint on the message passing and having different message passing kernels on non-isomorphic edges. By considering graph neural networks from an elementary category theory perspective and making use of aspects of group theory, we present a framework for building local permutation equivariant models. This allows us to build a graph neural network model with local update functions that are permutation equivariant by considering restricted representations of the representation space of the whole graph. Further, we prove that this does not cause a loss of expressivity of the model and that it maintains the option to have a k-order feature space that ensures expressivity equal to the k-WL test. Also, by constraining the kernel space under restricted representations, a natural weight sharing scheme becomes apparent, namely sharing weights across local graph neighbourhoods of the same degree. Building models within a framework based on group theory makes the generality of the approach clear: choices of representation space can be made for each convolutional layer without requiring prior information, such as structural information, to be encoded into the feature space. This framework can also be shown to include other leading methods as specific cases. 2 BACKGROUND 2.1 GRAPH NETWORKS Different graph neural networks express graphs in alternative forms. Generally, for a message passing model, a matrix of node features and a matrix of edge features is combined with a sparse edge index array specifying the connectivity of the graph. In other works, the graph is provided in a dense format, where the graph is given as an adjacency tensor with node and edge features held in one tensor. In this work we present the graph as follows: Definition 1 A Concrete Graph G is a finite set of nodes V(G) ⊂ N and a set of edges E(G) ⊂ V(G) × V(G). The set of node ids may be non-contiguous, and we make use of this here as we extract overlapping sub-graphs when performing the local updates. The same underlying graph can be given in many forms by a permutation of the ordering of the natural numbers of the nodes.
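A Concrete Graph in the sense of Definition 1 can be stored as nothing more than a set of (possibly non-contiguous) integer node ids and a set of ordered pairs. The sketch below, with invented node ids, also shows how relabelling by a permutation produces a different Concrete Graph for the same underlying graph.

# A Concrete Graph with non-contiguous node ids (Definition 1); the ids and
# edges are invented purely for illustration.
nodes = {2, 5, 9}
edges = {(2, 5), (5, 9)}

# Relabelling by a permutation of the node ids gives another Concrete Graph
# representing the same underlying graph.
perm = {2: 9, 5: 2, 9: 5}
relabelled_nodes = {perm[v] for v in nodes}
relabelled_edges = {(perm[u], perm[v]) for (u, v) in edges}
print(relabelled_nodes, relabelled_edges)   # edges become {(9, 2), (2, 5)}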
Definition 2 A sub-Concrete Graph H is created by taking a node i ∈ V(G), and extracting the nodes j ∈ V(G) and edges (i, j) ⊂ V(G) × V(G), such that there is a connection between nodes i and j. Once a sub-concrete graph has been extracted, this same underlying sub-graph could be expressed through different permutations of the underlying numbering of the nodes. For brevity we refer to sub-concrete graphs as sub-graphs throughout the paper. Definition 3 A Graph isomorphism, φ : G → G′, is a bijection between the vertex sets of two graphs G and G′, such that two vertices u and v are adjacent in G if and only if φ(u) and φ(v) are adjacent in G′. This mapping is edge preserving, i.e. it satisfies for all (i, j) ∈ V(G) × V(G): (i, j) ∈ E(G) ⇐⇒ (φ(i), φ(j)) ∈ E(G′). An isomorphism from the graph to itself is known as an automorphism. Relabelling of the graph by a permutation of the nodes is called a graph isomorphism, where an example of two isomorphic graphs is given in Figure 5. We desire that the linear layers of the graph neural network respect the composition of graph isomorphisms. This requires us to define the feature space of the graphs and how feature spaces of isomorphic graphs are related. 2.2 PERMUTATION REPRESENTATIONS The feature space of the graphs is a vector space V, where a representation of the group G is a homomorphism ρ : G → GL(V) of G to the group of automorphisms of V (Fulton & Harris, 2013). A map KG between two representations of G is a vector space map. The elements of the group g ∈ G can act on a vector v ∈ V by the representation matrix v → ρ(g)v. The symmetric subspace of the representation is the space of solutions to the constraint ∀g ∈ G : ρ(g)v = v. Here we are considering the symmetries of the symmetric group Sn. This constraint can be solved for different order representations (Maron et al., 2018; Finzi et al., 2021). We present the space of linear layers mapping from k-order representations to k′-order representations in Figure 2. In addition, for the linear map KG, we require that if a graph is passed through KG and then transformed by permutation to an isomorphic graph, the result is the same as if the graph is first transformed by the same permutation to an isomorphic graph and then passed through KG. In short, this requires that permutation equivariance is satisfied. 2.3 CATEGORY THEORY This section does not provide a complete overview of category theory, nor even a full introduction, but aims to provide a sufficient level of understanding to aid the reader with further sections of the paper, where we believe presenting the comparison between models from a category theory perspective makes more clear the distinctions between them. A category, C, consists of a set of objects, Ob(C), and a set of morphisms (structure-preserving mappings) or arrows, f : A → B, A, B ∈ Ob(C). There is a binary operation on morphisms called composition. Each object has an identity morphism. Categories can be constructed from given ones by constructing a subcategory, in which each object, morphism, and identity is from the original category, or by building upon a category, where objects, morphisms, and identities are inherited from the original category. A functor is a mapping from one category to another that preserves the categorical structure. For two categories C and D a functor F : C → D maps each object A ∈ Ob(C) to an object F(A) ∈ Ob(D) and maps each morphism f : A → B in C to a morphism F(f) : F(A) → F(B) in D. Definition 4 A groupoid is a category in which each morphism is invertible.
A groupoid with only one object is essentially a group. 3 GLOBAL EQUIVARIANT GRAPH NETWORKS 3.1 GLOBAL PERMUTATION EQUIVARIANCE Global permutation equivariant models have been considered by Hartford et al. (2018); Maron et al. (2018; 2019); Albooyeh et al. (2019), with Maron et al. (2018) demonstrating that for order-2 layers there are 15 operations that span the full basis for a permutation equivariant linear layer. These 15 basis elements are shown in Figure 2, with each basis element given by a different colour in the map from representation ρ2 → ρ2. Although these methods, when solved for the entire basis space, have expressivity as good as the k-WL test, they operate on the entire graph. Operating on the entire graph features limits the scalability of the methods. In addition to poor scalability, global permutation equivariance appears to be a strong constraint to place upon the model. In the instance where the graph is flattened and an MLP is used to update node and edge features, the model would have n^4 trainable parameters, where n is the number of nodes. On the other hand, a permutation equivariant update has only 15 trainable parameters and in general 15 ≪ n^4. Viewing a global permutation equivariant graph network from a category theory perspective, there is one object with a collection of arrows representing the elements of the group. Here the arrows or morphisms go both from and to this same single object. The feature space is a functor which maps from a group representation to a vector space. For a global permutation equivariant model the same map is used for every graph. (Diagram: a single object G with morphisms e, g1, g2 — the symmetric group.) 3.2 GLOBAL NATURALITY Global natural graph networks (GNGN) consider the condition of naturality (de Haan et al., 2020). GNGNs require that for each isomorphism class of graphs there is a map that is equivariant to automorphisms. This naturality constraint is given by the condition ρ′(φ) ◦ KG = KG′ ◦ ρ(φ), which must hold for every graph isomorphism φ : G → G′ and linear map KG. While the global permutation equivariance constraint requires that all graphs be processed with the same map, global naturality allows different, non-isomorphic, graphs to be processed by different maps and as such is a generalisation of global permutation equivariance. As is the case for global permutation equivariant models, GNGNs scale poorly, as the constraint is placed over the entire graph and the linear layers require global computations on the graphs. Viewing a GNGN from a category theory perspective, there is a different object for each concrete graph, and these form a groupoid. There is then a morphism or arrow for each graph isomorphism. These are either automorphisms, if the arrow maps an object to itself, or isomorphisms if the arrow maps to a different object. The feature spaces are functors which map from this graph category to the category of vector spaces. The GNG layer is a natural transformation between such functors, consisting of a different map for each non-isomorphic graph. (Diagram: a groupoid of concrete graphs G1, G2, G3.) 4 LOCAL EQUIVARIANT GRAPH NETWORKS Local equivariant models have started to receive attention following the successes of global equivariant models and local invariant models. The class of models based on the WL test is not in general locally permutation equivariant, in that such models still use a message passing scheme with a permutation invariant update function.
Despite this, many of these models inject permutation equivariant information into the feature space, which improves the expressivity of the models (Bouritsas et al., 2020; Morris et al., 2019a; Bodnar et al., 2021b;a). The information to be injected into the feature space is predetermined in these models by a choice of what structural or topological information to use, whereas our model uses representations of the permutation group, making it a very general model that still guarantees expressivity. In contrast to utilising results from the WL test, covariant compositional networks (CCNs) consider permutation equivariant functions, but they do not consider the entire basis space as was considered in Maron et al. (2018) and instead consider four equivariant operations (Kondor et al., 2018). This means that the permutation equivariant linear layers are not as expressive as those used in the global permutation equivariant layers. Furthermore, in a CCN the node neighbourhood and feature dimensions grow with each layer, which can be problematic for larger graphs and limits their scalability. Another local equivariant model is that of local natural graph networks (LNGN) (de Haan et al., 2020). An LNGN uses a message passing framework, but instead of using a permutation invariant aggregation function, it specifies the constraint that node features transform under isomorphisms of the node neighbourhood and that a different message passing kernel is used on non-isomorphic edges. In practice this leads to little weight sharing in graphs that are quite heterogeneous, and as such the layer is re-interpreted so that a message from node p to node q, k_pq v_p, is given by a function k(G_pq, v_p) of the edge neighbourhood G_pq and the feature value v_p at p. Viewing an LNGN from a category theoretic perspective, there is a groupoid of node neighbourhoods, where morphisms are isomorphisms between node neighbourhoods, and a groupoid of edge neighbourhoods, where morphisms are isomorphisms between edge neighbourhoods. In addition, there is a functor mapping from edge neighbourhoods to the node neighbourhood of the start node, and a functor mapping similarly but to the tail node of the edge neighbourhood. The node feature spaces are functors mapping from the category of node neighbourhoods to the category of vector spaces. Further, composition of two functors creates a mapping from edge neighbourhoods to the category of vector spaces. An LNG kernel is a natural transformation between these functors. (Diagrams: a groupoid of node neighbourhoods N1, N2, N3 and a groupoid of edge neighbourhoods E1, E2, E3.) 5 LOCAL PERMUTATION EQUIVARIANCE A local permutation equivariant graph network (LPEGN) improves upon the scalability of global permutation equivariant models by considering permutation equivariance at lower scales. Here, instead of performing the update function on the entire graph, we perform the update function on node neighbourhoods, as is done in message passing models. Furthermore, while performing the update functions on node neighbourhoods, we maintain improved expressivity through using k-order permutation representations. The intuition behind imposing permutation equivariance on node neighbourhoods rather than the entire graph is that the model can learn expressive features about a part of the sub-graph without requiring knowledge of permutations multiple hops away from the central update node.
This framework generalises global permutation equivariant models as it is compatible with all length scales: if the graph structure is used to determine node neighbourhoods, then any k value can be chosen to determine the k-hops from the central update node, producing the sub-graph for which permutation equivariance is required. Therefore, if the value chosen for the k-hops is sufficiently large, the layer becomes a global permutation update. The basis functions for different order representation spaces, split into different degrees for a 1-hop node neighbourhood, are given in Figure 1. The method therefore requires a choice of k for the number of hops away from the central node to consider in the local update, and we discuss this choice in Section 5.2. In addition, the framework then allows for a choice of weight sharing, which we discuss in Section 5.3. 5.1 RESTRICTED REPRESENTATION Given a graph comprised of n nodes, global equivariant models consider the permutation representation of the permutation group G = Sn, namely the representation ρ : G → GL(R^c). Here we consider local updates on sub-graphs with m nodes, where we are interested in the sub-group H = Sm ≤ Sn. Therefore we can consider the restricted representation of the sub-group Sm, where the restricted representation can be seen as dropping some symmetries from the group Sn. The restricted representation is denoted by ρ̃ := Res^G_H(ρ) : H → GL(R^c). The global equivariance case using representations, ρ, and the case using restricted representations, ρ̃, are shown in Figure 3. Both figures show a basis mapping from an order 1 to an order 1 permutation representation. The restricted representation Res^{S5}_{S4} drops the permutation symmetry associated with node 5. Dropping the permutation symmetry of node 5 results in 3 additional parameters: one for the update of node 5 based on node 5’s features, another for the update of node 5 based on the features of the other nodes in the graph, and a final parameter for the update of the other nodes in the graph based on node 5’s features. We prove that using restricted representations in our framework incurs no loss of expressivity in Appendix A.8. 5.2 CHOICE OF LOCAL NEIGHBOURHOOD The LPEGN model framework performs the permutation equivariant update on local sub-graphs, although a choice can be made as to how these sub-graphs are created. One option is to use the underlying graph structure and choose a k value to extract local neighbourhoods that include nodes which are at most k hops from the central node. This method creates a sub-graph for each node in the graph. Here the choice of the k value can be seen as choosing a length scale over which the permutation symmetry should be exploited. In other words, choosing a value of k = 1 is the shortest length scale, and node features will be updated such that they are permutation equivariant to their 1-hop neighbours, but not equivariant to nodes further away in the graph. On the other hand, choosing a k value sufficiently large will create a model equivalent to global permutation equivariant models, where each update is permutation equivariant to permutations of the entire graph. Throughout this work we choose k = 1 unless otherwise stated, to take the most local permutation equivariant updates. We show how this choice of k value impacts the method through analysing the MUTAG dataset in Figure 10.
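A minimal sketch of the k-hop sub-graph construction described in Section 5.2, assuming an undirected graph stored as a set of edge pairs; the helper name and graph encoding are illustrative choices rather than the implementation used in the paper.

def k_hop_subgraph(centre, edges, k=1):
    # Collect all nodes reachable from `centre` within k hops, then keep the
    # edges of the original graph whose endpoints were both collected.
    frontier, nodes = {centre}, {centre}
    for _ in range(k):
        frontier = ({v for (u, v) in edges if u in frontier} |
                    {u for (u, v) in edges if v in frontier})
        nodes |= frontier
    sub_edges = {(u, v) for (u, v) in edges if u in nodes and v in nodes}
    return nodes, sub_edges

# On an invented 5-node path graph, the 1-hop neighbourhood of node 3 keeps
# nodes {2, 3, 4} and the two incident edges.
edges = {(1, 2), (2, 3), (3, 4), (4, 5)}
print(k_hop_subgraph(3, edges, k=1))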
5.3 CHOICE OF WEIGHT SHARING In general, when constructing the sub-graphs, a variety of differently sized sub-graphs are found due to the differing degrees of the nodes in the graph. This allows for a further choice, namely the weight sharing scheme to be used. Given that the permutation equivariance constraint is a strong constraint to place over the linear layers, we perform weight sharing across sub-graphs of the same size. This means that sub-graphs of different sizes do not share weights and can be updated differently. The intuition for this is that sub-graphs of the same size already have some similarity, in that they are of the same size, while sub-graphs of a different size are less likely to be similar and hence should be updated differently. Throughout this paper we share weights across local neighbourhoods of the same degree, although in situations where there are very few local neighbourhoods of a particular size we group these together. 5.4 CHOICE OF REPRESENTATION SPACE In Section 5.1 we considered the restricted representation of a sub-group Sm ≤ Sn, and in Section 5.2 we detailed how local sub-graphs are selected. Here we must make a connection between the two to present the representation space used in our LPEGN framework. Focusing on the nodes whose permutation symmetry we did not drop, it can be seen in Figure 3 that for these nodes the restricted representation is equivalent to the global permutation equivariant representation. Furthermore, given our choice of sub-graph construction, we would only seek to drop the permutation symmetry of a node because it is not connected to the central update node. Therefore the edge features connecting the central node to the node whose permutation symmetry we are dropping are zero. Hence, we are not interested in the additional parameters introduced in the restricted representation connecting the two nodes. Furthermore, as the node whose permutation symmetries we drop is not connected to the chosen sub-graph, we are also not interested in the additional parameters introduced in the restricted representation for this node. As a result, due to the choice of sub-graph construction, the restricted representation for our sub-group has zero features in the position of the newly introduced parameters and is therefore equivalent to the permutation representation on a lower dimensional space. Therefore, where global permutation equivariant updates use representations ρ : G → GL(R^c), our local permutation equivariant model uses representations ρ̃ : H → GL(R^c̄), where c̄ ≤ c. The scheme for creating representations of local neighbourhoods is shown in Figure 1, where some representations of the local neighbourhoods are shown. 5.5 LOCAL PERMUTATION EQUIVARIANT GRAPH NETWORK An LPEGN combines the chosen method of creating sub-graphs as local neighbourhoods with a choice of weight sharing scheme, and makes use of permutation representations on these sub-graphs. The process of creating sub-graphs, updating based on the choice of weight sharing using permutation representations, and reconstructing the graph structure is presented in Figure 1. Viewing an LPEGN from a category theoretic perspective, each different size of node neighbourhood corresponds to a sub-group, H, which is a different object. There are morphisms or arrows for each permutation of the neighbourhood. This forms a groupoid. The sub-group representations are functors from the category of node neighbourhoods to the category of vector spaces.
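The weight sharing choice of Section 5.3 can be sketched as one equivariant update module per sub-graph size. For brevity the sketch uses only the order-1 → order-1 equivariant map, whose basis has the 2 elements noted later in the text (a per-node term and a mean-over-nodes term); the actual layers additionally use order-2 representations, and the sizes and feature dimensions here are invented for illustration.

import torch
import torch.nn as nn

class EquivariantSetLinear(nn.Module):
    # Order-1 -> order-1 permutation equivariant linear map: a per-node
    # (identity) term plus a term built from the mean over nodes.
    def __init__(self, c_in, c_out):
        super().__init__()
        self.w_id = nn.Linear(c_in, c_out)
        self.w_mean = nn.Linear(c_in, c_out, bias=False)

    def forward(self, x):                       # x: (m, c_in) node features
        return self.w_id(x) + self.w_mean(x.mean(dim=0, keepdim=True))

# Weight sharing by sub-graph size: sub-graphs with the same number of nodes
# share one update module, while different sizes get separate parameters.
layers = nn.ModuleDict({str(m): EquivariantSetLinear(8, 16) for m in (3, 4, 5)})
subgraph_feats = torch.randn(4, 8)              # a sub-graph with 4 nodes
out = layers[str(subgraph_feats.shape[0])](subgraph_feats)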
(Diagram: the groupoid of symmetric sub-groups H1, H2, H3, each with morphisms e, h1, h2.) 6 EXPERIMENTS 6.1 GRAPH BENCHMARKS We tested our method on a series of 7 different real-world graph classification problems from the benchmark of Yanardag & Vishwanathan (2015). It is worth pointing out some interesting features of each dataset. Both MUTAG and PTC are very small datasets, with MUTAG having only 18 graphs in the test set when using a 10% test split. Further, the Proteins dataset has the largest graphs, with an average of 39 nodes per graph. Also, NCI1 and NCI109 are the largest datasets, having over 4000 graphs each, leading to less spurious results. Finally, IMDB-B and IMDB-M generally have smaller graphs, with IMDB-M having an average of only 13 nodes per graph. The small size of the graphs coupled with having 3 classes appears to make IMDB-M a challenging problem. Table 1 compares our LPEGN model to a range of other methods. This highlights that our method achieves a new state-of-the-art result on the NCI1 dataset and is the second strongest on PTC and NCI109. Furthermore, our method performs competitively across all datasets. We achieve a poor ranking score on the Proteins dataset, although the classification accuracy of the model is competitive with leading results and only falls slightly short of the bulk of other methods. A comparison of the distribution of training accuracy is presented in Figure 8, and a ranking based comparison is presented in Figure 9. 6.2 SCALABILITY We compare global permutation equivariant models with our local permutation equivariant model to assess the improvements in scalability offered by local permutation equivariance. Here we compare the GPU memory required by the model against the average size of graph in the dataset. Since the computational cost of global methods scales superlinearly with the size of the graph, due to the requirement to treat the entire graph as a single adjacency tensor, local equivariance is expected to have a lower computational cost, as each update only requires local node neighbourhoods to be expressed as adjacency tensors, which are typically much smaller than the full graph. Global methods therefore scale with O(n^2) for graphs with n nodes, while local methods scale with O(nm^2), where m is the number of nodes in a node neighbourhood and typically m ≪ n. Figure 4 shows how global and local permutation equivariant models scale in GPU memory usage as the average size of the graphs in the dataset increases. This allows the LPEGN method to scale to graph datasets that were not feasible with global equivariance. 7 FUTURE WORK From Table 1 it is clear that IMDB-M is a dataset on which our method has weaker performance. As stated in Section A.3, between hidden local equivariant graph neural network layers we only make use of order 1 and 2 representations in the experiments in this paper. As it was shown by Maron et al. (2019) that increasing the order of the permutation representation increases the expressivity in line with the k-WL test, the expressivity of our method could be improved through the consideration of higher order permutation representations. Making use of higher order representations would, we believe, improve results on the IMDB-M dataset and therefore makes for an interesting future direction. 8 CONCLUSION We present a graph neural network framework for building models comprising local permutation equivariant update functions.
The method presented is general in that it provides a framework for operating on sub-graphs with permutation equivariant convolutions, where a choice of representation space can be made depending on the expressivity required. This maintains expressivity in the update functions by utilising restricted representations, while improving scalability over global permutation equivariant methods by operating on smaller sub-graphs. We show that this method includes many previously published approaches as specific cases. Using a general approach, as our framework does, makes it easier to build provably expressive graph neural networks without the need to embed structural information about the task at hand, as is done in other methods. Further, we experimentally validate the method using k = 1 to create the sub-graphs and ρ1 ⊕ ρ2 representations for the local update functions on a set of graph classification datasets. This model produces state-of-the-art results on one of the datasets, achieves second best results on two datasets, and is competitive on the remaining four. In addition, ranking the model against existing methods on each dataset shows that our method is one of the strongest performing methods. Furthermore, when compared to global permutation equivariant models, our method offers a significant improvement in terms of GPU memory usage, improving the scalability of the method. A APPENDIX A.1 ISOMORPHIC GRAPHS An example of two isomorphic and two non-isomorphic graphs is shown in Figure 5. To a permutation invariant message passing update function utilising a max pooling aggregation function, the isomorphic and non-isomorphic graphs are equivalent when updating the central node. A.2 MATHEMATICAL BACKGROUND Definition 5 A group is a set G with a binary operation ◦ satisfying the following laws:
(G0) (Closure law): For all g, h ∈ G, g ◦ h ∈ G
(G1) (Associative law): g ◦ (h ◦ k) = (g ◦ h) ◦ k for all g, h, k ∈ G
(G2) (Identity law): There exists e ∈ G such that g ◦ e = e ◦ g = g for all g ∈ G
(G3) (Inverse law): For all g ∈ G, there exists h ∈ G with g ◦ h = h ◦ g = e
Definition 6 A representation of a finite group G on a finite-dimensional complex vector space V is a homomorphism ρ : G → GL(V) of the group to the automorphisms of V (Fulton & Harris, 2013). This allows group elements to be expressed as invertible matrices and the group operation to be matrix multiplication. A.3 MODEL ARCHITECTURE We consider the input graphs as an input feature space that is an order 2 representation. For each local permutation equivariant linear layer we use order 1 and 2 representations as the feature spaces. This allows for projection down from graph to node feature spaces through the basis for ρ2 → ρ1, projection up from node to graph feature spaces through the basis for ρ1 → ρ2, and mappings across the same order representations through ρ2 → ρ2 and ρ1 → ρ1. The final local permutation equivariant linear layer maps to order 0 representations through ρ2 → ρ0 and ρ1 → ρ0 for the task of graph level classification. In addition to the graph layers, we also add 3 MLP layers to the end of the model. Despite these specific choices, which were made to provide a baseline of our method for comparison to existing methods, the framework we present is much more general and different representation spaces can be chosen. We present the general framework in Figure 6.
This shows how different permutation representation spaces, ρ1 ⊕ ρ2 ⊕ · · · ⊕ ρi, can be chosen for different layers in the model and how different k values can be chosen when creating the sub-graphs in each layer. A.4 EXPRESSIVITY Figure 7 shows the training accuracy achieved by the LPEGN model across a range of datasets. Panels (a, b, c, and d) show that the LPEGN model is able to achieve 100% training accuracy on the PTC, Proteins, NCI1, and NCI109 datasets. This demonstrates that the model utilising only order-1 and order-2 permutation representations is sufficiently expressive. In addition, (e) shows that the model achieves very close to 100% accuracy on the IMDB-B dataset. On the other hand, (f) shows that the model training accuracy plateaus above 70% for the IMDB-M dataset. This highlights that the model is not sufficiently expressive to achieve 100% accuracy on this dataset. As discussed in Section 7, we believe that utilising higher order permutation representations would make the model more expressive and as a result achieve a higher accuracy on IMDB-M. A.5 COMPARISON OF RESULTS In addition to the comparison across datasets in Table 1, Figure 8 shows the training accuracy distribution of the LPEGN method and compares it to other methods from Table 1. The multimodal distribution of LPEGN for the PTC dataset highlights why it has a large standard deviation; this is likely a result of the fact that the PTC dataset is very small. Given the poor ranking of the LPEGN method in Table 1, comparing the results to other methods here highlights that the LPEGN result is competitive. For the NCI1 and NCI109 datasets the distribution of results of our method highlights the strong performance of the method. For IMDB-B and IMDB-M the distribution of results for the LPEGN method also highlights that it is competitive on these datasets. Further, we propose an additional method of comparison, namely the counts of wins of our LPEGN method against other methods and the counts of significant wins. Comparing the counts of wins, shown in Figure 9, highlights that our method is one of the strongest performing across the range of datasets. Where LPEGN under-performs against other methods, this can largely be attributed to weaknesses on the Proteins and IMDB-M datasets, which we suspect using higher order representations could improve. A.6 CHOICE OF LOCAL NEIGHBOURHOOD We show how the choice of k value impacts the method by analysing the MUTAG dataset and comparing the size of sub-graphs found for different k values, ranging from the most local, k = 1, up to equivalence with a global update, k = 15, shown in Figure 10. A.7 COMPARISON OF LOCAL AND GLOBAL FEATURE SPACES We compare the case of global permutation equivariance to our local permutation equivariance, demonstrating how sub-graphs and the choice of representation are made, in Figure 11. A.8 PROOF OF NO LOSS OF EXPRESSIVITY WHEN USING RESTRICTED REPRESENTATIONS Restricting the permutation representation, ρn, from a group G on n nodes to a subgroup H on m nodes yields the restricted representation ρ̃m := Res^G_H(ρn). The basis for the permutation representation ρn from a set of nodes to a set of nodes is given in Figure 3 and has 2 basis elements. We show that the restricted representation adds 3 more basis elements in Figure 3. From Definition 2, there are no edge features between the nodes in the sub-graph and the nodes outside of the sub-graph.
Therefore 2 of the basis elements introduced in the restricted representation are always multiplied by zeros and are not required. Further, the extra third basis element introduced simply weights the node features that are not part of the sub-graph by themselves, and, as our method extracts a sub-graph for each node in the graph, this update is subsumed by the sub-graph update of that node. Therefore the restricted representation for our framework is equal to the permutation representation on a lower dimensional space, and ρ̃m = ρm. The proof of no loss of expressivity when using restricted representations therefore follows from the proof that k-order graph networks are as powerful as k-WL (Maron et al., 2019). A.9 IMPLEMENTING OTHER MODELS WITHIN OUR FRAMEWORK We have re-drawn our model in a step-by-step format in Figure 6 to highlight the differences from other models and make clear that this is a more general framework for learning permutation equivariant models. In the graph classification benchmark datasets used, the input to the model is a graph with node and edge features; this can be represented as a 2nd-order permutation representation, so the input representation would be j = 2. The convolution can then map from this representation, ρj, to multiple different representation spaces, ρ0 ⊕ ρ1 ⊕ · · · ⊕ ρi. Subsequent convolutions can then map from these multiple permutation representations, ρ0 ⊕ ρ1 ⊕ · · · ⊕ ρi, to multiple different permutation representations, ρ0 ⊕ ρ1 ⊕ · · · ⊕ ρi. The choice of representations can be made depending on a trade-off between expressivity and computational cost, as lower order representation spaces have less expressivity but also lower computational cost. Local Natural Graph Networks (LNGNs) (de Haan et al., 2020) take the input feature space and embed it into an invariant scalar feature of the edge neighbourhood graph. This is the same as using a specific choice of k-hop sub-graph creation and of permutation representation space for the sub-graph convolution: in the case of LNGNs the choice would be k = 1 and mapping the input feature space to the representation ρ0, creating a permutation invariant feature space. Then any graph neural network with invariant features can be used; in the paper the choice made is to use a GCN (Kipf & Welling, 2016), which can also be covered by our framework. Here the choice would again be k = 1 when creating the sub-graphs, using a sub-graph convolution with representation spaces ρ0 → ρ0. Global Equivariant Graph Networks (EGNs) (Maron et al., 2018) use a choice of k = n for n-node graphs when creating the sub-graphs, which corresponds to not selecting a sub-graph and instead operating over the entire graph. They then use the representation space ρ2 → ρ2, mapping from a graph feature space to a graph feature space. Local Permutation Equivariant Graph Networks (LPEGN) (Ours): in our paper we choose to use k = 1 throughout, in line with the vast majority of previous work on graph neural networks, but we use a representation space of ρ1 ⊕ ρ2 → ρ1 ⊕ ρ2 in the hidden layers of the model; we note that this was simply a choice that seemed a simple case to present as a comparison with previous work on the benchmark classification tasks.
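The correspondences in A.9 amount to two choices per layer: the k used when creating sub-graphs and the set of permutation representation orders used by the convolution. Below is a small, purely illustrative configuration (the field names are invented) summarising that mapping.

# Hypothetical per-model configuration of the framework described in A.9.
framework_configs = {
    "LNGN (de Haan et al., 2020)": {"k_hops": 1,   "rep_orders": (0,)},    # invariant features, GCN-style
    "EGN (Maron et al., 2018)":    {"k_hops": "n", "rep_orders": (2,)},    # whole-graph update
    "LPEGN (this paper)":          {"k_hops": 1,   "rep_orders": (1, 2)},  # rho_1 + rho_2 hidden layers
}
for name, cfg in framework_configs.items():
    print(f"{name}: k = {cfg['k_hops']}, representation orders = {cfg['rep_orders']}")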
1. What is the focus of the paper on graph neural networks?
2. What are the strengths and weaknesses of the proposed approach compared to prior works?
3. How does the reviewer assess the clarity and quality of the paper's content?
4. Are there any concerns regarding the experimental results and scalability claims?
5. Are there any questions regarding the architecture and implementation details of the proposed method?
Summary Of The Paper Review
Summary Of The Paper This paper introduces local permutation equivariant graph networks. The main motivation for introducing these graph neural networks is to improve scalability with respect to global permutation equivariant models. The paper uses the very abstract language of category theory. In Section 6, the authors provide some experimental results on real-world graph classification problems.
Review The contribution of this paper is unclear to me. The approach is very similar to 'Natural graph networks' by Pim de Haan, Taco Cohen, Max Welling (NEURIPS 2020), and I would have liked to see a clear comparison of the approaches and the results.
The paper is very hard to follow. I agree that category theory is a very abstract field and that an introduction is beyond the scope of the paper, but it would have been nice to give some simple toy examples highlighting the benefits of the approach proposed here. It is impossible to understand the architecture used here as the description in Section 5 remains at a very high level.
I could not find the code associated with this paper.
The experiment about scalability in Section 6.2 is not convincing at all, as it deals with graphs with a maximum average number of nodes of 40.
Title Local Permutation Equivariance For Graph Neural Networks Abstract In this work we develop a new method, named locally permutation-equivariant graph neural networks, which provides a framework for building graph neural networks that operate on local node neighbourhoods, through sub-graphs, while using permutation equivariant update functions. Message passing neural networks have been shown to be limited in their expressive power and recent approaches to over come this either lack scalability or require structural information to be encoded into the feature space. The general framework presented here overcomes the scalability issues associated with global permutation equivariance by operating on sub-graphs through restricted representations. In addition, we prove that there is no loss of expressivity by using restricted representations. Furthermore, the proposed framework only requires a choice of k-hops for creating sub-graphs and a choice of representation space to be used for each layer, which makes the method easily applicable across a range of graph based domains. We experimentally validate the method on a range of graph benchmark classification tasks, demonstrating either state-of-the-art results or very competitive results on all benchmarks. Further, we demonstrate that the use of local update functions offers a significant improvement in GPU memory over global methods. 1 INTRODUCTION Many forms of data are naturally structured as graphs such as molecules, bioinformatics, social, or financial and it is therefore of interest to have algorithms which operate over graphs. Machine learning on graphs has received much interest in recent years, with the general framework of a message passing network providing both a useful inductive bias and scalability across a range of domains (Gilmer et al., 2017). However, Xu et al. (2019) show that a model based on a message passing framework with permutation invariant aggregation functions is limited in expressive power. Therefore, there exists many non isomorphic graphs that a model of this form cannot distinguish between. Figure 5 demonstrates two non-isomorphic graphs for which a message passing framework with max pooling would not be able to distinguish between the two graphs. More expressive graph networks exists and a common measure of expressivity is the WeisfeilerLehman (WL) test. One such method of building more expressive networks is based directly on the WL test. Here a neural network architecture is built based on variants of the WL test. Bouritsas et al. (2020) use a permutation invariant local update function, but incorporate permutation equivariant structural information into the feature space. Morris et al. (2019a) build models based on different WL variants that consider local and global connections. Bodnar et al. (2021b) introduce a WL test on simplicial complexes and incorporate this into a message passing scheme. Bodnar et al. (2021a) extend work on simplicial complexes to cell complexes, which subsume simplicial complexes. On the other hand, rather than trying to directly incorporate techniques from WL tests directly into networks other model make use of permutation symmetries to build permutation equivariant graph neural networks (Maron et al., 2018). This model can be built for k-order feature spaces and it was shown by Maron et al. (2019) that such models can distinguish between non-isomorphic graphs as well as the k-WL test. 
Natural graph networks are a different class of graph neural network, where the constraint placed upon the linear layer is that of naturality (de Haan et al., 2020). The naturality constraint says that for each isomorphism class a map must be chosen that is equivariant to automorphisms. In general the task of learning on graphs consists of utilising many graphs of different sizes. Current methods for utilising permutation equivariant graph neural networks require that the graph be represented as an adjacency tensor, which limits there scalability. Furthermore, global natural graph networks also perform computations on entire graph features, which leads to a large computational complexity for large graphs. Local gauge symmetries have been considered to build models with local equivariance (Cohen et al., 2019). This approach improves scalability of models by utilising local update functions, however for graphs we do not have a single local symmetry. Currently this is overcome in the majority of graph neural networks presented by utilising some form of message passing, but, in general, all works use a permutation invariant aggregation function leading to good scalability but poor expressivity. Local natural graph networks attempt to overcome the limited expressivity through placing a local naturality constraint on the message passing and having different message passing kernels on non-isomorphic edges. Through considering graph neural networks from an elementary category theory perspective and making use of aspects of group theory we present a framework for building local permutation equivariant models. This allows us to build a graph neural network model with local update functions that are permutation equivariant by considering restricted representations of the representation space of the whole graph. Further, we prove that this does not cause a loss of expressivity of the model and that this maintains the option to have a k-order feature space that ensures expressivity equal to k-WL test. Also, by constraining the kernel space under restricted representations, a natural weight sharing scheme becomes apparent, namely sharing weights across local graph neighbourhoods of the same degree. The approach of building models with a framework based on group theory makes clear the generality of the approach, where choices of representation space can be made for each convolutional layer without requiring prior information such as structural information to be encoded into the feature space. This framework can also be shown to include other leading methods as specific cases. 2 BACKGROUND 2.1 GRAPH NETWORKS Different graph neural networks express graphs in alternative forms. Generally, for a message passing model, a matrix of node features and a matrix of edge features is combined with a sparse edge index array specifying the connectivity of the graph. In other works, the graph is provided in a dense format, where the graph is given as a adjacency tensor with node and edge features held in one tensor. In this work we present the graph as follows: Definition 1 A Concrete Graph G is a finite set of nodes V(G) ⊂ N and a set of edges E(G) ⊂ V(G)× V(G). The set of node ids may be non-contiguous and we make use of this here as we extract overlapping sub-graphs when performing the local updates. The same underlying graph can be given in may forms by a permutation of the ordering of the natural numbers of the nodes. 
Definition 2 A sub-Concrete Graph H is created by taking a node i ∈ V(G), and extracting the nodes j ∈ V(G) and edges (i, j) ⊂ V(G)× V(G), such that there is a connection between nodes i and j. Once a sub-concrete graph has been extracted, this same underlying sub-graph could be expressed through different permutations of the underlying numbering of the nodes. For brevity we refer to sub-concrete graphs as subgraphs throughout the paper. Definition 3 A Graph isomorphism, φ : G→ G′ is a bijection between the vertex sets of two graphs G andG′, such that two vertices u and v are adjacent inG if and only if φ(u) and φ(v) are adjacent in G′. This mapping is edge preserving, i.e. satisfies for all (i, j) ∈ V(G)× V(G): (i, j) ∈ E(G)⇐⇒ (φ(i), φ(j)) ∈ E(G′) An isomorphism from the graph to itself is known as an automorphism. Relabelling of the graph by a permutation of the nodes is called a graph isomorphism, where an example of two isomorphic graphs is given in Figure 5. We desire that the linear layers of the graph neural network respect the composition of graph isomorphisms. This requires us to define the feature space of the graphs and how feature spaces of isomorphic graphs are related. 2.2 PERMUTATION REPRESENTATIONS The feature space of the graphs is a vector space V , where a representation of the group G is a homomorphism ρ : G → GL(V ) of G to the group of automorphisms of V (Fulton & Harris, 2013). A map KG between two representations of G is a vector space map. The elements of the group g ∈ G can act on a vector v ∈ V by the representation matrix v → ρ(g)v. The symmetric subspace of the representation is the space of solutions to the constraint ∀g ∈ G : ρ(g)v = v. Here we are considering the symmetries of the symmetric group Sn. This constraint can be solved for different order representations (Maron et al., 2018; Finzi et al., 2021). We present the space of linear layers mapping from k-order representations to k′-order representations in Figure 2. In addition, for the linear map KG, we require that if a graph is passed through KG and then transformed by permutation to an isomorphic graph this result is the same as if a graph is transformed by the same permutation to an isomorphic graph and then passed through KG. In short, this requires that permutation equivariance is satisfied. 2.3 CATEGORY THEORY This section does not provide a complete overview of category theory, nor even a full introduction, but aims to provide a sufficient level of understanding to aid the reader with further sections of the paper, where we believe presenting the comparison between models from a category theory perspective makes more clear the distinctions between them. A category, C, consists of a set of objects, Ob(C), and a set of morphisms (structure-preserving mappings) or arrows, f : A → B, A,B ∈ Ob(C). There is a binary operation on morphisms called composition. Each object has an identity morphism. Categories can be constructed from given ones by constructing a subcategory, in which each object, morphism, and identity is from the original category, or by building upon a category, where objects, morphisms, and identities are inherited from the original category. A functor is a mapping from one category to another that preserves the categorical structure. For two categories C and D a functor F : C → D maps each object A ∈ Ob(C) to an object F (A) ∈ Ob(D) and maps each morphism f : A→ B in C to a morphism F (f) : F (A)→ F (B) in D. Definition 4 A groupoid is a category in which each morphism is invertible. 
A groupoid where there is only one object is usually a group. 3 GLOBAL EQUIVARIANT GRAPH NETWORKS 3.1 GLOBAL PERMUTATION EQUIVARIANCE Global permutation equivariant models have been considered by Hartford et al. (2018); Maron et al. (2018; 2019); Albooyeh et al. (2019), with Maron et al. (2018) demonstrating that for order-2 layers there are 15 operations that span the full basis for an permutation equivariant linear layer. These 15 basis elements are shown in Figure 2 with each basis element given by a different color in the map from representation ρ2 → ρ2. Despite these methods, when solved for the entire basis space, having expressivity as good as the k-WL test, they operate on the entire graph. Operating on the entire graph features limits the scalability of the methods. In addition to poor scalability, global permutation appears to be a strong constraint to place upon the model. In the instance where the graph is flattened and an MLP is used to update node and edge features the model would have n4 trainable parameters, where n is the number of nodes. On the other hand, a permutation equivariant update has only 15 trainable parameters and in general 15 n4. Viewing a global permutation equivariant graph network from a category theory perspective there is one object with a collection of arrows representing the elements of the group. Here the arrows or morphisms go both from and to this same single object. The feature space is a functor which maps from a group representation to a vector space. For a global permutation equivariant model the same map is used for every graph. G e g1 g2 Symmetric Group 3.2 GLOBAL NATURALITY Global natural graph networks (GNGN) consider the condition of naturality, (de Haan et al., 2020). GNGNs require that for each isomorphism class of graphs there is a map that is equivariant to automorphisms. This naturality constraint is given by the condition ρ′(φ) ◦ KG = KG′ ◦ ρ(φ), which must hold for every graph isomorphism φ : G → G′ and linear map KG. While the global permutation equivariance constraint requires that all graphs be processed with the same map, global naturality allows for different, non-isomorphic, graphs to be processed by different maps and as such is a generalisation of global permutation equivariance. As is the case for global permutation equivariant models, GNGNs scale poorly as the constraint is placed over the entire graph and linear layers require global computations on the graphs. Viewing a GNGN from a category theory perspective there is a different object for each concrete graph, which form a groupoid. Then, there is a mosphism or arrow for each graph isomorphism. These can either be automorphisms, if the arrow maps to itself, or isomorphisms if the arrow maps to a different object. The feature spaces are functors which map from this graph category to the category of vector spaces. The GNG layer is a natural transformation between such functors consisting of a different map for each non-isomorphic graph. G1 G2 G3 Groupoid of Concrete Graphs 4 LOCAL EQUIVARIANT GRAPH NETWORKS Local equivariant models have started to receive attention following the successes of global equivariant models and local invariant models. The class of models that are based on the WL test are not in general locally permutation equivariant in that they still use a message passing model with permutation invariant update function. 
4 LOCAL EQUIVARIANT GRAPH NETWORKS
Local equivariant models have started to receive attention following the successes of global equivariant models and local invariant models. The class of models based on the WL test is not, in general, locally permutation equivariant, in that these models still use a message passing scheme with a permutation invariant update function. Despite this, many of these models inject permutation equivariant information into the feature space, which improves their expressivity (Bouritsas et al., 2020; Morris et al., 2019a; Bodnar et al., 2021b;a). In these models the information to be injected into the feature space is predetermined by a choice of which structural or topological information to use, whereas our model uses representations of the permutation group, making it a very general model that still guarantees expressivity. In contrast to utilising results from the WL test, covariant compositional networks (CCNs) look at permutation equivariant functions, but they do not consider the entire basis space considered in Maron et al. (2018) and instead use four equivariant operations (Kondor et al., 2018). This means that their permutation equivariant linear layers are not as expressive as those used in the global permutation equivariant layers. Furthermore, in a CCN the node neighbourhood and feature dimensions grow with each layer, which can be problematic for larger graphs and limits scalability. Another local equivariant model is that of local natural graph networks (LNGNs) (de Haan et al., 2020). An LNGN uses a message passing framework, but instead of using a permutation invariant aggregation function, it specifies the constraint that node features transform under isomorphisms of the node neighbourhood and that a different message passing kernel is used on non-isomorphic edges. In practice this leads to little weight sharing in graphs that are quite heterogeneous, and as such the layer is re-interpreted so that a message from node p to node q, kpq vp, is given by a function k(Gpq, vp) of the edge neighbourhood Gpq and the feature value vp at p. Viewing an LNGN from a category theoretic perspective, there is a groupoid of node neighbourhoods, whose morphisms are isomorphisms between node neighbourhoods, and a groupoid of edge neighbourhoods, whose morphisms are isomorphisms between edge neighbourhoods. In addition, there is a functor mapping each edge neighbourhood to the node neighbourhood of its start node, and a functor mapping similarly to the node neighbourhood of its tail node. The node feature spaces are functors mapping from the category of node neighbourhoods to the category of vector spaces, and composing two functors gives a mapping from edge neighbourhoods to the category of vector spaces. An LNG kernel is a natural transformation between these functors. (Figure: the groupoid of node neighbourhoods N1, N2, N3 and the groupoid of edge neighbourhoods E1, E2, E3.)

5 LOCAL PERMUTATION EQUIVARIANCE
A local permutation equivariant graph network (LPEGN) improves upon the scalability of global permutation equivariant models by considering permutation equivariance at lower scales. Instead of performing the update function on the entire graph, we perform the update function on node neighbourhoods, as is done in message passing models. While performing the update functions on node neighbourhoods, we maintain improved expressivity by using k-order permutation representations. The intuition behind imposing permutation equivariance on node neighbourhoods rather than the entire graph is that the model can learn expressive features about a part of the sub-graph without requiring knowledge of permutations multiple hops away from the central update node.
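As a toy illustration of this intuition (the example is ours and not taken from the paper), consider a central node whose three neighbours carry identical features but are connected among themselves in two non-isomorphic ways: an invariant aggregation of neighbour features cannot tell the two neighbourhoods apart, while any update that sees the sub-graph's order-2 (adjacency) representation can.

import numpy as np

ones = np.ones(3)  # identical features on the three neighbours

# Adjacency among the neighbours of the central node (central node excluded).
A = np.zeros((3, 3)); A[0, 1] = A[1, 0] = 1   # a single edge between two neighbours
B = np.ones((3, 3)) - np.eye(3)               # the three neighbours form a triangle

# A standard invariant aggregation only sees the neighbour features,
# so both neighbourhoods produce identical messages.
print(ones.max(), ones.max())  # 1.0 1.0 -> indistinguishable under max pooling
print(ones.sum(), ones.sum())  # 3.0 3.0 -> indistinguishable under sum pooling

# An update acting on the sub-graph's order-2 representation sees the adjacency
# itself, e.g. the all-ones basis element applied to it distinguishes them.
print(A.sum(), B.sum())        # 2.0 6.0 -> the two neighbourhoods differ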
This framework generalises global permutation equivariant models as it is compatible with all length scales: if the graph structure is used to determine node neighbourhoods, then any value of k can be chosen to determine the k-hops from the central update node, producing the sub-graph for which permutation equivariance is required. If the value chosen for the k-hops is sufficiently large, the layer becomes a global permutation update. The basis functions for different order representation spaces, split into different degrees for a 1-hop node neighbourhood, are given in Figure 1. The method therefore requires a choice of k for the number of hops away from the central node to consider in the local update, and we discuss this choice in Section 5.2. In addition, the framework allows for a choice of weight sharing, which we discuss in Section 5.3.

5.1 RESTRICTED REPRESENTATION
Given a graph comprising n nodes, global equivariant models consider the permutation representation of the permutation group G = Sn, namely the representation ρ : G → GL(Rc). Here we consider local updates on sub-graphs with m nodes, where we are interested in the sub-group H = Sm ≤ Sn. We can therefore consider the restricted representation of the sub-group Sm, where restriction can be seen as dropping some of the symmetries of the group Sn. The restricted representation is denoted ρ̃ := Res^G_H(ρ) : H → GL(Rc). The global equivariance case using representations ρ and the case using restricted representations ρ̃ are shown in Figure 3. Both figures show a basis mapping from an order-1 to an order-1 permutation representation. The restricted representation Res^S5_S4 drops the permutation symmetry associated with node 5. Dropping the permutation symmetry of node 5 results in 3 additional parameters: one for the update of node 5 based on node 5's features, another for the update of node 5 based on the features of the other nodes in the graph, and a final parameter for the update of the other nodes in the graph based on node 5's features. We prove that using restricted representations in our framework incurs no loss of expressivity in Appendix A.8.

5.2 CHOICE OF LOCAL NEIGHBOURHOOD
The LPEGN model framework performs the permutation equivariant update on local sub-graphs, although a choice can be made as to how these sub-graphs are created. One option is to use the underlying graph structure and choose a value of k to extract local neighbourhoods that include nodes which are at most k hops from the central node. This method creates a sub-graph for each node in the graph. The choice of the k value can be seen as choosing the length scale over which the permutation symmetry is exploited. In other words, k = 1 is the shortest length scale, and node features will be updated such that they are permutation equivariant to their 1-hop neighbours, but not equivariant to nodes further away in the graph. On the other hand, choosing a sufficiently large value of k will create a model equivalent to global permutation equivariant models, where each update is permutation equivariant to permutations of the entire graph. Throughout this work we choose k = 1 unless otherwise stated, to take the most local permutation equivariant updates. We show how this choice of k value impacts the method through an analysis of the MUTAG dataset in Figure 10.
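For concreteness, here is a minimal sketch (ours, not the paper's implementation) of extracting the k-hop sub-graph around a central node from a dense adjacency matrix. It assumes the sub-graph is the induced sub-graph on the nodes reached within k hops, and it keeps the possibly non-contiguous node ids of Definition 2; the function name k_hop_subgraph and the example graph are illustrative.

import numpy as np

def k_hop_subgraph(adj, centre, k):
    # Return the node ids within k hops of `centre` and the induced adjacency.
    n = adj.shape[0]
    reached = np.zeros(n, dtype=bool)
    reached[centre] = True
    frontier = np.array([centre])
    for _ in range(k):
        nxt = np.flatnonzero(adj[frontier].any(axis=0) & ~reached)
        reached[nxt] = True
        frontier = nxt
        if frontier.size == 0:
            break
    nodes = np.flatnonzero(reached)          # non-contiguous ids, as in Definition 2
    return nodes, adj[np.ix_(nodes, nodes)]  # induced sub-graph adjacency

# 5-node path graph 0-1-2-3-4
adj = np.zeros((5, 5))
for i, j in [(0, 1), (1, 2), (2, 3), (3, 4)]:
    adj[i, j] = adj[j, i] = 1
print(k_hop_subgraph(adj, centre=2, k=1))  # nodes [1 2 3] and their 3x3 adjacency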
5.3 CHOICE OF WEIGHT SHARING
In general, when constructing the sub-graphs, a variety of different sized sub-graphs are found due to the differing degrees of the nodes in the graph. This allows for a further choice, namely the weight sharing method to be used. Given that permutation equivariance is a strong constraint to place on the linear layers, we share weights across sub-graphs of the same size. This means that sub-graphs of different sizes do not share weights and can be updated differently. The intuition is that sub-graphs of the same size already have some similarity in that they are of the same size, while sub-graphs of different sizes are less likely to be similar and hence should be updated differently. Throughout this paper we share weights across local neighbourhoods of the same size (degree), although in situations where there are very few local neighbourhoods of a particular size we group these together.

5.4 CHOICE OF REPRESENTATION SPACE
In Section 5.1 we considered the restricted representation of a sub-group Sm ≤ Sn, and in Section 5.2 we detailed how local sub-graphs are selected. Here we connect the two to present the representation space used in our LPEGN framework. Focusing on the nodes whose permutation symmetry we did not drop, it can be seen in Figure 3 that for these nodes the restricted representation is equivalent to the global permutation equivariant representation. Furthermore, given our choice of sub-graph construction, we drop the permutation symmetry of a node precisely because it is not connected to the central update node. The edge features connecting the central node to the node whose permutation symmetry we are dropping are therefore zero, and hence we are not interested in the additional parameters introduced by the restricted representation connecting the two nodes. Similarly, as the node whose permutation symmetries we are dropping is not connected to the chosen sub-graph, we are also not interested in the additional parameters introduced by the restricted representation for this node. As a result, due to the choice of sub-graph construction, the restricted representation for our sub-group has zero features in the positions of the newly introduced parameters and is therefore equivalent to the permutation representation on a lower dimensional space. Therefore, where global permutation equivariant updates use representations ρ : G → GL(Rc), our local permutation equivariant model uses representations ρ̃ : H → GL(Rc̄), where c̄ ≤ c. The scheme for creating representations of local neighbourhoods is shown in Figure 1, where some representations of the local neighbourhoods are shown.

5.5 LOCAL PERMUTATION EQUIVARIANT GRAPH NETWORK
An LPEGN combines the chosen method of creating sub-graphs as local neighbourhoods with a choice of weight sharing scheme, and makes use of permutation representations on these sub-graphs. The process of creating sub-graphs, updating them according to the chosen weight sharing scheme using permutation representations, and re-constructing the graph structure is presented in Figure 1. Viewing an LPEGN from a category theoretic perspective, each different size of node neighbourhood is a sub-group H, which is a different object, and there is a morphism or arrow for each permutation of the neighbourhood; together these form a groupoid. The sub-group representations are functors from the category of node neighbourhoods to the category of vector spaces. (Figure: the groupoid of symmetric sub-groups H1, H2, H3, each with its arrows e, h1, h2.)
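The per-size weight sharing of Sections 5.3 to 5.5 can be sketched in a few lines. The following toy code (ours, written for illustration rather than taken from the paper) updates only order-1 (node) features of each sub-graph with the two-element equivariant basis, keeping one weight vector per sub-graph size; the function name equivariant_update and the toy features are assumptions.

import numpy as np

def equivariant_update(x, w):
    # Order-1 -> order-1 equivariant map on one sub-graph: w[0]*I + w[1]*(all-ones).
    return w[0] * x + w[1] * x.sum() * np.ones_like(x)

# Toy node features for sub-graphs of different sizes.
subgraphs = [np.array([1.0, 2.0]), np.array([0.5, 1.0, 1.5]), np.array([2.0, 3.0])]

# Weight sharing: one weight vector per sub-graph size.
rng = np.random.default_rng(0)
weights = {size: rng.normal(size=2) for size in {len(s) for s in subgraphs}}

updated = [equivariant_update(x, weights[len(x)]) for x in subgraphs]
print(updated)  # the two size-2 sub-graphs share weights; the size-3 one does not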
6 EXPERIMENTS

6.1 GRAPH BENCHMARKS
We tested our method on a series of 7 different real-world graph classification problems from the benchmark of Yanardag & Vishwanathan (2015). It is worth pointing out some interesting features of each dataset. Both MUTAG and PTC are very small datasets, with MUTAG having only 18 graphs in the test set when using a 10% test split. The Proteins dataset has the largest graphs, with an average of 39 nodes per graph. NCI1 and NCI109 are the largest datasets, each having over 4000 graphs, which leads to less spurious results. Finally, IMDB-B and IMDB-M generally have smaller graphs, with IMDB-M having an average of only 13 nodes per graph. The small size of the graphs, coupled with having 3 classes, appears to make IMDB-M a challenging problem. Table 1 compares our LPEGN model to a range of other methods. It highlights that our method achieves a new state-of-the-art result on the NCI1 dataset and is the second strongest on PTC and NCI109. Furthermore, our method performs competitively across all datasets. We achieve a poor ranking score on the Proteins dataset, although the classification accuracy of the model is competitive with leading results and only falls slightly short of the bulk of other methods. A comparison of the distribution of training accuracy is presented in Figure 8 and a ranking-based comparison is presented in Figure 9.

6.2 SCALABILITY
We compare global permutation equivariant models with our local permutation equivariant model to assess the improvements in scalability offered by local permutation equivariance. Here we compare the GPU memory required by the model against the average size of the graphs in the dataset. Since the computational cost of global methods scales superlinearly with the size of the graph, due to the requirement to treat the entire graph as a single adjacency tensor, local equivariance is expected to have a lower computational cost, as each update only requires local node neighbourhoods to be expressed as adjacency tensors, which are typically much smaller than the whole graph. Global methods therefore scale with O(n^2) for graphs with n nodes, while local methods scale with O(nm^2), where m is the number of nodes in a node neighbourhood and typically m ≪ n. Figure 4 shows how the GPU memory usage of global and local permutation equivariant models scales as the average size of the graphs in the dataset increases. This allows the LPEGN method to scale to graph datasets that were not feasible with global equivariance.

7 FUTURE WORK
From Table 1 it is clear that IMDB-M is a dataset on which our method has weaker performance. As stated in Section A.3, between hidden local equivariant graph neural network layers we only make use of order 1 and 2 representations in the experiments of this paper. Since Maron et al. (2019) showed that increasing the order of the permutation representation increases expressivity in line with the k-WL test, the expressivity of our method could be improved by considering higher order permutation representations. We believe that making use of higher order representations would improve results on the IMDB-M dataset, which therefore makes for an interesting future direction.
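As a rough indication of how the parameter count of such layers grows with representation order, the following small sketch (ours, for illustration) computes Bell numbers; Maron et al. (2018) show that the space of equivariant linear maps from an order-k to an order-k′ representation has Bell(k + k′) basis elements, so higher-order layers remain compact relative to unconstrained layers but still grow quickly.

# Bell numbers via the Bell triangle; Bell(k + k') counts the basis elements of
# the space of equivariant linear maps from order-k to order-k' representations.
def bell(n):
    row = [1]
    for _ in range(n):
        nxt = [row[-1]]
        for v in row:
            nxt.append(nxt[-1] + v)
        row = nxt
    return row[0]

for order in range(1, 5):
    print(f"rho_{order} -> rho_{order}: {bell(2 * order)} basis elements")
# rho_1 -> rho_1: 2, rho_2 -> rho_2: 15, rho_3 -> rho_3: 203, rho_4 -> rho_4: 4140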
8 CONCLUSION
We present a graph neural network framework for building models that comprise local permutation equivariant update functions. The method is general in that it provides a framework for operating on sub-graphs with permutation equivariant convolutions, where a choice of representation space can be made depending on the expressivity required. This maintains expressivity in the update functions by utilising restricted representations, while improving scalability over global permutation equivariant methods by operating on smaller sub-graphs. We show that this method includes many previously published approaches as specific cases. Using a general framework of this kind makes it easier to build provably expressive graph neural networks without the need to embed structural information about the task at hand, as is done in other methods. Further, we experimentally validate the method, using k = 1 to create the sub-graphs and ρ1 ⊕ ρ2 representations for the local update functions, on a set of graph classification datasets. This model produces state-of-the-art results on one of the datasets, achieves the second best results on two datasets, and is competitive on the remaining four. In addition, ranking the model against existing methods on each dataset shows that our method is one of the strongest performing methods. Furthermore, compared to global permutation equivariant models, our method offers a significant improvement in GPU memory usage, improving the scalability of the method.

A APPENDIX

A.1 ISOMORPHIC GRAPHS
An example of two isomorphic and two non-isomorphic graphs is shown in Figure 5. To a permutation invariant message passing update function utilising a max pooling aggregation function, the isomorphic and the non-isomorphic graphs are equivalent when updating the central node.

A.2 MATHEMATICAL BACKGROUND
Definition 5 A group is a set G with a binary operation ◦ satisfying the following laws:
(G0) (Closure law): for all g, h ∈ G, g ◦ h ∈ G.
(G1) (Associative law): g ◦ (h ◦ k) = (g ◦ h) ◦ k for all g, h, k ∈ G.
(G2) (Identity law): there exists e ∈ G such that g ◦ e = e ◦ g = g for all g ∈ G.
(G3) (Inverse law): for all g ∈ G, there exists h ∈ G with g ◦ h = h ◦ g = e.

Definition 6 A representation of a finite group G on a finite-dimensional complex vector space V is a homomorphism ρ : G → GL(V) of the group to the automorphisms of V (Fulton & Harris, 2013). This allows group elements to be expressed as invertible matrices, with the group operation given by matrix multiplication.

A.3 MODEL ARCHITECTURE
We consider the input graphs as an input feature space that is an order 2 representation. For each local permutation equivariant linear layer we use order 1 and 2 representations as the feature spaces. This allows for projection down from graph to node feature spaces through the basis for ρ2 → ρ1, projection up from node to graph feature spaces through the basis for ρ1 → ρ2, and mappings between representations of the same order through ρ2 → ρ2 and ρ1 → ρ1. The final local permutation equivariant linear layer maps to order 0 representations through ρ2 → ρ0 and ρ1 → ρ0 for the task of graph level classification. In addition to the graph layers, we also add 3 MLP layers to the end of the model. Although these specific choices were made to provide a baseline of our method for comparison to existing methods, the framework we present is much more general and different representation spaces can be chosen. We present the general framework in Figure 6, which shows how different permutation representation spaces, ρ1 ⊕ ρ2 ⊕ · · · ⊕ ρi, can be chosen for different layers in the model and how different k values can be chosen when creating the sub-graphs in each layer.
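To illustrate the final invariant readout described above (this sketch is ours and only covers a simplified single-channel version of the architecture), the ρ2 → ρ0 map has two basis invariants, the diagonal sum and the total sum, and the ρ1 → ρ0 map has one, the sum; concatenating them gives permutation invariant graph-level features that can be fed to the closing MLP layers.

import numpy as np

def readout_rho2(A):
    # The two linearly independent invariants of an order-2 feature
    # (rho_2 -> rho_0): the diagonal sum and the total sum.
    return np.array([np.trace(A), A.sum()])

def readout_rho1(x):
    # The single invariant of an order-1 feature (rho_1 -> rho_0): the sum.
    return np.array([x.sum()])

A = np.random.default_rng(0).normal(size=(4, 4))
x = np.random.default_rng(1).normal(size=4)

graph_level = np.concatenate([readout_rho2(A), readout_rho1(x)])

# graph_level is unchanged under any simultaneous permutation of the rows and
# columns of A and the entries of x, so it is a valid graph-level feature.
P = np.eye(4)[[2, 0, 3, 1]]
assert np.allclose(graph_level,
                   np.concatenate([readout_rho2(P @ A @ P.T), readout_rho1(P @ x)]))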
A.4 EXPRESSIVITY
Figure 7 shows the training accuracy achieved by the LPEGN model across a range of datasets. Panels (a, b, c, and d) show that the LPEGN model is able to achieve 100% training accuracy on the PTC, Proteins, NCI1, and NCI109 datasets. This demonstrates that the model utilising only order 1 and 2 permutation representations is sufficiently expressive for these datasets. In addition, (e) shows that the model achieves very close to 100% accuracy on the IMDB-B dataset. On the other hand, (f) shows that the model's training accuracy plateaus above 70% on the IMDB-M dataset, which highlights that the model is not sufficiently expressive to achieve 100% accuracy on this dataset. As discussed in Section 7, we believe that utilising higher order permutation representations would make the model more expressive and, as a result, achieve higher accuracy on IMDB-M.

A.5 COMPARISON OF RESULTS
In addition to the comparison across datasets in Table 1, Figure 8 shows the training accuracy distribution of the LPEGN method and compares it to the other methods from Table 1. The multimodal distribution of LPEGN results on the PTC dataset highlights why it has a large standard deviation; this is likely a result of the PTC dataset being very small. Given the poor ranking of the LPEGN method in Table 1, comparing the results to other methods here highlights that the LPEGN result is competitive. For the NCI1 and NCI109 datasets the distribution of results of our method highlights its strong performance, and for IMDB-B and IMDB-M the distributions also show that it is competitive on these datasets. Further, we propose an additional method of comparison, namely the counts of wins of our LPEGN method against other methods and the counts of significant wins. The comparison of win counts shown in Figure 9 highlights that our method is one of the strongest performing across the range of datasets. Where LPEGN under-performs against other methods, this can largely be attributed to weaknesses on the PROTEINS and IMDB-M datasets, which we suspect higher order representations could improve.

A.6 CHOICE OF LOCAL NEIGHBOURHOOD
We show how the choice of k value impacts the method by analysing the MUTAG dataset and comparing the size of the sub-graphs found for different k values, ranging from the most local, k = 1, up to the equivalent of a global update, k = 15, as shown in Figure 10.

A.7 COMPARISON OF LOCAL AND GLOBAL FEATURE SPACES
We compare the case of global permutation equivariance to our local permutation equivariance, demonstrating how sub-graphs and the choice of representation are made, in Figure 11.

A.8 PROOF OF NO LOSS OF EXPRESSIVITY WHEN USING RESTRICTED REPRESENTATIONS
Restricting the permutation representation ρn from a group G with n nodes to a subgroup H with m nodes yields the restricted representation ρ̃m := Res^G_H(ρn). The basis for the permutation representation ρn mapping from a set of nodes to a set of nodes is given in Figure 3 and has 2 basis elements. We show in Figure 3 that the restricted representation adds 3 more basis elements. By Definition 2 there are no edge features between the nodes in the sub-graph and the nodes outside of the sub-graph.
Therefore 2 of the basis elements introduced by the restricted representation are always multiplied by zeros and are not required. Further, the extra third basis element introduced simply weights the node features that are not part of the sub-graph by themselves, and since our method extracts a sub-graph for each node in the graph, this update is subsumed by the sub-graph update of that node. Therefore the restricted representation in our framework is equal to the permutation representation of a lower dimensional space, ρ̃m = ρm, and the proof of no loss of expressivity when using restricted representations follows from the proof that k-order graph networks are as powerful as k-WL (Maron et al., 2019).

A.9 IMPLEMENTING OTHER MODELS WITHIN OUR FRAMEWORK
We have re-drawn our model in a step-by-step format in Figure 6 to highlight the difference from other models and to make clear that this is a more general framework for learning permutation equivariant models. In the datasets used for the graph classification benchmark tasks, the input to the model is a graph with node and edge features; this can be represented as a 2nd-order permutation representation, so the input representation would be j = 2. The convolution can then map from this representation, ρj, to multiple different representation spaces, ρ0 ⊕ ρ1 ⊕ · · · ⊕ ρi. Subsequent convolutions can then map from these multiple permutation representations, ρ0 ⊕ ρ1 ⊕ · · · ⊕ ρi, to multiple different permutation representations, ρ0 ⊕ ρ1 ⊕ · · · ⊕ ρi. The choice of representations can be made as a trade-off between expressivity and computational cost, as lower order representation spaces have less expressivity but also lower computational cost.

Local Natural Graph Networks (LNGNs) (de Haan et al., 2020) take the input feature space and embed it into an invariant scalar feature of the edge neighbourhood graph. This is the same as making a specific choice of k-hop sub-graph creation and permutation representation space for the sub-graph convolution: in the case of LNGNs the choice would be k = 1, mapping the input feature space to the representation ρ0 and creating a permutation invariant feature space. Any graph neural network with invariant features can then be used; in their paper the choice made is a GCN (Kipf & Welling, 2016), which can also be covered by our framework. Here the choice would again be k = 1 when creating the sub-graphs, using a sub-graph convolution with representation spaces ρ0 → ρ0.

Global Equivariant Graph Networks (EGNs) (Maron et al., 2018) use a choice of k = n, for n-node graphs, when creating the sub-graphs, which corresponds to not selecting a sub-graph at all and instead operating over the entire graph. They then use the representation space ρ2 → ρ2, mapping from a graph feature space to a graph feature space.

Local Permutation Equivariant Graph Networks (LPEGN, ours): in this paper we choose to use k = 1 throughout, in line with the vast majority of previous work on graph neural networks, but we use a representation space of ρ1 ⊕ ρ2 → ρ1 ⊕ ρ2 in the hidden layers of the model. We note that this was simply a choice that provides a simple case for comparison with previous work on the benchmark classification tasks.
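The argument of Appendix A.8 can be checked numerically for order-1 features. The following sketch (ours, with the basis layout assumed from the description of Figure 3) builds the five basis elements of the S5-to-S4 restricted representation, assumes the node outside the sub-graph carries a zero feature, and confirms that on the sub-graph nodes the restricted map coincides with the two-element S4-equivariant map.

import numpy as np

n, m = 5, 4                      # S_5 restricted to S_4 (node 5's symmetry dropped)
I4 = np.zeros((n, n)); I4[:m, :m] = np.eye(m)   # identity on nodes 1..4
J4 = np.zeros((n, n)); J4[:m, :m] = 1           # all-ones on nodes 1..4
E55 = np.zeros((n, n)); E55[m, m] = 1           # node 5 from node 5
R5 = np.zeros((n, n)); R5[m, :m] = 1            # node 5 from nodes 1..4
C5 = np.zeros((n, n)); C5[:m, m] = 1            # nodes 1..4 from node 5

rng = np.random.default_rng(0)
w = rng.normal(size=5)
L = w[0]*I4 + w[1]*J4 + w[2]*E55 + w[3]*R5 + w[4]*C5  # restricted-rep equivariant map

x = rng.normal(size=n)
x[m] = 0.0   # node 5 carries no feature: it lies outside the extracted sub-graph

# On the sub-graph nodes the restricted map equals the S_4-equivariant map built
# from just the two basis elements of Figure 3 (identity and all-ones).
sub = w[0]*x[:m] + w[1]*np.ones(m)*x[:m].sum()
assert np.allclose((L @ x)[:m], sub)
print("restricted representation reduces to the lower dimensional permutation representation")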
1. What is the main contribution of the paper regarding Graph Neural Networks?
2. What are the strengths and weaknesses of the proposed framework for building GNNs that operate on local node neighborhoods in a permutation equivariant way?
3. Do you have any concerns about the exposition and clarity of the paper?
4. How does the reviewer assess the scalability studies conducted in the paper?
5. Are there any comparisons with relevant works that use subgraph counting isomorphism/automorphism over local neighborhood?
Summary Of The Paper Review
Summary Of The Paper
Graph Neural Networks have recently become the state of the art for tasks on graphs due to their flexibility and scalability. In this work, the authors propose a framework to build GNNs that operate on local node neighborhoods in a permutation equivariant way, and argue that since LPEGN operates on lower dimensional spaces in comparison to regular GNNs, the proposed technique offers significant improvements in terms of GPU memory usage. The authors make use of category theory basics, employ restricted representations of finite symmetric groups (i.e. fix some nodes while permuting the other elements) based on the number of nodes in the neighborhood of the node, and ensure there is weight sharing between nodes with the same degree to achieve their objective.

Review
Initial Recommendation: Rejection
Reason: In my view, in its current format, the weaknesses outweigh the strengths of the paper. Please see details below.

Strengths:
- Idea: use of restricted representations of symmetric groups to reduce the dimension of the vector space associated with the group representation for linear permutation equivariant layers.
- Practicality: employs a weight sharing scheme when the size of the node neighborhood is the same.
- Demonstrates scalability in terms of GPU memory usage.

Weaknesses: For me, the main weakness of this paper is in its exposition and clarity, which raised several questions. I have listed my main concerns below.
- Lack of precision and clarity in the paper, e.g. (i) Definition 2 is never again used in the paper; (ii) Section 3.1: the space of linear equivariant layers, given by Bell number (4), is 15; what do you mean by reducing the linear layer to just 15 parameters?
- The authors claim that there is no loss of expressivity while using restricted representations of finite symmetric groups, but there is no formal proof for the same. Does this always hold? Do you need them to be normal subgroups? Is there any specific requirement for the representations used (what happens when irreducible reps are used, etc.)?
- While considering node neighborhood and edge neighborhood morphisms, the paper misses comparisons with relevant works which use subgraph isomorphism/automorphism counting over local neighborhoods [1][2][3][4] to obtain provably expressive representations. The mentioned works also almost always perform better than the proposed work on the datasets.
- The scalability studies are incomplete, e.g. they do not describe what the datasets used are, how many graphs are in each dataset, etc.
- No study on the effect on performance when k > 1 (hops) is used (the plots in the appendix only show the sub-graph sizes).

References:
[1] Bouritsas, Giorgos, et al. "Improving graph neural network expressivity via subgraph isomorphism counting." arXiv preprint arXiv:2006.09252 (2020).
[2] Morris, Christopher, Gaurav Rattan, and Petra Mutzel. "Weisfeiler and Leman go sparse: Towards scalable higher-order graph embeddings." arXiv preprint arXiv:1904.01543 (2019).
[3] Bodnar, Cristian, et al. "Weisfeiler and Lehman go cellular: CW networks." arXiv preprint arXiv:2106.12575 (2021).
[4] Bodnar, Cristian, et al. "Weisfeiler and Lehman go topological: Message passing simplicial networks." arXiv preprint arXiv:2103.03212 (2021).
ICLR
Definition 2 A sub-Concrete Graph H is created by taking a node i ∈ V(G), and extracting the nodes j ∈ V(G) and edges (i, j) ⊂ V(G)× V(G), such that there is a connection between nodes i and j. Once a sub-concrete graph has been extracted, this same underlying sub-graph could be expressed through different permutations of the underlying numbering of the nodes. For brevity we refer to sub-concrete graphs as subgraphs throughout the paper. Definition 3 A Graph isomorphism, φ : G→ G′ is a bijection between the vertex sets of two graphs G andG′, such that two vertices u and v are adjacent inG if and only if φ(u) and φ(v) are adjacent in G′. This mapping is edge preserving, i.e. satisfies for all (i, j) ∈ V(G)× V(G): (i, j) ∈ E(G)⇐⇒ (φ(i), φ(j)) ∈ E(G′) An isomorphism from the graph to itself is known as an automorphism. Relabelling of the graph by a permutation of the nodes is called a graph isomorphism, where an example of two isomorphic graphs is given in Figure 5. We desire that the linear layers of the graph neural network respect the composition of graph isomorphisms. This requires us to define the feature space of the graphs and how feature spaces of isomorphic graphs are related. 2.2 PERMUTATION REPRESENTATIONS The feature space of the graphs is a vector space V , where a representation of the group G is a homomorphism ρ : G → GL(V ) of G to the group of automorphisms of V (Fulton & Harris, 2013). A map KG between two representations of G is a vector space map. The elements of the group g ∈ G can act on a vector v ∈ V by the representation matrix v → ρ(g)v. The symmetric subspace of the representation is the space of solutions to the constraint ∀g ∈ G : ρ(g)v = v. Here we are considering the symmetries of the symmetric group Sn. This constraint can be solved for different order representations (Maron et al., 2018; Finzi et al., 2021). We present the space of linear layers mapping from k-order representations to k′-order representations in Figure 2. In addition, for the linear map KG, we require that if a graph is passed through KG and then transformed by permutation to an isomorphic graph this result is the same as if a graph is transformed by the same permutation to an isomorphic graph and then passed through KG. In short, this requires that permutation equivariance is satisfied. 2.3 CATEGORY THEORY This section does not provide a complete overview of category theory, nor even a full introduction, but aims to provide a sufficient level of understanding to aid the reader with further sections of the paper, where we believe presenting the comparison between models from a category theory perspective makes more clear the distinctions between them. A category, C, consists of a set of objects, Ob(C), and a set of morphisms (structure-preserving mappings) or arrows, f : A → B, A,B ∈ Ob(C). There is a binary operation on morphisms called composition. Each object has an identity morphism. Categories can be constructed from given ones by constructing a subcategory, in which each object, morphism, and identity is from the original category, or by building upon a category, where objects, morphisms, and identities are inherited from the original category. A functor is a mapping from one category to another that preserves the categorical structure. For two categories C and D a functor F : C → D maps each object A ∈ Ob(C) to an object F (A) ∈ Ob(D) and maps each morphism f : A→ B in C to a morphism F (f) : F (A)→ F (B) in D. Definition 4 A groupoid is a category in which each morphism is invertible. 
A groupoid where there is only one object is usually a group. 3 GLOBAL EQUIVARIANT GRAPH NETWORKS 3.1 GLOBAL PERMUTATION EQUIVARIANCE Global permutation equivariant models have been considered by Hartford et al. (2018); Maron et al. (2018; 2019); Albooyeh et al. (2019), with Maron et al. (2018) demonstrating that for order-2 layers there are 15 operations that span the full basis for an permutation equivariant linear layer. These 15 basis elements are shown in Figure 2 with each basis element given by a different color in the map from representation ρ2 → ρ2. Despite these methods, when solved for the entire basis space, having expressivity as good as the k-WL test, they operate on the entire graph. Operating on the entire graph features limits the scalability of the methods. In addition to poor scalability, global permutation appears to be a strong constraint to place upon the model. In the instance where the graph is flattened and an MLP is used to update node and edge features the model would have n4 trainable parameters, where n is the number of nodes. On the other hand, a permutation equivariant update has only 15 trainable parameters and in general 15 n4. Viewing a global permutation equivariant graph network from a category theory perspective there is one object with a collection of arrows representing the elements of the group. Here the arrows or morphisms go both from and to this same single object. The feature space is a functor which maps from a group representation to a vector space. For a global permutation equivariant model the same map is used for every graph. G e g1 g2 Symmetric Group 3.2 GLOBAL NATURALITY Global natural graph networks (GNGN) consider the condition of naturality, (de Haan et al., 2020). GNGNs require that for each isomorphism class of graphs there is a map that is equivariant to automorphisms. This naturality constraint is given by the condition ρ′(φ) ◦ KG = KG′ ◦ ρ(φ), which must hold for every graph isomorphism φ : G → G′ and linear map KG. While the global permutation equivariance constraint requires that all graphs be processed with the same map, global naturality allows for different, non-isomorphic, graphs to be processed by different maps and as such is a generalisation of global permutation equivariance. As is the case for global permutation equivariant models, GNGNs scale poorly as the constraint is placed over the entire graph and linear layers require global computations on the graphs. Viewing a GNGN from a category theory perspective there is a different object for each concrete graph, which form a groupoid. Then, there is a mosphism or arrow for each graph isomorphism. These can either be automorphisms, if the arrow maps to itself, or isomorphisms if the arrow maps to a different object. The feature spaces are functors which map from this graph category to the category of vector spaces. The GNG layer is a natural transformation between such functors consisting of a different map for each non-isomorphic graph. G1 G2 G3 Groupoid of Concrete Graphs 4 LOCAL EQUIVARIANT GRAPH NETWORKS Local equivariant models have started to receive attention following the successes of global equivariant models and local invariant models. The class of models that are based on the WL test are not in general locally permutation equivariant in that they still use a message passing model with permutation invariant update function. 
Despite this, many of these models inject permutation equivariant information into the feature space, which improves the expressivity of the models (Bouritsas et al., 2020; Morris et al., 2019a; Bodnar et al., 2021b;a). The information to be injected into the feature space is predetermined in these models by a choice of what structural or topological information to use, whereas our model uses representations of the permutation group, making it a very general model that still guarantees expressivity. In contrast to utilising results from the WL test covariant compositional networks (CCN) look at permutation equivariant functions, but they do not consider the entire basis space as was considered in Maron et al. (2018) and instead consider four equivariant operations (Kondor et al., 2018). This means that the permutation equivariant linear layers are not as expressive as those used in the global permutation equivariant layers. Furthermore, in a CCN the node neighbourhood and feature dimensions grow with each layer, which can be problematic for larger graphs and limits their scalability. Another local equivariant model is that of local natural graph networks (LNGN) (de Haan et al., 2020). An LNGN uses a message passing framework, but instead of using a permutation invariant aggregation function, it specifies the constraint that node features transform under isomophisms of the node neighbourhood and that a different message passing kernel is used on non-isomorphic edges. In practice this leads to little weight sharing in graphs that are quite heterogeneous and as such the layer is re-interpreted such that a message from node p to node q, kpqvp, is given by a function k(Gpq, vp) of the edge neighbourhood Gpq and feature value vp at p. Viewing a LNGN from a category theoretic perspective there is a groupoid of node neighbourhoods where morphisms are isomorphisms between node neighbourhoods and a groupoid of edge neighbourhoods where morphisms are ismorphisms between edge neighbourhoods. In addition, there is a functor mapping from edge neighbourhoods to the node neighbourhood of the start node and a functor mapping similarly but to the tail node of the edge neighbourhood. The node feature spaces are functors mapping from the category of node neighbourhoods to the category of vector spaces. Further, composition of two functors creates a mapping from edge neighbourhoods to the category of vector spaces. A LNG kernel is a natural transformation between these functors. N1 N2 N3 Groupoid of Node Neighbourhoods E1 E2 E3 Groupoid of Edge Neighbourhoods 5 LOCAL PERMUTATION EQUIVARIANCE A local permutation equivariant graph network (LPEGN) improves upon the scalability of global permutation equivariant models by considering permutation equivariance at lower scales. Here, instead of performing the update function on the entire graph, we perform the update function on node neighbourhoods as is done in message passing models. Furthermore, while performing the update functions on node neighbourhoods, we maintain improved expressivity through using korder permutation representations. The intuition behind imposing permutation equivariance on node neighbourhoods rather than the entire graph is that the model can learn expressive features about a part of the sub-graph without requiring knowledge of permutations multiple hops away from the central update node. 
This framework generalises global permutation equivariant models as it is compatible with all length scales, meaning that, if the graph structure is used to determine node neighbourhoods, then any k value can be chosen to determine the k-hops from the central update node producing the sub-graph which permutation equivariance is required for. Therefore, if the value chosen for the k-hops is sufficiently large then the layer becomes a global permutation update. The basis functions for different order representation spaces are given with the split into different degrees for a 1-hop node neighbourhood in Figure 1. The method therefore requires a choice of k for the number of hops away from the central node to consider in the local update and we discuss this choice in Section 5.2. In addition, the framework then allows for a choice of weight sharing, which we discuss in Section 5.3. 5.1 RESTRICTED REPRESENTATION Given a graph comprised of n nodes, global equivariant models consider the permutation representation of the permutation group G = Sn, namely the representation ρ : G → GL(Rc). Here we consider local updates on sub-graphs with m nodes, where we are interested in the sub-group H = Sm ≤ Sn. Therefore we can consider the restricted representation of the sub-group Sm, where the restricted representation can be seen as dropping some symmetries from the group Sn. The restricted representation is denoted by ρ̃ := ResGH(ρ) : H → GL(Rc). The global equivariance case using representations, ρ, and the case using restricted representations, ρ̃, are shown in Figure 3. Both figures show a basis mapping from order 1 to order 1 permutation representation. The restricted repre- sentation ResS5S4 drops the permutation symmetry associated to node 5. Dropping the permutation symmetry of node 5 results in 3 additional parameters, one for the update of node 5 based on node 5’s features, another for the update of node 5 based on the features of the other nodes in the graph, and a final parameter for the update of the other nodes in the graph based on node 5’s features. We proove that using restricted representations in our framework has no loss of expressivity in Appendix A.8. 5.2 CHOICE OF LOCAL NEIGHBOURHOOD The LPEGN model framework performs the permutation equivariant update on local sub-graphs, although a choice can be made as to how these sub-graphs are created. One option is the use the underlying graph structure and choose a k value to extract local neighbourhoods that include nodes which are at most k-hops from the central node. This method creates a sub-graph for each node in the graph. Here the choice of the k value can be seen as choosing a length scale for which the permutation symmetry should be exploited over. In other words, choosing a value of k = 1 is the shortest length scale and node features will be updated such that they are permutation equivariant to their 1-hop neighbours, but not equivariant to nodes further away in the graph. On the other hand, choosing a k value sufficiently large will create a model equivalent to global permutation equivariant models, where each update is permutation equivariant to permutations of the entire graph. Throughout this work we choose k = 1 unless otherwise stated to take the most local permutation equivariant updates. We show how this choice of k value will impact the method through analysing the MUTAG dataset in Figure 10. 
5.3 CHOICE OF WEIGHT SHARING In general when constructing the sub-graphs a variety of different sized sub-graphs are found due to differing degrees of the nodes in the graph. This allows for a further choice, namely the weight sharing method to be used. Given that the permutation equivariance constraint is a strong constraint to place over the linear layers, we perform weight sharing across sub-graphs of the same size. This means that sub-graphs of different sizes do not share weights and can be updated differently. The intuition for this is that sub-graphs of the same size already have some similarity in that they are of the same size, while sub-graphs of a different size are less likely to be similar and hence should be updated differently. Throughout this paper we choose to use weight sharing across local neighbourhoods of the same size degree, although in situations where there is very few local neighbourhoods of a particular size we group these together. 5.4 CHOICE OF REPRESENTATION SPACE In Section 5.1 we considered the restricted representation of a sub-group Sm ≤ Sn and in Section 5.2 we detailed how local sub-graphs are selected. Here we must make a connection between the two to present the representational space used in our LPEGN framework. When focusing in on the nodes that we didn’t drop the permutation symmetry of it can be seen, in Figure 3, that for these nodes the restricted representation is equivalent to the global permutation equivariant representation. Furthermore, given our choice of sub-graph construction we would seek to drop the permutation symmetry from a node in the graph due to the fact it is not connected to the central update node. Therefore the edge features connecting the central node to the node we are dropping the permutation symmetry of are zero. Hence, we are not interested in the additional parameters introduced in the restricted representation connecting the two nodes. Furthermore, as the node we are dropping permutation symmetries for is not connected to the chosen sub-graph we are also not interested in the additional parameters introduced in the restricted representation for this node. As a result, due to the choice of sub-graph construction, the restricted representation for our sub-group has zero features in the position of new parameters introduced and is therefore equivalent to the permutation representation on a lower dimensional space. Therefore where global permutation equivariant updates use representations ρ : G → GL(Rc), our local permutation equivariant model uses representations ρ̃ : H → GL(Rc̄), where c̄ ≤ c. The scheme for creating representations of local neighbourhoods is shown in Figure 1, where some representations of the local neighbourhoods are shown. 5.5 LOCAL PERMUTATION EQUIVARIANT GRAPH NETWORK A LPEGN combines the chosen method of creating sub-graphs as local neighbourhoods with a choice of weight sharing scheme and makes use of permutation representations on these sub-graphs. The process of creating sub-graphs, updating based on the choice of weight sharing using permutation representations, and re-constructing the graph structure is presented in Figure 1. Viewing a LPEGN from a category theoretic perspective, each different size node neighbourhood is a sub-group, H , which is a different object. There are morphisms or arrows for each permutation of the neighbourhood. This forms a groupoid. The sub-group representations are functors from the category of node neighbourhoods to the category of vector spaces. 
H1 H2 H3 e h1 h2 e h1 h2 e h1 h2 Groupoid of Symmetric Sub-Groups 6 EXPERIMENTS 6.1 GRAPH BENCHMARKS We tested our method on a series of 7 different real-world graph classification problems from the benchmark of (Yanardag & Vishwanathan, 2015). It is noteworthy to point out some interesting features of each dataset. We note that both MUTAG and PTC are very small datasets, with MUTAG only having 18 graphs in the test set when using a 10 % testing split. Further, the Proteins dataset has the largest graphs with an average number of nodes in each graph of 39. Also, NCI1 and NCI109 are the largest datasets having over 4000 graphs each, leading to less spurious results. Finally, IMDBB and IMDB-M generally have smaller graphs, with IMDB-M only having an average number of 13 nodes in each graph. The small size of graphs coupled with having 3 classes appears to make IMBD-M a challenging problem. Table 1 compares our LPEGN model to a range of other methods. This highlights that our method achieves a new state-of-the-art result on the NCI1 dataset and is the second strongest on PTC and NCI109. Furthermore, our method performs competitively across all datasets. We achieve a poor ranking score on the Proteins datasets, although the classification accuracy of the model is competitive with leading results and only falls slightly short of the bulk of other methods. A comparison of the distribution of training accuracy is presented in figure 8 and a ranking based method is presented in Figure 9. 6.2 SCALABILITY We compare global permutation equivariant models with our local permutation equivariant model to assess the improvements in scalability offered by local permutation equivariance. Here we compare the GPU memory required by the model against the average size of graph in the dataset. It is expected that as the computational cost of global methods scales superlinearly with the size of the graph, due to the requirement to treat the entire graph as a single adjacency tensor, that local equivariance will have a lower computational cost as each update only requires local node neighbourhoods to be expressed as adjacency tensors, which are typically much smaller than the size of the graph. Therefore global methods scale with O(n2), for graphs with n nodes, while local methods scale with O(nm2), where m is the number of nodes in a node neighbourhood and typically m n. Figure 4 shows how global and local permutation equivariant models scale with GPU memory usage as the average size of the graphs in the dataset increases. This will allow the LPEGN method to scale to graph datasets that was not possible with global equivariance. 7 FUTURE WORK From Table 1 it is clear that IMDB-M is a dataset for which our method has weaker performance. As stated in Section A.3 between hidden local equivariant graph neural network layers for the experiments in this paper we only make use of order 1 and 2 representations. As it was shown by Maron et al. (2019) that increasing the order of the permutation representation increases the expressivity inline with the k-WL test, the expressivity of our method could be improved through the consideration of higher order permutation representations. Making use of higher order representations, we believe, would improve results on the IMBD-M dataset and therefore makes for an interesting future direction. 8 CONCLUSION We present a graph neural network framework for building models comprising of local permutation equivariant update functions. 
The method presented is general in that it presents a framework for operating on sub-graphs with permutation equivariant convolutions, where a choice of representation space can be made depending on the expressivity required. This maintains expressivity in the update functions by utilising restricted representations, while improving scalability over global permutation equivariant methods by operating on smaller sub-graphs. We show that this method includes many previous published approaches as specific cases. Using a general approach as our framework does makes it easier to build provably expressive graph neural networks without the need to embed structural information about the task at hand, as is done in other methods. Further, we experimentally validate the method using k = 1 to create the sub-graphs and ρ1 ⊕ ρ2 representations for the local update functions on a set of graph classification datasets. This model produces state-of-the-art results on one of the datasets, achieves second best results on two datasets, and is competitive on the remaining four. In addition, ranking the model against existing methods on each dataset shows that our method is one of the strongest performing methods. Furthermore, when compared to global permutation equivariant models our method offers a significant improvement in terms of the GPU memory usage, improving the scalability of the method. A APPENDIX A.1 ISOMORPHIC GRAPHS An example of two isomporhic and two non-isomorphic graphs are shown in Figure 5. To a permutation invariant message passing update function utilising a max pooling aggregation function the isomorphic and non-isomorphic graphs are equivalent when updating the central node. A.2 MATHEMATICAL BACKGROUND Definition 5 A group is a set G with a binary operation ◦ satisfying the following laws: (G0) (Closure law): For all g, h ∈ G, g ◦ h ∈ G (G1) (Associative law): g ◦ (h ◦ k) = (g ◦ h) ◦ k for all g, h, k ∈ G (G2) (Identity law): There exists e ∈ G such that g ◦ e = e ◦ g = g for all g ∈ G (G3) (Inverse law): For all g ∈ G, there exists h ∈ G with g ◦ h = h ◦ g = e Definition 6 A representation of a finite group on a finite-dimensional complex vector space V is a homomorphism ρ → GL(V ) of the group to automorphisms of V (Fulton & Harris, 2013). This allows group elements to be expressed as invertible matrices and the group operation to be matrix multiplication. A.3 MODEL ARCHITECTURE We consider the input graphs as an input feature space that is an order 2 representation. For each local permutation equivariant linear layer we use order 1 and 2 representations as the feature spaces. This allows for projection down from graph to node feature spaces through the basis for ρ2 → ρ1, projection up from node to graph feature spaces through the basis for ρ1 → ρ2, and mappings across the same order representations through ρ2 → ρ2 and ρ1 → ρ1. The final local permutation equivariant linear layer maps to order 0 representations through ρ2 → ρ0 and ρ1 → ρ0 for the task of graph level classification. In addition to the graph layers, we also add 3 MLP layers to the end of the model. Despite these specific choices which were made to provide a baseline of our method for comparison to existing methods the framework we present is much more general and different representation spaces can be chosen. We present the general framework in Figure 6. 
This shows how different permutation representation spaces, ρ1 ⊕ ρ2 ⊕ · · · ⊕ ρi, can be chosen for different layers in the model and how different k values can be chosen when creating the sub-graphs in each layer. A.4 EXPRESSIVITY Figure 7 shows the training accuracy achived by the LPEGN model across a range of datasets. (a, b, c, and d) show that the LPEGN model is able to achieve 100% training accuracy on PTC, Proteins, NCI1, and NCI109 datasets. This demonstrates that the model utilising only order 1- and 2-permutation representations is sufficiently expressive. In addition, (e) shows that the model achieves very close to 100% accuracy on the IMDBB dataset. On the other hand, (f) shows that the model training accuracy plateaus above 70% accuracy for the IMDBM dataset. This highlights the model is not sufficiently expressive to achieve 100% accuracy on this datset. As discussed in Section 7 we belive that utilising higher order permutation representations would make the model more expressive and as a result achieve a higher accuracy on IMDBM. A.5 COMPARISON OF RESULTS In addition to the comparison across datasets in Table 1 Figure 8 shows the training accuracy distribution of the LPEGN method and compares to other methods from Table 1. The multimodal distribution of LPEGN for the PTC dataset highlights why it has a large standard deviation. This is likely a result of the fact that the PTC dataset is very small. Given the poor ranking of the LPEGN method in Table 1, comparing the results to other methods here highlights that the LPEGNN result is competitive. For the NCI1 and NCI109 datasets the distribution of results of our method highlight the strong performance of the method. For IMDBB and IMDBM the distribution of results for the LPEGN method also highlight that it is competative on these datasets. Further, we propose an additional method of comparison, namely the counts of wins of our LPEGN method with other methods and the counts of significant wins. The result of comparing the counts of wins shown in Figure 9 highlights that our method is one of the strongest performing across the range of datasets. Where LPEGN under-performs against other methods this can largely be attributed to weaknesses on the PROTEINS and IMDBM datasets, which we suspect using higher order representations could improve. A.6 CHOICE OF LOCAL NEIGHBOURHOOD We show how this choice of k value will impact the method through analysing the MUTAG dataset and comparing the size of sub-graphs found for different k values, ranging from the most local, k = 1, up to equivalence of a global update, k = 15, shown in Figure 10. A.7 COMPARISON OF LOCAL AND GLOBAL FEATURE SPACES We compare the case of global permutation equivariance to our local permutation equivariance, demonstrating how sub-graphs and the choice of representation is made in Figure 11. A.8 PROOF OF NO LOSS OF EXPRESSIVITY WHEN USING RESTRICTED REPRESENTATIONS Restricting the permutation representation, ρn, from a group G with n nodes to a subgroup H with m nodes yields the restricted representation ρ̃m := ResGH(ρn). The bases for the permutation representation ρn from a set of nodes to a set of nodes is given in Figure 3 and has 2 basis elements. We show that the restricted representation adds 3 more basis elements in Figure 3. From definition 2 there are no edge features associated between the nodes in the sub-graph and the nodes outside of the sub-graph. 
Therefore 2 of the basis elements introduced in the restricted representations are always multiplied by zeros and not required. Further, the extra 3rd basis element introduced is simply weighting the node features not part of the sub-graph by themselves and as our method extracts a sub-graph for each node in the graph this update is subsumed by the sub-graph update of that node. Therefore the restricted representation for our framework is equal to the permutation representation of a lower dimensional space and ρ̃m = ρm. Therefore, the proof of no loss of expressivity when using restricted representations follows from the proof that k-order graph networks are as powerful as k-WL (Maron et al., 2019). A.9 IMPLEMENTING OTHER MODELS WITHIN OUR FRAMEWORK We have re-drawn our model in a step-by-step format to try and highlight the difference to other models and make clear that this is a more general framework for learning permutation equivariant models in Figure 6. In the datasets used, for graph classification benchmark tasks, the input to the model is a graph with node and edge features, this can be represented as 2nd order permutation representation, so the input representation would be j = 2. The convolution can then map from this representation, ρj , to multiple different representation spaces, ρ0 ⊕ ρ1 ⊕ · · · ⊕ ρi. Subsequent convolutions can then map from these multiple permutation representations, ρ0 ⊕ ρ1 ⊕ · · · ⊕ ρi, to multiple different permutation representations, ρ0 ⊕ ρ1 ⊕ · · · ⊕ ρi. The choice of representations used can be made depending on a trade off between expressivity and computational cost, as lower order representation spaces have less expressivity, but also lower computational cost. Local Natural Graph Networks (LNGNs) (de Haan et al., 2020) take the input feature space and embed this into an invariant scalar feature of the edge neighbourhood graph. This is the same as using specific choice k-hop sub-graph creation and permutation representation space for the subgraph convolution. In the case of LNGNs the choice would be k = 1 and mapping the input feature space to representation ρ0 creating a permutation invariant feature space. Then any graph neural network with invariant features can be used, in the paper the choice made is to use a GCN (Kipf & Welling, 2016), which can also be covered by our framework. Here the choice would again be to use k = 1 when creating the subgroups and using a subgraph convolution with representation spaces ρ0 → ρ0. Global Equivariant Graph Networks (EGNs) (Maron et al., 2018) use a choice of k = n, for n-node graphs when creating the sub graphs, which corresponds to not selecting a sub graph and instead operating over the entire graph. They then use the representation space ρ2 → ρ2 mapping from a graph feature space to a graph feature space. Local Permutation Equivariant Graph Networks (LPEGN) (Ours) In our paper we choose to use k = 1 throughout to keep inline with the vast majority of previous work on graph neural networks, but we use a representation space of ρ1 ⊕ ρ2 → ρ1 ⊕ ρ2 in the hidden layers of the model and we note that this was simply a choice that seemed a simple case to present as a comparison with previous work in the benchmark classification task.
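To make this comparison concrete, the short Python sketch below (our own illustration with hypothetical names, not code released with the paper) encodes each of these models as a choice of k-hop sub-graph size and input/output permutation representation orders within the framework.

from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass(frozen=True)
class LayerSpec:
    k: Optional[int]                 # k-hop neighbourhood for sub-graph creation; None = whole graph
    in_orders: Tuple[int, ...]       # orders of the input representations, e.g. (1, 2) for rho_1 (+) rho_2
    out_orders: Tuple[int, ...]      # orders of the output representations

# LNGN-style layer: 1-hop sub-graphs with invariant (order-0) features throughout.
lngn_layer = LayerSpec(k=1, in_orders=(0,), out_orders=(0,))
# Global EGN-style layer: the whole graph is one "sub-graph", graph (order-2) features in and out.
egn_layer = LayerSpec(k=None, in_orders=(2,), out_orders=(2,))
# LPEGN hidden layer as used in this paper: 1-hop sub-graphs with rho_1 (+) rho_2 features.
lpegn_hidden = LayerSpec(k=1, in_orders=(1, 2), out_orders=(1, 2))
# LPEGN final layer: maps rho_1 (+) rho_2 down to order-0 features for graph-level classification.
lpegn_final = LayerSpec(k=1, in_orders=(1, 2), out_orders=(0,))

Under this view, switching between the models above amounts to changing a LayerSpec rather than changing the structure of the update function itself.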
1. What is the focus and contribution of the paper on local permutation equivariant networks? 2. What are the strengths of the proposed approach, particularly in terms of efficiency and performance? 3. Do you have any concerns regarding the novelty of the paper compared to other works in graph convolution? 4. How does the reviewer assess the clarity and relevance of the background information provided in the paper? 5. What are the missing baselines in the paper's experiments, and how do they impact the comparison with other methods? 6. Would including experiments on datasets with well-defined local symmetry, such as mesh data, help validate the expressiveness of the proposed update functions?
Summary Of The Paper Review
Summary Of The Paper
This paper introduces the local permutation equivariant network (LPEGN). Specifically, this work proposes to apply permutation equivariant update functions locally --- i.e., operating in a local neighborhood. The benefit of doing so is that a large graph is handled through sub-graphs, which saves a lot of GPU memory. Moreover, this work handles different sizes of neighborhoods by proposing a heterogeneous weight-sharing mechanism. Weights are shared w.r.t. neighborhood size -- local neighborhoods with the same size share weights. While being flexible to input sizes, the local update function is also expressive. The paper demonstrates its superiority on several graph classification benchmarks.
Review
The proposed method is impressively practical in terms of performance and efficiency. The proposed local permutation equivariant update function is more GPU memory efficient than the global counterpart. Moreover, this work achieves impressive performance on 7 graph classification benchmarks. However, I have several concerns about the paper which I will detail below:
While I appreciate the paper's providing sufficient background about graph neural networks to improve readability, I am not sure I clearly understand the relation of the provided background to the proposed method. For example, in Section 3, I can understand the mechanism of global equivariance, but I am not sure how Section 3 helps elaborate the method. I would probably recommend moving part of that material into the supplementary.
I wonder about the novelty compared with other works on graph convolution. From my viewpoint, graph convolution / message passing networks are also locally permutation-equivariant. To be more specific, LNGN is a method that shares a similar idea of using locally permutation-equivariant update functions. It also shows competing performance in Table 1 of the paper. Is this work essentially an extension of LNGN?
Missing baselines: a GPU memory efficiency comparison. I believe other graph neural networks that employ local update functions also benefit from GPU memory efficiency. I thus recommend the paper also provide a comparison with those methods.
Experiments on a dataset with well-defined local symmetry -- say, mesh data. I understand that this work aims to handle natural graph data, but it would also be very interesting to check the performance of the proposed local permutation equivariant update function on mesh data, because I think it might help validate the expressiveness of the proposed functions.
ICLR
Title Local Permutation Equivariance For Graph Neural Networks Abstract In this work we develop a new method, named locally permutation-equivariant graph neural networks, which provides a framework for building graph neural networks that operate on local node neighbourhoods, through sub-graphs, while using permutation equivariant update functions. Message passing neural networks have been shown to be limited in their expressive power and recent approaches to over come this either lack scalability or require structural information to be encoded into the feature space. The general framework presented here overcomes the scalability issues associated with global permutation equivariance by operating on sub-graphs through restricted representations. In addition, we prove that there is no loss of expressivity by using restricted representations. Furthermore, the proposed framework only requires a choice of k-hops for creating sub-graphs and a choice of representation space to be used for each layer, which makes the method easily applicable across a range of graph based domains. We experimentally validate the method on a range of graph benchmark classification tasks, demonstrating either state-of-the-art results or very competitive results on all benchmarks. Further, we demonstrate that the use of local update functions offers a significant improvement in GPU memory over global methods. 1 INTRODUCTION Many forms of data are naturally structured as graphs such as molecules, bioinformatics, social, or financial and it is therefore of interest to have algorithms which operate over graphs. Machine learning on graphs has received much interest in recent years, with the general framework of a message passing network providing both a useful inductive bias and scalability across a range of domains (Gilmer et al., 2017). However, Xu et al. (2019) show that a model based on a message passing framework with permutation invariant aggregation functions is limited in expressive power. Therefore, there exists many non isomorphic graphs that a model of this form cannot distinguish between. Figure 5 demonstrates two non-isomorphic graphs for which a message passing framework with max pooling would not be able to distinguish between the two graphs. More expressive graph networks exists and a common measure of expressivity is the WeisfeilerLehman (WL) test. One such method of building more expressive networks is based directly on the WL test. Here a neural network architecture is built based on variants of the WL test. Bouritsas et al. (2020) use a permutation invariant local update function, but incorporate permutation equivariant structural information into the feature space. Morris et al. (2019a) build models based on different WL variants that consider local and global connections. Bodnar et al. (2021b) introduce a WL test on simplicial complexes and incorporate this into a message passing scheme. Bodnar et al. (2021a) extend work on simplicial complexes to cell complexes, which subsume simplicial complexes. On the other hand, rather than trying to directly incorporate techniques from WL tests directly into networks other model make use of permutation symmetries to build permutation equivariant graph neural networks (Maron et al., 2018). This model can be built for k-order feature spaces and it was shown by Maron et al. (2019) that such models can distinguish between non-isomorphic graphs as well as the k-WL test. 
Natural graph networks are a different class of graph neural network, where the constraint placed upon the linear layer is that of naturality (de Haan et al., 2020). The naturality constraint says that for each isomorphism class a map must be chosen that is equivariant to automorphisms. In general the task of learning on graphs consists of utilising many graphs of different sizes. Current methods for utilising permutation equivariant graph neural networks require that the graph be represented as an adjacency tensor, which limits there scalability. Furthermore, global natural graph networks also perform computations on entire graph features, which leads to a large computational complexity for large graphs. Local gauge symmetries have been considered to build models with local equivariance (Cohen et al., 2019). This approach improves scalability of models by utilising local update functions, however for graphs we do not have a single local symmetry. Currently this is overcome in the majority of graph neural networks presented by utilising some form of message passing, but, in general, all works use a permutation invariant aggregation function leading to good scalability but poor expressivity. Local natural graph networks attempt to overcome the limited expressivity through placing a local naturality constraint on the message passing and having different message passing kernels on non-isomorphic edges. Through considering graph neural networks from an elementary category theory perspective and making use of aspects of group theory we present a framework for building local permutation equivariant models. This allows us to build a graph neural network model with local update functions that are permutation equivariant by considering restricted representations of the representation space of the whole graph. Further, we prove that this does not cause a loss of expressivity of the model and that this maintains the option to have a k-order feature space that ensures expressivity equal to k-WL test. Also, by constraining the kernel space under restricted representations, a natural weight sharing scheme becomes apparent, namely sharing weights across local graph neighbourhoods of the same degree. The approach of building models with a framework based on group theory makes clear the generality of the approach, where choices of representation space can be made for each convolutional layer without requiring prior information such as structural information to be encoded into the feature space. This framework can also be shown to include other leading methods as specific cases. 2 BACKGROUND 2.1 GRAPH NETWORKS Different graph neural networks express graphs in alternative forms. Generally, for a message passing model, a matrix of node features and a matrix of edge features is combined with a sparse edge index array specifying the connectivity of the graph. In other works, the graph is provided in a dense format, where the graph is given as a adjacency tensor with node and edge features held in one tensor. In this work we present the graph as follows: Definition 1 A Concrete Graph G is a finite set of nodes V(G) ⊂ N and a set of edges E(G) ⊂ V(G)× V(G). The set of node ids may be non-contiguous and we make use of this here as we extract overlapping sub-graphs when performing the local updates. The same underlying graph can be given in may forms by a permutation of the ordering of the natural numbers of the nodes. 
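A minimal Python sketch of Definition 1 (our own illustration; the class and method names are hypothetical and not taken from the paper's code) shows a concrete graph with non-contiguous node ids, and how relabelling the ids by a bijection produces the same underlying graph in a different form.

from dataclasses import dataclass
from typing import Dict, FrozenSet, Tuple

@dataclass(frozen=True)
class ConcreteGraph:
    nodes: FrozenSet[int]              # finite set of node ids, a subset of N, possibly non-contiguous
    edges: FrozenSet[Tuple[int, int]]  # pairs (i, j) with i, j in nodes

    def relabel(self, phi: Dict[int, int]) -> "ConcreteGraph":
        # Relabel node ids by a bijection phi; the result is an isomorphic concrete graph.
        return ConcreteGraph(
            nodes=frozenset(phi[i] for i in self.nodes),
            edges=frozenset((phi[i], phi[j]) for (i, j) in self.edges),
        )

# Node ids need not be contiguous; both objects below describe the same underlying graph.
g = ConcreteGraph(nodes=frozenset({2, 5, 9}), edges=frozenset({(2, 5), (5, 9)}))
g_relabelled = g.relabel({2: 1, 5: 2, 9: 3})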
Definition 2 A sub-Concrete Graph H is created by taking a node i ∈ V(G), and extracting the nodes j ∈ V(G) and edges (i, j) ⊂ V(G)× V(G), such that there is a connection between nodes i and j. Once a sub-concrete graph has been extracted, this same underlying sub-graph could be expressed through different permutations of the underlying numbering of the nodes. For brevity we refer to sub-concrete graphs as subgraphs throughout the paper. Definition 3 A Graph isomorphism, φ : G→ G′ is a bijection between the vertex sets of two graphs G andG′, such that two vertices u and v are adjacent inG if and only if φ(u) and φ(v) are adjacent in G′. This mapping is edge preserving, i.e. satisfies for all (i, j) ∈ V(G)× V(G): (i, j) ∈ E(G)⇐⇒ (φ(i), φ(j)) ∈ E(G′) An isomorphism from the graph to itself is known as an automorphism. Relabelling of the graph by a permutation of the nodes is called a graph isomorphism, where an example of two isomorphic graphs is given in Figure 5. We desire that the linear layers of the graph neural network respect the composition of graph isomorphisms. This requires us to define the feature space of the graphs and how feature spaces of isomorphic graphs are related. 2.2 PERMUTATION REPRESENTATIONS The feature space of the graphs is a vector space V , where a representation of the group G is a homomorphism ρ : G → GL(V ) of G to the group of automorphisms of V (Fulton & Harris, 2013). A map KG between two representations of G is a vector space map. The elements of the group g ∈ G can act on a vector v ∈ V by the representation matrix v → ρ(g)v. The symmetric subspace of the representation is the space of solutions to the constraint ∀g ∈ G : ρ(g)v = v. Here we are considering the symmetries of the symmetric group Sn. This constraint can be solved for different order representations (Maron et al., 2018; Finzi et al., 2021). We present the space of linear layers mapping from k-order representations to k′-order representations in Figure 2. In addition, for the linear map KG, we require that if a graph is passed through KG and then transformed by permutation to an isomorphic graph this result is the same as if a graph is transformed by the same permutation to an isomorphic graph and then passed through KG. In short, this requires that permutation equivariance is satisfied. 2.3 CATEGORY THEORY This section does not provide a complete overview of category theory, nor even a full introduction, but aims to provide a sufficient level of understanding to aid the reader with further sections of the paper, where we believe presenting the comparison between models from a category theory perspective makes more clear the distinctions between them. A category, C, consists of a set of objects, Ob(C), and a set of morphisms (structure-preserving mappings) or arrows, f : A → B, A,B ∈ Ob(C). There is a binary operation on morphisms called composition. Each object has an identity morphism. Categories can be constructed from given ones by constructing a subcategory, in which each object, morphism, and identity is from the original category, or by building upon a category, where objects, morphisms, and identities are inherited from the original category. A functor is a mapping from one category to another that preserves the categorical structure. For two categories C and D a functor F : C → D maps each object A ∈ Ob(C) to an object F (A) ∈ Ob(D) and maps each morphism f : A→ B in C to a morphism F (f) : F (A)→ F (B) in D. Definition 4 A groupoid is a category in which each morphism is invertible. 
A groupoid with only one object is simply a group.
3 GLOBAL EQUIVARIANT GRAPH NETWORKS
3.1 GLOBAL PERMUTATION EQUIVARIANCE
Global permutation equivariant models have been considered by Hartford et al. (2018); Maron et al. (2018; 2019); Albooyeh et al. (2019), with Maron et al. (2018) demonstrating that for order-2 layers there are 15 operations that span the full basis for a permutation equivariant linear layer. These 15 basis elements are shown in Figure 2, with each basis element given by a different color in the map from representation ρ2 → ρ2. Although these methods, when solved for the entire basis space, have expressivity as good as the k-WL test, they operate on the entire graph. Operating on the entire graph features limits the scalability of the methods. In addition to poor scalability, global permutation equivariance appears to be a strong constraint to place upon the model. In the instance where the graph is flattened and an MLP is used to update node and edge features, the model would have n^4 trainable parameters, where n is the number of nodes. On the other hand, a permutation equivariant update has only 15 trainable parameters, and in general 15 ≪ n^4. Viewing a global permutation equivariant graph network from a category theory perspective, there is one object with a collection of arrows representing the elements of the group. Here the arrows or morphisms go both from and to this same single object. The feature space is a functor which maps from a group representation to a vector space. For a global permutation equivariant model the same map is used for every graph.
[Figure: the symmetric group drawn as a one-object groupoid G with arrows e, g1, g2.]
3.2 GLOBAL NATURALITY
Global natural graph networks (GNGN) consider the condition of naturality (de Haan et al., 2020). GNGNs require that for each isomorphism class of graphs there is a map that is equivariant to automorphisms. This naturality constraint is given by the condition ρ′(φ) ◦ KG = KG′ ◦ ρ(φ), which must hold for every graph isomorphism φ : G → G′ and linear map KG. While the global permutation equivariance constraint requires that all graphs be processed with the same map, global naturality allows for different, non-isomorphic, graphs to be processed by different maps and as such is a generalisation of global permutation equivariance. As is the case for global permutation equivariant models, GNGNs scale poorly as the constraint is placed over the entire graph and linear layers require global computations on the graphs. Viewing a GNGN from a category theory perspective, there is a different object for each concrete graph, and together these form a groupoid. Then, there is a morphism or arrow for each graph isomorphism. These can either be automorphisms, if the arrow maps an object to itself, or isomorphisms if the arrow maps to a different object. The feature spaces are functors which map from this graph category to the category of vector spaces. The GNG layer is a natural transformation between such functors, consisting of a different map for each non-isomorphic graph.
[Figure: groupoid of concrete graphs G1, G2, G3 with graph isomorphisms as arrows.]
4 LOCAL EQUIVARIANT GRAPH NETWORKS
Local equivariant models have started to receive attention following the successes of global equivariant models and local invariant models. The class of models that are based on the WL test are not in general locally permutation equivariant in that they still use a message passing model with a permutation invariant update function.
Despite this, many of these models inject permutation equivariant information into the feature space, which improves the expressivity of the models (Bouritsas et al., 2020; Morris et al., 2019a; Bodnar et al., 2021b;a). The information to be injected into the feature space is predetermined in these models by a choice of what structural or topological information to use, whereas our model uses representations of the permutation group, making it a very general model that still guarantees expressivity. In contrast to utilising results from the WL test, covariant compositional networks (CCNs) look at permutation equivariant functions, but they do not consider the entire basis space as was considered in Maron et al. (2018) and instead consider four equivariant operations (Kondor et al., 2018). This means that the permutation equivariant linear layers are not as expressive as those used in the global permutation equivariant layers. Furthermore, in a CCN the node neighbourhood and feature dimensions grow with each layer, which can be problematic for larger graphs and limits their scalability. Another local equivariant model is that of local natural graph networks (LNGN) (de Haan et al., 2020). An LNGN uses a message passing framework, but instead of using a permutation invariant aggregation function, it specifies the constraint that node features transform under isomorphisms of the node neighbourhood and that a different message passing kernel is used on non-isomorphic edges. In practice this leads to little weight sharing in graphs that are quite heterogeneous, and as such the layer is re-interpreted such that a message from node p to node q, k_pq v_p, is given by a function k(G_pq, v_p) of the edge neighbourhood G_pq and feature value v_p at p. Viewing an LNGN from a category theoretic perspective, there is a groupoid of node neighbourhoods, where morphisms are isomorphisms between node neighbourhoods, and a groupoid of edge neighbourhoods, where morphisms are isomorphisms between edge neighbourhoods. In addition, there is a functor mapping from edge neighbourhoods to the node neighbourhood of the start node and a functor mapping similarly but to the tail node of the edge neighbourhood. The node feature spaces are functors mapping from the category of node neighbourhoods to the category of vector spaces. Further, composition of two functors creates a mapping from edge neighbourhoods to the category of vector spaces. An LNGN kernel is a natural transformation between these functors.
[Figure: groupoid of node neighbourhoods N1, N2, N3 and groupoid of edge neighbourhoods E1, E2, E3.]
5 LOCAL PERMUTATION EQUIVARIANCE
A local permutation equivariant graph network (LPEGN) improves upon the scalability of global permutation equivariant models by considering permutation equivariance at lower scales. Here, instead of performing the update function on the entire graph, we perform the update function on node neighbourhoods, as is done in message passing models. Furthermore, while performing the update functions on node neighbourhoods, we maintain improved expressivity through using k-order permutation representations. The intuition behind imposing permutation equivariance on node neighbourhoods rather than the entire graph is that the model can learn expressive features about a part of the sub-graph without requiring knowledge of permutations multiple hops away from the central update node.
This framework generalises global permutation equivariant models as it is compatible with all length scales, meaning that, if the graph structure is used to determine node neighbourhoods, then any k value can be chosen to determine the k-hops from the central update node producing the sub-graph which permutation equivariance is required for. Therefore, if the value chosen for the k-hops is sufficiently large then the layer becomes a global permutation update. The basis functions for different order representation spaces are given with the split into different degrees for a 1-hop node neighbourhood in Figure 1. The method therefore requires a choice of k for the number of hops away from the central node to consider in the local update and we discuss this choice in Section 5.2. In addition, the framework then allows for a choice of weight sharing, which we discuss in Section 5.3. 5.1 RESTRICTED REPRESENTATION Given a graph comprised of n nodes, global equivariant models consider the permutation representation of the permutation group G = Sn, namely the representation ρ : G → GL(Rc). Here we consider local updates on sub-graphs with m nodes, where we are interested in the sub-group H = Sm ≤ Sn. Therefore we can consider the restricted representation of the sub-group Sm, where the restricted representation can be seen as dropping some symmetries from the group Sn. The restricted representation is denoted by ρ̃ := ResGH(ρ) : H → GL(Rc). The global equivariance case using representations, ρ, and the case using restricted representations, ρ̃, are shown in Figure 3. Both figures show a basis mapping from order 1 to order 1 permutation representation. The restricted repre- sentation ResS5S4 drops the permutation symmetry associated to node 5. Dropping the permutation symmetry of node 5 results in 3 additional parameters, one for the update of node 5 based on node 5’s features, another for the update of node 5 based on the features of the other nodes in the graph, and a final parameter for the update of the other nodes in the graph based on node 5’s features. We proove that using restricted representations in our framework has no loss of expressivity in Appendix A.8. 5.2 CHOICE OF LOCAL NEIGHBOURHOOD The LPEGN model framework performs the permutation equivariant update on local sub-graphs, although a choice can be made as to how these sub-graphs are created. One option is the use the underlying graph structure and choose a k value to extract local neighbourhoods that include nodes which are at most k-hops from the central node. This method creates a sub-graph for each node in the graph. Here the choice of the k value can be seen as choosing a length scale for which the permutation symmetry should be exploited over. In other words, choosing a value of k = 1 is the shortest length scale and node features will be updated such that they are permutation equivariant to their 1-hop neighbours, but not equivariant to nodes further away in the graph. On the other hand, choosing a k value sufficiently large will create a model equivalent to global permutation equivariant models, where each update is permutation equivariant to permutations of the entire graph. Throughout this work we choose k = 1 unless otherwise stated to take the most local permutation equivariant updates. We show how this choice of k value will impact the method through analysing the MUTAG dataset in Figure 10. 
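Returning to the restricted representation of Section 5.1, the parameter counting sketched in Figure 3 can be checked numerically. The short NumPy script below (our own illustration, not the authors' code) solves the equivariance constraint PW = WP over all permutation matrices and reports the dimension of the space of order-1 to order-1 equivariant maps: 2 for the full symmetric group, and 5 (i.e. 3 additional basis elements) once the constraint is only imposed for the sub-group fixing one node, matching the discussion above and in Appendix A.8.

import itertools
import numpy as np

def perm_matrix(p):
    # Permutation matrix P with P[p[i], i] = 1, so that (P x)[p[i]] = x[i].
    n = len(p)
    P = np.zeros((n, n))
    for i, target in enumerate(p):
        P[target, i] = 1.0
    return P

def equivariant_dim(perm_matrices, n):
    # Dimension of {W in R^{n x n} : P W = W P for all given P}, found as the null space
    # of the stacked constraints (P kron P - I) vec(W) = 0 (P is orthogonal, so P W P^T = W).
    eye = np.eye(n * n)
    constraints = np.vstack([np.kron(P, P) - eye for P in perm_matrices])
    singular_values = np.linalg.svd(constraints, compute_uv=False)
    return int(np.sum(singular_values < 1e-8))

n = 4
full_group = [perm_matrix(p) for p in itertools.permutations(range(n))]
print(equivariant_dim(full_group, n))        # 2: identity and all-ones, the order-1 basis

# Sub-group fixing the last node, isomorphic to S_{n-1}: the restricted representation.
fixing_last = [perm_matrix(p) for p in itertools.permutations(range(n)) if p[n - 1] == n - 1]
print(equivariant_dim(fixing_last, n))       # 5: the 3 extra basis elements discussed above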
5.3 CHOICE OF WEIGHT SHARING In general when constructing the sub-graphs a variety of different sized sub-graphs are found due to differing degrees of the nodes in the graph. This allows for a further choice, namely the weight sharing method to be used. Given that the permutation equivariance constraint is a strong constraint to place over the linear layers, we perform weight sharing across sub-graphs of the same size. This means that sub-graphs of different sizes do not share weights and can be updated differently. The intuition for this is that sub-graphs of the same size already have some similarity in that they are of the same size, while sub-graphs of a different size are less likely to be similar and hence should be updated differently. Throughout this paper we choose to use weight sharing across local neighbourhoods of the same size degree, although in situations where there is very few local neighbourhoods of a particular size we group these together. 5.4 CHOICE OF REPRESENTATION SPACE In Section 5.1 we considered the restricted representation of a sub-group Sm ≤ Sn and in Section 5.2 we detailed how local sub-graphs are selected. Here we must make a connection between the two to present the representational space used in our LPEGN framework. When focusing in on the nodes that we didn’t drop the permutation symmetry of it can be seen, in Figure 3, that for these nodes the restricted representation is equivalent to the global permutation equivariant representation. Furthermore, given our choice of sub-graph construction we would seek to drop the permutation symmetry from a node in the graph due to the fact it is not connected to the central update node. Therefore the edge features connecting the central node to the node we are dropping the permutation symmetry of are zero. Hence, we are not interested in the additional parameters introduced in the restricted representation connecting the two nodes. Furthermore, as the node we are dropping permutation symmetries for is not connected to the chosen sub-graph we are also not interested in the additional parameters introduced in the restricted representation for this node. As a result, due to the choice of sub-graph construction, the restricted representation for our sub-group has zero features in the position of new parameters introduced and is therefore equivalent to the permutation representation on a lower dimensional space. Therefore where global permutation equivariant updates use representations ρ : G → GL(Rc), our local permutation equivariant model uses representations ρ̃ : H → GL(Rc̄), where c̄ ≤ c. The scheme for creating representations of local neighbourhoods is shown in Figure 1, where some representations of the local neighbourhoods are shown. 5.5 LOCAL PERMUTATION EQUIVARIANT GRAPH NETWORK A LPEGN combines the chosen method of creating sub-graphs as local neighbourhoods with a choice of weight sharing scheme and makes use of permutation representations on these sub-graphs. The process of creating sub-graphs, updating based on the choice of weight sharing using permutation representations, and re-constructing the graph structure is presented in Figure 1. Viewing a LPEGN from a category theoretic perspective, each different size node neighbourhood is a sub-group, H , which is a different object. There are morphisms or arrows for each permutation of the neighbourhood. This forms a groupoid. The sub-group representations are functors from the category of node neighbourhoods to the category of vector spaces. 
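The data flow of such a layer can be sketched as follows in plain NumPy. This is our own simplified illustration of Figure 1, not the released implementation: the per-size weight matrices stand in for the full ρ1 ⊕ ρ2 equivariant basis expansion, and averaging the contributions of overlapping sub-graphs is our simplification. It shows the three steps described above: extracting 1-hop sub-graphs, sharing weights across sub-graphs of the same size, and scattering the updated features back into the graph.

import numpy as np
from collections import defaultdict

def one_hop_subgraphs(adj):
    # For every node, return the array of node indices in its 1-hop neighbourhood (including itself).
    n = adj.shape[0]
    subgraphs = []
    for i in range(n):
        neighbours = np.flatnonzero(adj[i] + adj[:, i])
        subgraphs.append(np.unique(np.concatenate(([i], neighbours))))
    return subgraphs

def lpegn_layer(adj, node_feats, weights_by_size, rng):
    # node_feats: (n, c). weights_by_size maps a sub-graph size m to a (c, c) matrix that
    # stands in for the equivariant update of that size class (weight sharing by size).
    n, c = node_feats.shape
    out = np.zeros_like(node_feats)
    counts = np.zeros(n)
    groups = defaultdict(list)
    for nodes in one_hop_subgraphs(adj):
        groups[len(nodes)].append(nodes)                  # group sub-graphs by size
    for size, members in groups.items():
        if size not in weights_by_size:                   # one shared weight per sub-graph size
            weights_by_size[size] = rng.standard_normal((c, c)) / np.sqrt(c)
        W = weights_by_size[size]
        for nodes in members:
            out[nodes] += node_feats[nodes] @ W           # placeholder for the equivariant update
            counts[nodes] += 1
    return out / np.maximum(counts, 1)[:, None]           # average over overlapping sub-graphs

# Tiny usage example on a 4-node path graph with 8-dimensional node features.
rng = np.random.default_rng(0)
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
x = rng.standard_normal((4, 8))
y = lpegn_layer(adj, x, weights_by_size={}, rng=rng)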
1. What is the focus and contribution of the paper on locally permutation equivariant graph neural networks? 2. What are the strengths of the proposed approach, particularly regarding scalability and performance? 3. What are the weaknesses of the paper, especially regarding motivation and theory? 4. How does the reviewer seek clarification regarding applicability to new graphs and the framework's description?
Summary Of The Paper Review
Summary Of The Paper
The paper introduces the framework of locally permutation equivariant graph neural networks. This framework applies permutation equivariant layers [Maron et al. 2018] to local node neighborhoods by treating them as separate subgraphs and using a weight sharing scheme for subgraphs of the same size. The authors build their framework by discussing the different choices made -- local neighborhoods, weight sharing, and representation space. The authors also provide a category theory point of view of their framework.
Review
Strengths
Scalability - the proposed framework introduces a scalable version of global equivariant graph networks [Maron et al. 2018].
Performance - on the datasets the model has been evaluated on, the proposed model performs relatively well across all of them.
Weaknesses
Motivation - the motivation for the proposed framework is a bit unclear to me. It seems like a less restricted instantiation of local natural graph networks which chooses to work with equivariant layers instead of message passing layers in the local updates, but the claims and justifications feel rather weak and unsupported.
Theory - the paper claims that the framework maintains expressivity or can even achieve improved expressivity (Section 5), but it is not clear relative to which other models, and there are no theoretical results which show that, other than stating it several times in the paper.
Clarifications
Applicability to new graphs - how does the network handle subgraphs of sizes unseen during training?
Framework description - I find it confusing that the framework is only illustrated in a figure which is not close to the text describing it.
ICLR
Title Automated Channel Pruning with Learned Importance
Abstract Neural network pruning allows for significant reduction of model size and latency. However, most of the current network pruning methods do not consider channel interdependencies and a lot of manual adjustments are required before they can be applied to new network architectures. Moreover, these algorithms are often based on hand-picked, sometimes complicated heuristics and can require thousands of GPU computation hours. In this paper, we introduce a simple neural network pruning and fine-tuning framework that requires no manual heuristics, is highly efficient to train (2-6 times speed up compared to NAS-based competitors) and produces comparable performance. The framework contains 1) an automatic channel detection algorithm that groups the interdependent blocks of channels; 2) a non-iterative pruning algorithm that learns channel importance directly from feature maps while masking the coupled computational blocks using Gumbel-Softmax sampling and 3) a hierarchical knowledge distillation approach to fine-tune the pruned neural networks. We validate our pipeline on ImageNet classification, human segmentation and image denoising, creating lightweight and low latency models, easy to deploy on mobile devices. Using our pruning algorithm and hierarchical knowledge distillation for fine-tuning we are able to prune EfficientNet B0, EfficientNetV2 B0 and MobileNetV2 to 75% of their original FLOPs with no loss of accuracy on ImageNet. We release a set of pruned backbones as Keras models - all of them proved beneficial when deployed in other projects.
1 INTRODUCTION
Efforts directed towards the deployment of neural networks on low-performance devices such as mobile phones or TVs have created a demand for smaller and faster models.
This has led to advances in neural network compression techniques, which allow us to minimize existing large-scale architectures and adjust them to fit specific hardware requirements. Some techniques have been especially successful in this area. Neural network quantization approaches (Nagel et al., 2021) not only decreased the size of the models, but also enabled us to utilize specialized computing accelerators like DSPs. Unfortunately, other techniques, such as network pruning (Liu et al., 2020), are not equally effective in low-resource environments.
Early attempts at naive weight pruning introduced sparse computations, which render them inefficient in practical scenarios (Han et al., 2015; Guo et al., 2016). Channel pruning (Li et al., 2016; Liu et al., 2017; 2021a; Herrmann et al., 2020; Liu et al., 2019b) delivers significant improvements in terms of both memory consumption and execution speed, and is the preferred approach if we want to deploy our models on mobile devices.
However, the majority of existing approaches to channel pruning share several drawbacks:
1. Little effort has been made to address channel interdependencies that occur in the majority of the architectures, with Liu et al. (2021a) being a notable exception. Many popular network architectures contain residual connections inspired by ResNet (He et al., 2015). Feature maps added in residual connections must hold the same shapes, which is likely to be violated when channels are removed independently. We refer to channels involved in this kind of dependency as coupled. Automating the process of adding pruning logic to the network while accounting for channel interdependencies is extremely important in practice.
2. Most methods require an expensive and time-consuming fine-tuning process after channels are removed. Some authors use an iterative approach, where channels are removed in a number of steps, and fine-tuning is performed between these steps. Either way, the fine-tuning process often requires a significant number of GPU hours to complete.
3. Channels in any given convolution are considered independently. However, some target platforms, e.g. SNPE (Qualcomm), are optimized for specific numbers of input and output channels, and pruning channels independently can give little to no speed-up.
In order to overcome these issues we introduce an end-to-end channel pruning pipeline which can be deployed on a wide array of neural networks in an automated way. Our main insights are that: (1) Channel
A similar, but more affordable, approach is to periodically prune channels throughout a single training procedure (Liu et al., 2021a; Guo et al., 2020; Chen et al., 2020). Ye et al. (2020) and Hou et al. (2021) point out flaws in the idea of greedy channel removal and propose to selectively restore channels in the pruned network. Liu et al. (2019b) trains an auxiliary neural network to quickly evaluate pruned networks and select the best one using an evolutionary algorithm. Other methods jointly train a neural network and learn importance scores for its channels using a channel gating mechanism. In Chen et al. (2020), this is achieved by randomly enabling and disabling channels during each iteration of the training. Gradient descent was used to update the importance scores in Herrmann et al. (2020); Lin et al. (2020); Ye et al. (2020) and is based on the idea for optimizing hyperparameters in neural architecture search in Liu et al.
(2019a) and Xie et al. (2018). These gradient-based methods rely on the Gumbel-Softmax reparametrization trick (Jang et al., 2016) to enable back-propagation through the gate distributions. Herrmann et al. (2020) propose a variant of such a method where the logits of the channel gates are trainable parameters, as well as a variant where the logits are produced by an auxiliary neural network that accepts a feature map. Selecting channels based on the network input introduces an overhead that is unacceptable on resource-limited devices. Our solution contains a similar idea, but we ensure that the auxiliary networks can be safely removed after training.

Channel coupling. The channel coupling pattern occurs in many modern architectures inspired by ResNet (He et al., 2015), such as MobileNet (Sandler et al., 2018), EfficientNet (Tan & Le, 2019; 2021) or FBNet (Wan et al., 2020). Many studies seem to ignore this issue (Herrmann et al., 2020; Lin et al., 2020; Ye et al., 2020); others resolve it by manually grouping interdependent layers or providing model-specific heuristics (Shao et al., 2021; Hou et al., 2021; Guo et al., 2020; Liu et al., 2021b). Independently of our efforts, an automated solution for grouping channels has been proposed in Liu et al. (2021a). We propose a similar algorithm (see Section 4), and additionally offer an extension for handling concatenations.

Measuring speed-up. Many pruning methods are parametrised by a fraction of channels to prune, either globally or per-layer (Lin et al., 2020; Ye et al., 2020; Herrmann et al., 2020). Overall network FLOPs (the number of floating-point operations) better corresponds to the usual business requirements. In Chen et al. (2020) and Liu et al. (2021a), the maximal FLOPs parameter is included in the stopping criteria and the importance scores of channels are adjusted according to their computation cost. Similarly to Guo et al. (2020), we construct a loss function that introduces a penalty for exceeding the provided FLOPs budget and use it as part of the differentiable importance optimization.

Knowledge distillation. It has been noted that knowledge distillation can perform poorly when there is a large discrepancy in complexity between student and teacher networks (Cho & Hariharan, 2019). Cho & Hariharan (2019) evaluate a step-wise approach, in which intermediate teacher networks are trained by distilling knowledge from the original large teacher, and find it ineffective. Mirzadeh et al. (2020) propose using a teacher assistant to bridge the complexity gap. Hou et al. (2021) apply knowledge distillation to fine-tune the pruned network, but do not address the aforementioned issues. We propose an inverted version of the step-wise approach from Cho & Hariharan (2019), and train our pruned network with increasingly larger teachers. Such chains can be naturally formed for model families like EfficientNet (Tan & Le, 2019) and EfficientNetV2 (Tan & Le, 2021). We also observe that in the case of generic knowledge distillation, the final results can be improved by (even slightly) disturbing the student model with channel pruning before starting the distillation.

3 PRUNING METHOD

The basic idea behind our channel pruning algorithm is to set up a scheme in which the importance of channels is learned from the feature maps generated by convolutions in the neural network.
We assign each channel a score corresponding to its importance; the score is updated at each training step and used to approximate the behavior of the pruned network by appropriate masking (Liu et al., 2017; Herrmann et al., 2020). Similarly to Herrmann et al. (2020), we apply a probabilistic approach where channels in feature maps are masked with samples from random variables with values in (0, 1). This is a continuous relaxation approach to solving a discrete problem. The distributions of these random variables depend on the values of the corresponding logits (which can be thought of as proxies for channel scores and have values in R). These logits are learned during the pruning stage. More precisely, given a feature map of size (B, H, W, C) (B is the batch size, H and W are the spatial dimensions and C is the number of channels) and a logits variable, for each channel separately we sample, using Gumbel-Softmax (Jang et al., 2016), the random variable parametrized by the corresponding logit in logits. We mask the feature map by multiplying it by the sampled values.

We do not consider each feature map individually; instead, we extend our understanding of channels from a single feature map to a series of operations occurring within a network. The intuition is that element-wise operations, like activation functions, propagate channels forward throughout the network, while convolutional layers consume their input channels and create new ones. Pruning sequential models is trivial, but in more complicated cases, like models with residual connections, there exist additional couplings between channels, introduced by operations that accept multiple inputs, e.g. element-wise sum or multiplication (Fig. 2). Because coupled channels must be pruned jointly to ensure valid shapes, we use a single random variable to mask each set of coupled channels (see Section 4 for details about the automatic detection of coupled channels).

Although the logits can be treated as standalone trainable variables, we choose to learn them from the feature maps in a feedback-loop mechanism. This is because the latter approach is faster to train, results in logits which (once converted to probabilities) have lower entropy, and produces better results. Once we decide on the feature maps from which we will learn the optimal logits values, we place simple neural networks called logit predictor modules that take these feature maps as inputs. These modules are built of a 3x3 depthwise convolution followed by a 1x1 convolution and global mean pooling along the spatial dimensions. The output vector of each such module is used to update the value of the corresponding logits variable (using an exponential moving average), as in Figure 2.

The masking operations should always be placed just before the convolution operations that absorb the channels (see Figure 2). The placement of logit predictors is more involved; in cases more complicated than the relatively simple one presented in Figure 2, we follow a simple heuristic and place them after the convolutions with the largest kernel sizes.

During the pruning phase we augment the task-specific loss with an auxiliary latency-based loss. It is based on the expected number of FLOPs in the pruned network, which is computed using all the logits we have attached to the network.
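To make the mechanics above concrete, below is a minimal TensorFlow/Keras sketch of per-channel Gumbel-Softmax masking driven by a logit predictor module. It is an illustration under our reading of the description, not the released implementation: the names (gumbel_softmax_mask, LogitPredictor), the EMA decay of 0.99 and the straight-through routing of gradients to the predictor are our own assumptions.

```python
import tensorflow as tf

def gumbel_softmax_mask(logits, temperature=0.5):
    """Sample a soft keep/drop mask in (0, 1) for every channel.

    `logits` has shape (C,); higher values make a channel more likely to be
    kept. Keep vs. drop is treated as a 2-way categorical variable relaxed
    with Gumbel-Softmax (Jang et al., 2016).
    """
    two_class = tf.stack([logits, tf.zeros_like(logits)], axis=-1)  # (C, 2)
    gumbel = -tf.math.log(-tf.math.log(
        tf.random.uniform(tf.shape(two_class), 1e-6, 1.0 - 1e-6)))
    soft = tf.nn.softmax((two_class + gumbel) / temperature, axis=-1)
    return soft[..., 0]  # "keep" component, shape (C,)

class LogitPredictor(tf.keras.layers.Layer):
    """Learns channel logits from the feature map it masks.

    3x3 depthwise conv -> 1x1 conv -> global mean pool produces one score per
    channel; an exponential moving average of these scores is kept as the
    persistent `logits` variable. After pruning, the module is removed and
    only channels with high logits are retained.
    """

    def __init__(self, channels, ema_decay=0.99, **kwargs):
        super().__init__(**kwargs)
        self.dw = tf.keras.layers.DepthwiseConv2D(3, padding="same")
        self.pw = tf.keras.layers.Conv2D(channels, 1)
        self.logits = tf.Variable(3.0 * tf.ones(channels), trainable=False)
        self.ema_decay = ema_decay

    def call(self, feature_map, training=False):
        # One score per channel, predicted from the feature map itself.
        scores = tf.reduce_mean(self.pw(self.dw(feature_map)), axis=[0, 1, 2])
        if training:
            self.logits.assign(self.ema_decay * self.logits
                               + (1.0 - self.ema_decay) * scores)
        # Route gradients to the predictor while using the EMA value for the
        # gate (a straight-through style trick; this detail is our assumption).
        gate_logits = self.logits + scores - tf.stop_gradient(scores)
        mask = gumbel_softmax_mask(gate_logits)
        # Mask the feature map just before the convolution that absorbs it.
        return feature_map * mask[tf.newaxis, tf.newaxis, tf.newaxis, :]
```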
We train the network weights and the logit predictor modules jointly so that the network can adjust to channels being phased out.

3.1 PRUNING LARGER BLOCKS OF CHANNELS

We allow for blocks of channels (instead of just individual channels) to be treated jointly, so that blocks of a predefined size are chosen or discarded together. This is especially important for platforms where convolutions are optimized with a specific block size of channels in mind; e.g., for SNPE (Qualcomm) this number is 32, and pruning individual channels often makes little sense.

4 LAYER GROUPING ALGORITHM

Although channel coupling has been observed in the literature, the relevant groups of operations are usually established via network-specific heuristics or manual annotation. A notable exception is Liu et al. (2021a), where the problem is described at length and an algorithm for finding the groups is derived. The algorithm is then tested on architectures based on ResNet. However, unlike our solution, it does not support concatenation operations. For clarity, we focus on convolutional neural networks, but the proposed strategy can be extended to other kinds of architectures.

4.1 SOLUTION

To overcome the issues delineated in Section 3 and make channel pruning available for most off-the-shelf architectures, we have developed an algorithm that automatically detects channel interdependencies between the feature maps generated by operations in the network.

To keep track of all the places where channels have to be considered in a synchronised way, we introduce the concept of an orbit. An orbit can be thought of as a subset of operations that are interdependent from the point of view of channel pruning. Operations in the same orbit need to be considered jointly when removing channels. Naively removing channels without taking these interdependencies into account may result in an invalid network. For example, if we remove an output channel from one of the convolutions on the left in Figure 2, the number of channels will no longer match for the Sum operation. A typical network has multiple orbits. It is easiest to understand this concept by seeing how orbits are built, which we delineate in Algorithm 1 below.

First, we fix some notation to make matters more intuitive. All the operations in a typical convolutional neural network can be described as being of the following types:

1. sources are the operations where new channels are created, namely regular convolution layers (not depthwise!) and dense layers;
2. sinks are the operations where channels are absorbed, namely regular convolution layers (not depthwise!) and dense layers;
3. continuators are all the operations with a single input tensor that simply pass the channels forward, e.g., batch normalization, mean pooling, resize, activations;
4. joiners are operations with multiple input tensors of the same shape which join these tensors without altering the shape, namely element-wise addition and multiplication.

Typically, continuator operations are not problematic since they do not alter the channel structure and have a single predecessor and a single output. It is the joiner operations that introduce interdependencies between channels. For brevity, from now on we will only speak of convolutions as sources and sinks, but everything applies just as well to dense layers.
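As an illustration of these four roles, here is a small, hedged sketch of how Keras layers could be bucketed into source-sinks, continuators and joiners when walking the model graph. The mapping below is our own simplification (it treats every non-depthwise Conv2D and Dense layer as both a potential source and sink, and lists only a few continuator types), not an exhaustive rule set.

```python
import tensorflow as tf

SOURCE_SINK = (tf.keras.layers.Conv2D, tf.keras.layers.Dense)
CONTINUATOR = (tf.keras.layers.BatchNormalization, tf.keras.layers.ReLU,
               tf.keras.layers.Activation, tf.keras.layers.AveragePooling2D,
               tf.keras.layers.UpSampling2D)
JOINER = (tf.keras.layers.Add, tf.keras.layers.Multiply)

def layer_role(layer):
    """Classify a Keras layer into the roles used by the grouping algorithm."""
    if isinstance(layer, tf.keras.layers.DepthwiseConv2D):
        return "continuator"   # depthwise convs pass channels through unchanged
    if isinstance(layer, SOURCE_SINK):
        return "source-sink"   # creates new output channels, absorbs its inputs
    if isinstance(layer, JOINER):
        return "joiner"        # couples the channels of all of its inputs
    if isinstance(layer, CONTINUATOR):
        return "continuator"
    return "other"             # e.g. Concatenate, which is handled separately
```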
Note that some sources can be sinks at the same time and vice versa. We refer to operations that are either sinks or sources as source-sinks. To identify all the subgraphs in the network where channels have to be considered jointly, we run an exhaustive-search type algorithm with two distinct phases.

In the first phase we search for extended orbits, in which the coupled operations are brought together. In Algorithm 1 we describe how extended orbits are created. The input is the neural network's directed acyclic graph (DAG). The algorithm amounts to removing all inbound edges from convolution nodes and finding all weakly connected components in the resulting graph. The extended orbits are then these weakly connected components once we restore the inbound edges of the convolution nodes.

The second phase is similar to the first one. For all extended orbits found in phase one we do the following: take the extended orbit, mark the concatenation nodes inside it as sinks (they play a special role, since they group channels from separate sources) and repeat the process. Most notably, we discard extended orbits in which there are concatenation nodes followed by joiner nodes, as handling them would make the whole process much more difficult to implement; we do not prune channels within such orbits. In Figure 3 we give an example of an extended orbit and how it is broken up into final orbits.

Algorithm 1 Searching for extended orbits
Input: network DAG with layers represented as nodes
1: P := {p : p is a path starting and ending with a convolution, with no convolutions inside the path}
2: for each path p in P remove the last node
3: for every distinct node ni on paths in P, create an empty color set for the node Cni = {}
4: X := {x : x is the initial node of a path in P}
5: for x in X do
6:   pick an unused color c
7:   add color c to the color sets of all the nodes on all the paths in P starting in x
8: end for
9: while there exist nodes with multiple colors do
10:  pick a node with multiple colors {c1, c2, ..., ck} at random
11:  for any node in the DAG that has a color in {c2, ..., ck}, switch that color to c1
12: end while

5 PRUNING, FINE-TUNING AND HIERARCHICAL KNOWLEDGE DISTILLATION

5.1 PRUNING STAGE

The pruning workflow is the same for all types of tasks. We first find all final orbits in the network and attach logit predictors. Final orbits determine both which parts of the network are pruned and which of them are pruned jointly. The FLOPs per pixel can be computed automatically (and are differentiable with respect to the channel logits, as in Fig. 2). We can compute the FLOPs for the original network and then set some FLOPs target. In practice we compute kFPP (FLOPs per pixel of the input tensor divided by 1000), to have a value that is independent of the input size. The latency loss is then given by ReLU(kFPP/target_kFPP - 1). We add this loss to the quality loss related to the task, e.g., cross entropy in classification. To avoid an overly aggressive reduction of kFPP, we anneal the loss using exponential decay, so that at the beginning of training the annealing multiplier is 0 and it approaches 1 as the training progresses.

Once the pruning phase is over, we retain or discard output channels in convolutions based on the channel interdependence discovered by applying Algorithm 1 and on the values of the logits variables learned by the logit predictors.
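The latency term above is simple enough to write down directly. The sketch below shows one way the kFPP penalty with annealing could look in TensorFlow; `expected_kfpp` stands for a hypothetical differentiable estimate of FLOPs per pixel obtained from the keep probabilities of all orbits, and the exact form of the annealing schedule is our assumption (the paper only states that the multiplier grows from 0 towards 1).

```python
import tensorflow as tf

def latency_loss(expected_kfpp, target_kfpp):
    """ReLU(kFPP / target_kFPP - 1): zero once the expected cost is under budget."""
    return tf.nn.relu(expected_kfpp / target_kfpp - 1.0)

def annealing_multiplier(step, rate=5e-4):
    """Grows from 0 towards 1 so the FLOPs penalty kicks in gradually."""
    return 1.0 - tf.exp(-rate * tf.cast(step, tf.float32))

def total_loss(task_loss, expected_kfpp, target_kfpp, step):
    # Task-specific quality loss plus the annealed latency penalty.
    return task_loss + annealing_multiplier(step) * latency_loss(expected_kfpp, target_kfpp)
```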
5.2 FINE-TUNING AND HIERARCHICAL KNOWLEDGE DISTILLATION

We propose to fine-tune pruned models with a method we call hierarchical knowledge distillation. This approach relies on increasing the complexity of the teacher network in discrete steps. Given a fine-tuning budget of K GPU hours and N teacher networks, we train the network for K/N GPU hours with each of these teacher networks, starting with the smallest one. Our loss is Lce + 5 Lkd, where Lce is the standard cross entropy loss and Lkd is the distillation loss. Using a higher weight for the Lkd term is crucial to prevent overfitting and produces better results.

Hierarchical knowledge distillation consistently performs much better than just using the original model as the teacher; the comparisons can be seen in Section 6.2. Given an array of models with increasing FLOPs requirements, like EfficientNet (Tan & Le, 2019) and EfficientNetV2 (Tan & Le, 2021), it is possible to cheaply train new models for missing FLOPs values. This may produce better results in terms of the FLOPs/accuracy trade-off and require fewer computational resources.

It is perplexing that using hierarchical knowledge distillation on an unpruned network does not work anywhere near as well. Our intuition is that pruning provides some kind of initial perturbation to the network weights and architecture which proves beneficial from the point of view of gradient descent optimization. Are there other types of model perturbations which boost the effectiveness of this type of knowledge distillation? These are questions we could address in future research. It would also be interesting to see how this approach performs when applied to recent state-of-the-art methods based on neural architecture search (Wang et al., 2021).
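A minimal sketch of this fine-tuning loop is given below. It assumes a classification setting, soft-label distillation at temperature 1, and hypothetical inputs (a list of increasingly large teacher models and a tf.data pipeline); the paper itself only fixes the Lce + 5 Lkd weighting and the schedule over increasingly large teachers.

```python
import tensorflow as tf

def distillation_loss(student_logits, teacher_logits):
    """KL divergence between teacher and student soft predictions."""
    return tf.reduce_mean(tf.keras.losses.kl_divergence(
        tf.nn.softmax(teacher_logits), tf.nn.softmax(student_logits)))

def fine_tune(student, teachers, dataset, epochs_per_teacher, optimizer):
    """Hierarchical KD: train against each teacher in turn, smallest first."""
    ce = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
    for teacher in teachers:                      # e.g. [B0, B1] for a pruned B0
        teacher.trainable = False
        for _ in range(epochs_per_teacher):
            for images, labels in dataset:
                with tf.GradientTape() as tape:
                    s_logits = student(images, training=True)
                    t_logits = teacher(images, training=False)
                    loss = ce(labels, s_logits) + 5.0 * distillation_loss(s_logits, t_logits)
                grads = tape.gradient(loss, student.trainable_variables)
                optimizer.apply_gradients(zip(grads, student.trainable_variables))
    return student
```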
6 EXPERIMENTS

All the experiments we perform adhere to the same schedule: (1) we first run the pruning algorithm with the additional latency loss (usually 1-10 epochs, depending on the task); (2) we then fine-tune the pruned model (without resetting its weights). The experiments for classification on ImageNet are presented in Section 6.2. Experiments for image denoising and human segmentation are presented in Sections A.2.1 and A.2.2, respectively.

6.1 HYPERPARAMETERS FOR THE PRUNING PHASE

For the pruning phase, during which the channels to be removed are chosen, the setup is roughly the same for each task. The logits predictor is always a two-layer network with a 3x3 depthwise convolution followed by a 1x1 convolution and global mean pooling. We set the batch size to 16 and run the training, updating the channel gate distributions as described in Section 3. The initial value of the channel logits is set to 3.0 so that initially there is little to no masking. There is an additional loss that penalizes the entropy of all the logits, so that at the end of the pruning phase the channel enabling probabilities (obtained by applying softmax to the logits) are far away from 0.5. The temperature for Gumbel-Softmax is constant at 0.5.

6.2 CLASSIFICATION ON IMAGENET

We prune EfficientNet B0, EfficientNet B1 (Tan & Le, 2019), MobileNetV2 (Sandler et al., 2018), and EfficientNetV2 (Tan & Le, 2021). We choose these since they are already highly optimized for mobile devices and relatively small. EfficientNetV2 is a recent state-of-the-art architecture optimized for mobile GPUs and DSPs. All the models are taken from their official Keras implementations (https://www.tensorflow.org/api_docs/python/tf/keras/applications), except for EfficientNetV2. Larger networks like VGG19 or the ResNet family have been predominant in the channel pruning literature, but are rarely suitable for resource-limited devices, where the need for optimization is biggest. The phase where channels are chosen usually lasts a little more than a single epoch on ImageNet. We split the ImageNet train data into two parts, leaving about 5% of the data for early stopping.

Following Section 5.2, we use multiple teacher networks. The details are as follows:

• EfficientNet B0: fine-tune the models for 40 epochs with B0 as the teacher and then further fine-tune with B1 for another 40 epochs;
• EfficientNet B1: fine-tune the models for 25 epochs with B1 as the teacher and then further fine-tune with B2 for another 25 epochs;
• MobileNetV2: fine-tune the models for 40 epochs with MobileNetV2 as the teacher and then further fine-tune with EfficientNet B0 for another 40 epochs;
• EfficientNetV2 B0: fine-tune the models for 16 epochs with B0V2 as the teacher, then for 16 epochs with B1V2 as the teacher, and finally for 16 epochs with B2V2 as the teacher.

An interesting thing we noticed is that using knowledge distillation without pruning does not help at all. For example, we tried fine-tuning MobileNetV2 with an EfficientNet B0 teacher right away and the top-1 ImageNet accuracy fell from 71.52% to 71.12%. We conjecture that some kind of initial perturbation is needed for knowledge distillation to work; in our case this perturbation is channel pruning.

The batch size is set to 192 for B0 and MobileNetV2 fine-tuning. For B1 and EfficientNetV2 B0 the batch size is 128. The input image resolution is (224, 224). We use only random crop and flip as augmentations. For training we use one NVidia RTX3090 GPU. For the pruning phase we set the batch size to 16 and, quite importantly, we freeze all batch normalization layers. We use the Adam optimizer for all training runs. During the mask-learning phase the learning rate is set to 0.0001. For fine-tuning we use exponential decay with the learning rate initially set to 0.0001 and the decay rate set to 0.001.

6.2.1 COMPARISONS AND DISCUSSION

Few authors have attempted to prune EfficientNet (Tan & Le, 2019). We can compare our results with Hou et al. (2021), where only one model is presented, which was also fine-tuned with knowledge distillation. We provide a much wider FLOPs spectrum for B0 and prune B1 as well. It is interesting to see that B1 pruned to the FLOPs level of B0 outperforms B0 by a wide margin. The results are in Table 1.

Comparisons for MobileNetV2 are quite difficult due to inconsistencies between the different versions of the model taken by different authors as their baseline. For instance, in Hou et al. (2021) the authors first take an over-pruned backbone which they then proceed to prune. In Liu et al. (2019b) the largest version of MobileNetV2 is taken (585M FLOPs) and then pruned. Some of the authors run the fine-tuning for much longer than we do. Notably, in Ye et al. (2020) the fine-tuning is run on 4 GPUs with batch size 512 for 250 epochs, which is considerably more expensive than our approach. Detailed results are in Table 2 and Figure 5a.
Again, using hierarchical knowledge distillation we are able to fine-tune the model pruned to 75% of the original FLOPs so that it has 0.7% higher accuracy than the original.

When it comes to EfficientNetV2, we are able to outperform the original model's results on ImageNet with the help of hierarchical knowledge distillation: the pruned version of B0 (70% of the FLOPs of the original model) has higher top-1 accuracy than the original. See Table 3 and Figure 5b.

7 CONCLUSION

Using an automated solution to process coupled channels in neural network architectures and a simple scheme to learn channel importance, we are able to prune models with varying architectures for different underlying tasks. For fine-tuning pruned classification networks we use hierarchical knowledge distillation, which produces much better results than just using the original model as a teacher. The whole pruning pipeline requires much less computational resources than some of the state-of-the-art NAS-based solutions for finding efficient FLOPs/accuracy trade-offs, like Wang et al. (2021).

REFERENCES

Zhiqiang Chen, Ting-Bing Xu, Changde Du, Cheng-Lin Liu, and Huiguang He. Dynamical channel pruning by conditional accuracy change for deep neural networks. IEEE Transactions on Neural Networks and Learning Systems, 32(2):799-813, 2020.

J. Cho and B. Hariharan. On the efficacy of knowledge distillation. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), pp. 4793-4801, 2019. doi: 10.1109/ICCV.2019.00489. URL https://doi.ieeecomputersociety.org/10.1109/ICCV.2019.00489.

Ke Gong, Xiaodan Liang, Dongyu Zhang, Xiaohui Shen, and Liang Lin. Look into person: Self-supervised structure-sensitive learning and a new benchmark for human parsing. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), July 2017.

Shaopeng Guo, Yujie Wang, Quanquan Li, and Junjie Yan. DMCP: Differentiable Markov channel pruning for neural networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1539-1547, 2020.

Yiwen Guo, Anbang Yao, and Yurong Chen. Dynamic network surgery for efficient DNNs. arXiv preprint arXiv:1608.04493, 2016.

Song Han, Jeff Pool, John Tran, and William J Dally. Learning both weights and connections for efficient neural networks. arXiv preprint arXiv:1506.02626, 2015.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition, 2015.

Yihui He, Xiangyu Zhang, and Jian Sun. Channel pruning for accelerating very deep neural networks. In Proceedings of the IEEE International Conference on Computer Vision, pp. 1389-1397, 2017.

Charles Herrmann, Richard Strong Bowen, and Ramin Zabih. Channel selection using Gumbel softmax. In European Conference on Computer Vision, pp. 241-257. Springer, 2020.

Yuenan Hou, Zheng Ma, Chunxiao Liu, Zhe Wang, and Chen Change Loy. Network pruning via resource reallocation. CoRR, abs/2103.01847, 2021. URL https://arxiv.org/abs/2103.01847.

Eric Jang, Shixiang Gu, and Ben Poole. Categorical reparameterization with Gumbel-Softmax. arXiv preprint arXiv:1611.01144, 2016.

Hao Li, Asim Kadav, Igor Durdanovic, Hanan Samet, and Hans Peter Graf. Pruning filters for efficient convnets. arXiv preprint arXiv:1608.08710, 2016.

Mingbao Lin, Rongrong Ji, Yan Wang, Yichen Zhang, Baochang Zhang, Yonghong Tian, and Ling Shao. HRank: Filter pruning using high-rank feature map. CoRR, abs/2002.10179, 2020.
URL https://arxiv.org/abs/2002.10179.

Baoyuan Liu, Min Wang, Hassan Foroosh, Marshall Tappen, and Marianna Pensky. Sparse convolutional neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 806-814, 2015.

Hanxiao Liu, Karen Simonyan, and Yiming Yang. DARTS: Differentiable architecture search, 2019a.

Jiayi Liu, Samarth Tripathi, Unmesh Kurup, and Mohak Shah. Pruning algorithms to accelerate convolutional neural networks for edge applications: A survey. arXiv preprint arXiv:2005.04275, 2020.

Liyang Liu, Shilong Zhang, Zhanghui Kuang, Aojun Zhou, Jing-Hao Xue, Xinjiang Wang, Yimin Chen, Wenming Yang, Qingmin Liao, and Wayne Zhang. Group Fisher pruning for practical network compression. In Marina Meila and Tong Zhang (eds.), Proceedings of the 38th International Conference on Machine Learning, volume 139 of Proceedings of Machine Learning Research, pp. 7021-7032. PMLR, 18-24 Jul 2021a. URL https://proceedings.mlr.press/v139/liu21ab.html.

Xiangcheng Liu, Jian Cao, Hongyi Yao, Wenyu Sun, and Yuan Zhang. AdaPruner: Adaptive channel pruning and effective weights inheritance. arXiv preprint arXiv:2109.06397, 2021b.

Zechun Liu, Haoyuan Mu, X. Zhang, Zichao Guo, X. Yang, K. Cheng, and Jian Sun. MetaPruning: Meta learning for automatic neural network channel pruning. In 2019 IEEE/CVF International Conference on Computer Vision (ICCV), pp. 3295-3304, 2019b.

Zhuang Liu, Jianguo Li, Zhiqiang Shen, Gao Huang, Shoumeng Yan, and Changshui Zhang. Learning efficient convolutional networks through network slimming. In Proceedings of the IEEE International Conference on Computer Vision, pp. 2736-2744, 2017.

Jian-Hao Luo, Hao Zhang, Hong-Yu Zhou, Chen-Wei Xie, Jianxin Wu, and Weiyao Lin. ThiNet: Pruning CNN filters for a thinner net. IEEE Transactions on Pattern Analysis and Machine Intelligence, 41(10):2525-2538, 2018.

Seyed Mirzadeh, Mehrdad Farajtabar, Ang Li, Nir Levine, Akihiro Matsukawa, and Hassan Ghasemzadeh. Improved knowledge distillation via teacher assistant. Proceedings of the AAAI Conference on Artificial Intelligence, 34:5191-5198, 2020. doi: 10.1609/aaai.v34i04.5963.

Markus Nagel, Marios Fournarakis, Rana Ali Amjad, Yelysei Bondarenko, Mart van Baalen, and Tijmen Blankevoort. A white paper on neural network quantization. arXiv preprint arXiv:2106.08295, 2021.

Qualcomm. SNPE: Snapdragon Neural Processing Engine. https://developer.qualcomm.com/sites/default/files/docs/snpe/.

Mark Sandler, Andrew G. Howard, Menglong Zhu, Andrey Zhmoginov, and Liang-Chieh Chen. Inverted residuals and linear bottlenecks: Mobile networks for classification, detection and segmentation. CoRR, abs/1801.04381, 2018. URL http://arxiv.org/abs/1801.04381.

Wenqi Shao, Hang Yu, Zhaoyang Zhang, Hang Xu, Zhenguo Li, and Ping Luo. BWCP: Probabilistic learning-to-prune channels for convnets via batch whitening. arXiv preprint arXiv:2105.06423, 2021.

Mennatullah Siam, Heba Mahgoub, Mohamed Zahran, Senthil Yogamani, Martin Jagersand, and Ahmad El-Sallab. MODNet: Motion and appearance based moving object detection network for autonomous driving. In 2018 21st International Conference on Intelligent Transportation Systems (ITSC), pp. 2859-2864, 2018. doi: 10.1109/ITSC.2018.8569744.

Mingxing Tan and Quoc V. Le. EfficientNet: Rethinking model scaling for convolutional neural networks. CoRR, abs/1905.11946, 2019.
URL http://arxiv.org/abs/1905.11946.

Mingxing Tan and Quoc V. Le. EfficientNetV2: Smaller models and faster training. CoRR, abs/2104.00298, 2021. URL https://arxiv.org/abs/2104.00298.

Mingxing Tan, Ruoming Pang, and Quoc V. Le. EfficientDet: Scalable and efficient object detection. CoRR, abs/1911.09070, 2019. URL http://arxiv.org/abs/1911.09070.

Alvin Wan, Xiaoliang Dai, Peizhao Zhang, Zijian He, Yuandong Tian, Saining Xie, Bichen Wu, Matthew Yu, Tao Xu, Kan Chen, Peter Vajda, and Joseph E. Gonzalez. FBNetV2: Differentiable neural architecture search for spatial and channel dimensions. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), June 2020.

Dilin Wang, Chengyue Gong, Meng Li, Qiang Liu, and Vikas Chandra. AlphaNet: Improved training of supernet with alpha-divergence. arXiv preprint arXiv:2102.07954, 2021.

Yuzhi Wang, Haibin Huang, Qin Xu, Jiaming Liu, Yiqun Liu, and Jue Wang. Practical deep raw image denoising on mobile devices, 2020.

Sirui Xie, Hehui Zheng, Chunxiao Liu, and Liang Lin. SNAS: Stochastic neural architecture search. arXiv preprint arXiv:1812.09926, 2018.

Mao Ye, Chengyue Gong, Lizhen Nie, Denny Zhou, Adam Klivans, and Qiang Liu. Good subnetworks provably exist: Pruning via greedy forward selection. CoRR, abs/2003.01794, 2020. URL https://arxiv.org/abs/2003.01794.

A APPENDIX

A.1 LAYER WIDTHS VISUALIZATION

It is quite interesting to see what the layer widths look like after pruning. The patterns that emerge are quite telling. EfficientNets are built of a series of meta-blocks, e.g., 2, 3, ..., 7 in EfficientNet B0, where each meta-block consists of a number of MBConv blocks at the same spatial resolution. It appears that in each such meta-block the most important block is usually the first one, and block importance decays proportionally to the depth of the block inside the meta-block. See Figure 6 in the Appendix.

A.2 FURTHER RESULTS

A.2.1 RAWRGB IMAGE DENOISING

We prune a recent state-of-the-art network for RawRGB image denoising on mobile devices introduced in Wang et al. (2020). We train the models on the SIDD Medium dataset (https://www.eecs.yorku.ca/~kamel/sidd/dataset.php). We first extract 256x256 patches for training and validation and then test the networks on the SIDD validation dataset (https://www.eecs.yorku.ca/~kamel/sidd/benchmark.php). The batch size is set to 16, the learning rate is 0.0001 and we use the Adam optimizer. The loss is the mean absolute error. We train the original model for 150 epochs, prune it, and then train the original model for another 150 epochs. The pruned models are fine-tuned for 150 epochs as well. For comparison we also train smaller (linearly scaled down) versions of the original model from scratch. The results can be seen in Table 4 and Figure 7.

A.2.2 HUMAN SEGMENTATION

For semantic segmentation we use a private dataset for training human segmentation models for real-time prediction in a video bokeh task. This is dictated by the need for superior edge quality, which is missing in publicly available segmentation data. The dataset consists of 120k real image/mask pairs and 50k synthetic ones. Apart from IoU we also compute edge IoU, which pays attention only to the edges of the masks and can be thought of as a proxy for edge quality.
The baseline architecture consists of an EfficientNet B0 (Tan & Le, 2019) backbone, an EfficientDet (Tan et al., 2019) fusion block (modified slightly to allow for easier channel pruning) and a detail branch (Siam et al., 2018) to preserve edge quality. The backbone network is pretrained on ImageNet. We train the original model for 70 epochs, prune it and then fine-tune the pruned models for 50 epochs. The validation results are presented in Table 4. The validation dataset is a split of a modified version of the LIP dataset (Gong et al., 2017), where objects belonging to people (such as handbags, etc.) are also considered part of these people. This is done so that we can train models for the video bokeh effect. The results are in Table 4b and are visualized in Figures 8a and 8b.

Notice that the smallest pruned model is compressed to around 10% of the size of the original one. Even in this extreme compression scenario our approach produces a model with an IoU higher than 90%. IoU starts dropping only after we have removed more than 60% of the original FLOPs. This is an observation which, in our experience, holds for many more segmentation architectures; the one presented here is just one example. Edge IoU starts falling much more quickly, perhaps because we employ no edge-specific loss.
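Since edge IoU is not a standard metric, here is one plausible way it could be computed. The 5-pixel edge band obtained via morphological dilation and erosion is our own assumption; the paper does not specify how the edge region is defined.

```python
import numpy as np
from scipy import ndimage

def edge_iou(pred_mask, gt_mask, band=5):
    """IoU restricted to a band around the ground-truth mask boundary.

    `pred_mask` and `gt_mask` are boolean arrays of the same shape; `band`
    controls the width (in pixels) of the edge region being evaluated.
    """
    # Edge band: pixels near the boundary of the ground-truth mask.
    dilated = ndimage.binary_dilation(gt_mask, iterations=band)
    eroded = ndimage.binary_erosion(gt_mask, iterations=band)
    edge_region = dilated & ~eroded

    inter = np.logical_and(pred_mask, gt_mask) & edge_region
    union = np.logical_or(pred_mask, gt_mask) & edge_region
    return inter.sum() / max(union.sum(), 1)
```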
1. What is the focus of the paper, and what are its contributions to neural network pruning? 2. What are the strengths and weaknesses of the proposed hierarchical knowledge distillation method? 3. How does the reviewer assess the novelty and effectiveness of the method compared to prior works? 4. Are there any concerns or suggestions regarding the experimental design and comparisons with other methods? 5. Can the author provide more analysis and discussion on the differences between the two parts of the proposed method?
Summary Of The Paper Review
Summary Of The Paper
This paper proposes a hierarchical knowledge distillation method for neural network pruning. Experiments demonstrate that the whole pruning pipeline requires much less computational resources than some of the state-of-the-art NAS-based solutions for finding efficient FLOPs/accuracy trade-offs.

Review
The discussion of related work on channel pruning is insufficient, with only six related works covered, and the authors should summarize the differences with them. Hierarchical knowledge distillation seems to be a conventional approach and should not be counted as a contribution of this work. In the experiments, the authors should verify the effectiveness of the proposed method by comparing it with a broader range of channel pruning and knowledge distillation methods. The authors should also analyze, in an ablation study, how the two parts of the method (the new pruning algorithm and the hierarchical knowledge distillation) each differ from other methods.
Title Automated Channel Pruning with Learned Importance Abstract Neural network pruning allows for significant reduction of model size and latency. How1 ever, most of the current network pruning methods do not consider channel interdepen2 dencies and a lot of manual adjustments are required before they can be applied to new 3 network architectures. Moreover, these algorithms are often based on hand-picked, some4 times complicated heuristics and can require thousands of GPU computation hours. In 5 this paper, we introduce a simple neural network pruning and fine-tuning framework that 6 requires no manual heuristics, is highly efficient to train (2-6 times speed up compared to 7 NAS-based competitors) and produces comparable performance. The framework contains 8 1) an automatic channel detection algorithm that groups the interdependent blocks of 9 channels; 2) a non-iterative pruning algorithm that learns channel importance directly from 10 feature maps while masking the coupled computational blocks using Gumbel-Softmax 11 sampling and 3) a hierarchical knowledge distillation approach to fine-tune the pruned 12 neural networks. We validate our pipeline on ImageNet classification, human segmentation 13 and image denoising, creating lightweight and low latency models, easy to deploy on 14 mobile devices. Using our pruning algorithm and hierarchical knowledge distillation for 15 fine-tuning we are able to prune EfficientNet B0, EfficientNetV2 B0 and MobileNetV2 16 to 75% of their original FLOPs with no loss of accuracy on ImageNet. We release a set 17 pruned backbones as Keras models all of them proved beneficial when deployed in other 18 projects. 19 N/A Neural network pruning allows for significant reduction of model size and latency. How-1 ever, most of the current network pruning methods do not consider channel interdepen-2 dencies and a lot of manual adjustments are required before they can be applied to new3 network architectures. Moreover, these algorithms are often based on hand-picked, some-4 times complicated heuristics and can require thousands of GPU computation hours. In5 this paper, we introduce a simple neural network pruning and fine-tuning framework that6 requires no manual heuristics, is highly efficient to train (2-6 times speed up compared to7 NAS-based competitors) and produces comparable performance. The framework contains8 1) an automatic channel detection algorithm that groups the interdependent blocks of9 channels; 2) a non-iterative pruning algorithm that learns channel importance directly from10 feature maps while masking the coupled computational blocks using Gumbel-Softmax11 sampling and 3) a hierarchical knowledge distillation approach to fine-tune the pruned12 neural networks. We validate our pipeline on ImageNet classification, human segmentation13 and image denoising, creating lightweight and low latency models, easy to deploy on14 mobile devices. Using our pruning algorithm and hierarchical knowledge distillation for15 fine-tuning we are able to prune EfficientNet B0, EfficientNetV2 B0 and MobileNetV216 to 75% of their original FLOPs with no loss of accuracy on ImageNet. We release a set17 pruned backbones as Keras models - all of them proved beneficial when deployed in other18 projects.19 1 INTRODUCTION20 Efforts directed towards deployment of neural networks on low-performance devices such as mobile phones21 or TVs, created a demand for smaller and faster models. 
This has led to advances in neural network22 compression techniques, which allow us to minimize existing large-scale architectures and adjust them to23 fit specific hardware requirements. Some techniques have been especially successful in this area. Neural24 network quantization approaches (Nagel et al., 2021) not only decreased the size of the models, but also25 enabled us to utilize specialized computing accelerators like DSPs. Unfortunately, other techniques, such as26 network pruning (Liu et al., 2020), are not equally effective in low-resource environments.27 Early attempts of naive weight pruning introduced sparse computations, which render them inefficient in28 practical scenarios (Han et al., 2015; Guo et al., 2016). Channel pruning (Li et al., 2016; Liu et al., 2017;29 2021a; Herrmann et al., 2020; Liu et al., 2019b) delivers significant improvements in terms of both memory30 consumption and execution speed, and is the preferred approach if we want to deploy our models on mobile31 devices.32 However, the majority of existing approaches to channel pruning share several drawbacks:33 1. Little effort has been made to address channel interdependencies that occur in the majority of the34 architectures, with Liu et al. (2021a) being a notable exception. Many popular network architectures35 contain residual connections inspired by ResNet (He et al., 2015). Feature maps added in residual36 connections must hold the same shapes, which is likely to be violated when channels are removed37 independently. We refer to channels involved in this kind of dependency as coupled. Automating38 the process of adding pruning logic to the network in consideration of channel interdependencies is39 extremely important in practical considerations.40 2. Most methods require an expensive and time-consuming fine-tuning process after channels are41 removed. Some authors use an iterative approach, where channels are removed in a number of steps,42 and fine-tuning is performed between these steps. Either way, the fine-tuning process often requires43 a significant number of GPU hours to complete.44 3. Channels in any given convolution are being considered independently. However, some target45 platforms, e.g. SNPE (Qualcomm), are optimized for specific numbers of input and output channels46 and pruning channels independently can give little to no speed-up.47 In order to overcome these issues we introduce an end-to-end channel pruning pipeline which can be48 deployed on a wide array of neural networks in an automated way. Our main insights are that: (1) Channel49 Similar, but a more affordable approach, is to periodically prune channels throughout a single training90 procedure (Liu et al., 2021a; Guo et al., 2020; Chen et al., 2020). Ye et al. (2020) and Hou et al. (2021)91 point out flaws in the idea of greedy channel removal and propose to selectively restore channels in the92 pruned network. Liu et al. (2019b) trains an auxiliary neural network to quickly evaluate pruned networks93 and select the best one using an evolutionary algorithm. Other methods jointly train a neural network and94 learn importance scores for its channels using channel gating mechanism. In (Chen et al., 2020), this is95 achieved by randomly enable and disable channels during each iteration of the training. Gradient descent96 was used to update the importance scores in Herrmann et al. (2020); Lin et al. (2020); Ye et al. (2020) and is97 based on the idea for optimizing hyperparameters in neural architecture search in Liu et al. 
(2019a) and Xie98 et al. (2018). These gradient-based methods rely on Gumbel-Softmax reparametrization trick (Jang et al.,99 2016) to enable back-propagating through the gates distribution. Herrmann et al. (2020) proposes a variant100 of such a method where the logits of the channel gates are trainable parameters, as well as a variant where101 the logits are produced by an auxiliary neural network that accepts a feature map. Selecting channels based102 network input introduces an overhead that is unacceptable on resource-limited devices. Our solution contains103 a similar idea, but we ensured that the auxiliary networks can be safely removed after the training.104 Channel coupling. The channel coupling pattern occurs in many modern architectures inspired by ResNet105 (He et al., 2015), such as MobileNet (Sandler et al., 2018), EfficientNet (Tan & Le, 2019; 2021) or FBNet106 (Wan et al., 2020). Many studies seem to ignore this issue (Herrmann et al., 2020; Lin et al., 2020; Ye et al.,107 2020); other resolve this issue by manually grouping interdependent layers or providing model-specific108 heuristics (Shao et al., 2021; Hou et al., 2021; Guo et al., 2020; Liu et al., 2021b). Independently to our109 efforts, an automated solution for grouping channels has been proposed in Liu et al. (2021a). We propose a110 similar algorithm (see section 4), and additionally offer an extension for handling concatenations.111 Measuring speed-up. Many pruning methods are parametrised by a fraction of channels to prune, either112 globally or per-layer (Lin et al., 2020; Ye et al., 2020; Herrmann et al., 2020). Overall network FLOPs1113 better corresponds to the usual business requirements. In Chen et al. (2020) and Liu et al. (2021a), the114 maximal FLOPs parameter is included in their stopping criteria and importance scores of channels are115 adjusted according to their computation cost. Similarly to Guo et al. (2020), we construct a loss function that116 introduce a penalty for exceeding the provided FLOPs budget and use it as a part differentiable importance117 optimization.118 Knowledge distillation. It has been noted that Knowledge distillation can perform poorly when there is119 a large discrepancy in complexity between student and teacher networks (Cho & Hariharan, 2019). Cho120 & Hariharan (2019) evaluate a step-wise approach, in which the intermediate teacher networks are trained121 by distilling knowledge from the original large teacher and then find it ineffective. Mirzadeh et al. (2020)122 propose using a teacher assistant to bridge the complexity gap. Hou et al. (2021) apply knowledge distillation123 to fine-tune pruned network, but do not address aforementioned issues. We propose an inverted version of124 the step-wise approach from Cho & Hariharan (2019), and train train our pruned network with increasingly125 larger teachers. Such chains can be naturally formed for model families like EfficientNet (Tan & Le, 2019)126 and EfficientNetV2 (Tan & Le, 2021). We also observe that in case of generic knowledge distillation, the127 final results can be improved by (even slightly) disturbing the student model with channel pruning before128 starting the distillation.129 3 PRUNING METHOD130 The basic idea behind our channel pruning algorithm is to set up a scheme in which the importance of131 channels is being learned from the feature maps generated by convolutions in neural networks. 
We assign132 each channel a score corresponding to its importance that is updated at each training step and used to133 approximate behavior of the pruned network by appropriate masking (Liu et al., 2017; Herrmann et al.,134 2020). Similarly to Herrmann et al. (2020) we apply a probabilistic approach where channels in feature135 maps are masked with samples from random variables with values in (0, 1). This is a continuous relaxation136 approach to solving a discrete problem. The distributions of these random variables depend on the values of137 corresponding logits (which can be though of as proxies for channel scores and have values in R). These138 logits are learned during the pruning stage. More precisely, given a feature map of size (B,H,W,C) (B139 is batch size, H and W are spatial dimension and C is the number of channels) and a logits variable, for140 each channel separately we sample — using Gumbel-Softmax (Jang et al., 2016) — the random variable141 parametrized by the corresponding logit in logits. We mask the feature map by multiplying it by the142 sampled values.143 We do not consider each feature map individually — instead, we extend our understanding of channels from144 a single feature map to a series of operations occurring within a network. The intuition is that element-wise145 operations, like activation functions, propagate channels forward throughout the network, while convolutional146 layers consume their input channels and create new ones. Pruning sequential models is trivial but in more147 complicated cases, like models with residual connections, there exist additional couplings between channels,148 introduced by operations that accept multiple inputs, e.g. element-wise sum, multiplication (Fig. 2). Because149 1a number of floating-point operations coupled channels must be pruned jointly to ensure valid shapes, we use a single random variable to mask150 each set of coupled channels (see Section 4 for details about automatic detection of coupled channels).151 Although logits can be treated as standalone trainable variables, we choose to learn them from the feature152 maps in a feedback-loop mechanism. This is because the latter approach is faster to train, results in logits153 which (once converted to probabilities) have lower entropy and produces better results. Once we decide on154 the feature maps from which we will learn the optimal logits values, we place simple neural networks called155 logit predictor modules that take these feature maps as inputs. These modules are build of 3x3 depthwise156 convolution followed by 1x1 convolution and global mean pooling along spatial dimensions. The output157 output vector of each such module is later used to update the value of the corresponding logits variable158 (using exponential moving average) as in Figure 2.159 The masking operations should always be placed just before the convolution operations that absorb the160 channels (see Figure 2). The placement of logit predictors is more involved and in cases more complicated161 than the relatively simple one presented in Figure 2, we choose to follow a simple heuristic to place them162 after convolutions with largest kernel sizes.163 During the pruning phase we augment the task-specific loss with an auxiliary latency-based loss. It is based164 on the expected number of FLOPs in the pruned network, which is computed by using all the logits we have165 attached to the network. 
We train network weights and logit predictor modules jointly so that the network166 can adjust to channels being phased out.167 3.1 PRUNING LARGER BLOCKS OF CHANNELS168 We allow for blocks of channels (instead of just individual channels) to be treated jointly, so that blocks169 of a predefined size will be chosen or discarded together. This is especially important for platforms where170 convolutions are optimized with a specific block size o channels in mind, e.g., for SNPE (Qualcomm) this171 number is 32 and pruning individual channels often makes little sense.172 4 LAYER GROUPING ALGORITHM173 Although channel coupling has been observed in the literature, relevant groups of operations seem to be174 usually established via network-specific heuristics or manual annotation. A notable exception is Liu et al.175 (2021a) where the problem is described at length and an algorithm for finding the groups is derived. The176 algorithm is then tested on architectures based on ResNet. However, unlike our solution, it does not support177 concatenation operations. For clarity, we focus on convolutional neural networks, but the proposed strategy178 can be extended to other kinds of architectures.179 4.1 SOLUTION180 To overcome the issues delineated in Section 3 and make channel pruning available for most off-the-181 shelf architectures we have developed an algorithm that is capable of automatically detecting channel182 interdependencies between feature maps generated by operations in the network.183 To keep track of all the places where channels have to be considered in a synchronised way, we introduce the184 concept of an orbit. An orbit can be thought as subset of operations that are interdependent from the point of185 view of channel pruning. Operations in the same orbit need to be considered jointly when removing channels.186 Naively removing channels without taking into account these interdependencies may result in an invalid187 network. For example, if we remove an output channel from one of the convolutions on the left in Figure 2,188 the number of channels will no longer match for the Sum operation. A typical network has multiple orbits.189 It is easiest to understand this concept by seeing how orbits are build, which we delineate in Algorithm 1190 below.191 First, we fix some notation to make matters more intuitive. All the operations in a typical convolutional192 neural network can be described as being of the following types:193 1. sources are the operation where new channels are being created, namely regular convolution layers194 (not depthwise!) and dense layers;195 2. sinks are the operation where channels are being absorbed, namely regular convolution layers (not196 depthwise!) and dense layers;197 3. continuators are all the operations with a single input tensor that simply pass on the channels198 forward, e.g., batch normalization, mean pooling, resize, activations;199 4. joiners are operations with multiple input tensors of the same shape which join these tensors200 without altering the shape, namely element-wise addition and multiplication;201 Typically, continuator operations are not problematic since they do not alter the channels structure and have202 a single predecessor and a single output. It is the joiner operations that introduce interdependencies between203 channels. For brevity, from now on we will only speak of convolutions as sources and sinks, but everything204 applies just as well to dense layers.205 Note that some sources can be sinks at the same time and vice versa. 
We refer to operations that are either206 sinks or sources as source-sinks. To identify all the subgraphs in the network where channels have to be207 considered jointly we run an exhaustive-search type algorithm which has two distinct phases:208 In the fist phase we search for extended orbits, where the coupled operations are brought together. In209 Algorithm 1 we describe how extended orbits are created. The input is a neural network directed acyclic210 graph (DAG). The algorithm amounts to removing all inbound edges from convolution nodes and finding all211 weakly connected components in the resulting graph. The extended orbits are then these weakly connected212 components once we restore the inbound edges in convolution nodes.213 The second phase is similar to the first one. For all extended orbits found in phase one we do the following:214 take the extended orbit and then mark concatenation nodes (which play a special role, since they group215 channels from separate sources) inside as sinks and repeat the process. Most notably, we discard extended216 orbits in which there are concatenation nodes followed by joiner nodes, as it makes the whole process much217 more difficult to implement. We do not prune channels within such orbits. In Figure 3 we give an example of218 an extended orbit and how is broken up into final orbits.219 Algorithm 1 Searching for extended orbits Input: network DAG with layers represented as nodes 1: P := {p : p is a path starting and ending with a convolution with no convolutions inside the path } 2: for each path p in P remove the last node 3: for every distinct node ni on paths in P , create an empty color set for the node Cni = {} 4: X := {x : x is the initial node of a path in P } 5: for x in X do 6: pick an unused color c 7: add color c to color sets of all the nodes on all the paths in P starting in x 8: end for 9: while there exist nodes with multiple colors do 10: pick a node with multiple colors {c1, c2, . . . , ck} at random 11: if any node in the DAG has a color in {c2, . . . , ck} switch the color to c1 12: end while 5 PRUNING, FINE-TUNING AND HIERARCHICAL KNOWLEDGE DISTILLATION220 5.1 PRUNING STAGE221 The pruning workflow is the same for all types of tasks. We first find all final orbits in the network and attach222 logit predictors. Final orbits determine both: which parts of the network are being pruned and which of them223 are pruned jointly. The FLOPs per pixel can be automatically computed (and are differentiable with respect224 to the channel logits as in (Fig. 2). We can compute FLOPs for the original network and then set some225 FLOPs target. In practice we compute kFPP (FLOPs per pixel of the input tensor divided by 1000), to have226 a value that is independent of the input size. The latency loss is then given by ReLU(kFPP/target_kFPP−1).227 We add this loss to the quality loss related to the task, e.g., cross entropy in classification. To avoid an228 overly aggressive reduction of kFPP , we anneal the loss using exponential decay so that at the beginning of229 training the annealing multiplier is 0. and approaches 1. as the training progresses.230 Once the pruning phase is over we retain or discard output channels in convolutions based on channel231 interdependence discovered by applying Algorithm 1 and the values of logits variables learned by logit232 predictors.233 5.2 FINE-TUNING AND HIERARCHICAL KNOWLEDGE DISTILLATION234 We propose to fine-tune pruned models with a method we call hierarchical knowledge distillation. 
This235 approach relies on increasing the complexity of the teacher network in discrete steps. Given a fine-tuning236 budget of K GPU hours, and N teacher networks we train the network for K/N GPU hours with each of237 these teacher networks, starting with the smallest one. Our loss is Lce + 5Lkd where Lce is the standard238 cross entropy loss and Lkd is the distillation loss. Using higher weight term for the Lkd is crucial to prevent239 overfitting and produce better results.240 Hierarchical knowledge distillation consistently performs much better than just using the original model as241 the teacher. The comparisons can be seen in Section 6.2. Given an array of models with increasing FLOPs242 requirements, like EfficientNet Tan & Le (2019) and EfficientNetV2 Tan & Le (2021), it is possible to cheaply243 train new models for missing FLOPs values. This may produce better results in terms of FLOPs/accuracy244 trade-off and require less computational resources.245 It is perplexing that trying to use hierarchical knowledge distillation on an unpruned network does not work246 anywhere near as well. Our intuition is that pruning provides some kind of initial perturbation to network247 weights and architecture which prove beneficial from the point of view of gradient descent optimization.248 Are there any other types of model perturbations which boost the effectiveness of this type of knowledge249 distillation? These are the questions we could try to address as our future research. It would be also250 interesting to see how this approach performs when applied to recent state-of-the-art methods based on251 neural architecture search Wang et al. (2021).252 6 EXPERIMENTS253 All the experiments we perform adhere to the same schedule: (1) We first run the pruning algorithm with254 additional latency losses (usually 1-10 epochs, depending on the task). (2) We then fine-tune the pruned255 model (without resetting its weights). The experiments for classification on ImageNet are presented in256 Section 6.2. Experiments for image denoising and human segmentation are presented in Sections A.2.1 and257 A.2.2, respectively.258 6.1 HYPERPARAMETERS FOR THE PRUNING PHASE259 For the pruning phase, during which channels to be removed are being chosen, the setup is roughly the same260 for each task. The logits predictor is always a two layer network with 3× 3 depthwise convolution followed261 by 1× 1 convolution and global mean pooling. We set the batch size to 16 and run the training updating the262 channel gates distributions as described in section 3. The initial value of channel logits is set to 3.0 so that263 initially there little to no masking. There is an additional loss that penalizes the entropy of all the logits so264 that at the end of the pruning phase the channel enabling probabilities (which we get by applying softmax to265 logits) are far away from 0.5. The temperature for Gumbel-Softmax is constant - 0.5.266 6.2 CLASSIFICATION ON IMAGENET267 We prune EfficientNet B0, EfficientNet B1 (Tan & Le, 2019), MobileNetV2 (Sandler et al., 2018), and268 EfficientNetV2 (Tan & Le, 2021). We choose these since they are already highly optimized for mobile devices269 and relatively small. EfficientNetV2 is a recent state-of-the-art architecture optimized for mobile GPUs and270 DSPs. All the models are taken from their official Keras implementations2 except for EfficientNetV2. 
6 EXPERIMENTS

All the experiments we perform adhere to the same schedule: (1) we first run the pruning algorithm with additional latency losses (usually 1-10 epochs, depending on the task); (2) we then fine-tune the pruned model (without resetting its weights). The experiments for classification on ImageNet are presented in Section 6.2. Experiments for image denoising and human segmentation are presented in Sections A.2.1 and A.2.2, respectively.

6.1 HYPERPARAMETERS FOR THE PRUNING PHASE

For the pruning phase, during which the channels to be removed are chosen, the setup is roughly the same for each task. The logits predictor is always a two-layer network with a 3×3 depthwise convolution followed by a 1×1 convolution and global mean pooling. We set the batch size to 16 and run the training, updating the channel gate distributions as described in Section 3. The initial value of the channel logits is set to 3.0, so that initially there is little to no masking. There is an additional loss that penalizes the entropy of all the logits, so that at the end of the pruning phase the channel enabling probabilities (which we get by applying softmax to the logits) are far away from 0.5. The temperature for Gumbel-Softmax is kept constant at 0.5.
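To illustrate the gating mechanism these hyperparameters refer to, here is a minimal sketch of how a feature map could be masked with Gumbel-Softmax channel gates during the pruning phase. The two-logit parameterization and the function names are illustrative assumptions, not the paper's exact implementation.

```python
import tensorflow as tf

def sample_channel_mask(channel_logits, temperature=0.5):
    # channel_logits: shape (C,), one learned logit per channel (or block of channels).
    # Each channel is a relaxed two-class choice [keep, drop] with the "drop" logit fixed
    # at 0, so an initial logit of 3.0 gives a keep probability close to 1 (little masking).
    logits2 = tf.stack([channel_logits, tf.zeros_like(channel_logits)], axis=-1)  # (C, 2)
    uniform = tf.random.uniform(tf.shape(logits2), minval=1e-9, maxval=1.0)
    gumbel = -tf.math.log(-tf.math.log(uniform))
    soft_one_hot = tf.nn.softmax((logits2 + gumbel) / temperature, axis=-1)
    return soft_one_hot[..., 0]  # relaxed "keep" gate per channel, in (0, 1)

def mask_feature_map(feature_map, channel_logits, temperature=0.5):
    # feature_map: (B, H, W, C); the sampled gates are broadcast over batch and space.
    gates = sample_channel_mask(channel_logits, temperature)
    return feature_map * tf.reshape(gates, [1, 1, 1, -1])
```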
6.2 CLASSIFICATION ON IMAGENET

We prune EfficientNet B0, EfficientNet B1 (Tan & Le, 2019), MobileNetV2 (Sandler et al., 2018), and EfficientNetV2 (Tan & Le, 2021). We choose these since they are already highly optimized for mobile devices and relatively small. EfficientNetV2 is a recent state-of-the-art architecture optimized for mobile GPUs and DSPs. All the models except EfficientNetV2 are taken from their official Keras implementations (https://www.tensorflow.org/api_docs/python/tf/keras/applications). Larger networks like VGG19 or the ResNet family have been predominant in the channel pruning literature, but they are rarely suitable for resource-limited devices, where the need for optimization is biggest. The phase where channels are chosen usually lasts a little more than a single epoch on ImageNet. We split the ImageNet training data into two parts, leaving about 5% of the data for early stopping.

Following Section 5.2, we use multiple teacher networks. The details are as follows:

• EfficientNet B0: fine-tune the models for 40 epochs with B0 as teacher and then further fine-tune with B1 for another 40 epochs;
• EfficientNet B1: fine-tune the models for 25 epochs with B1 as teacher and then further fine-tune with B2 for another 25 epochs;
• MobileNetV2: fine-tune the models for 40 epochs with MobileNetV2 as teacher and then further fine-tune with EfficientNet B0 for another 40 epochs;
• EfficientNetV2 B0: fine-tune the models for 16 epochs with B0V2 as teacher, then for 16 epochs with B1V2 as teacher, and finally for 16 epochs with B2V2 as teacher.

An interesting thing we noticed is that using knowledge distillation without pruning does not help at all. For example, we tried fine-tuning MobileNetV2 with an EfficientNet B0 teacher right away, and top-1 ImageNet accuracy fell from 71.52% to 71.12%. We conjecture that some kind of initial perturbation is needed for knowledge distillation to work; in our case this perturbation is channel pruning.

The batch size is set to 192 for B0 and MobileNetV2 fine-tuning; for B1 and EfficientNetV2 B0 the batch size is 128. The input image resolution is (224, 224). We use only random crop and flip as augmentations. For training we use one NVidia RTX3090 GPU. For the pruning phase we set the batch size to 16 and, quite importantly, we freeze all batch normalization layers. We use the Adam optimizer for all training runs. During the mask-learning phase the learning rate is set to 0.0001. For fine-tuning we use exponential decay with the learning rate initially set to 0.0001 and the decay rate set to 0.001.

6.2.1 COMPARISONS AND DISCUSSION

Few authors have attempted to prune EfficientNet (Tan & Le, 2019). We can compare our results with Hou et al. (2021), where only one model is presented, which was also fine-tuned with knowledge distillation. We provide a much wider FLOPs spectrum for B0 and prune B1 as well. It is interesting to see that B1 pruned to the FLOPs level of B0 outperforms B0 by a wide margin. The results are in Table 1.

Comparisons for MobileNetV2 are quite difficult due to the inconsistencies between the different versions of the model taken by different authors as their baseline. For instance, in Hou et al. (2021) the authors first take an over-pruned backbone which they proceed to prune. In Liu et al. (2019b) the largest version of MobileNetV2 (585M FLOPs) is taken and then pruned. Some of the authors run the fine-tuning for much longer than we do. Notably, in Ye et al. (2020) the fine-tuning is run on 4 GPUs with batch size 512 for 250 epochs, which is considerably more expensive than our approach. Detailed results are in Table 2 and Figure 5a. Again, using hierarchical knowledge distillation we are able to fine-tune the model pruned to 75% of the original FLOPs so that it has 0.7% higher accuracy than the original.

When it comes to EfficientNetV2, we are able to outperform the original model's results on ImageNet with the help of hierarchical knowledge distillation, inasmuch as the pruned version of B0 (70% of the FLOPs of the original model) has higher top-1 accuracy than the original. See Table 3 and Figure 5b.

7 CONCLUSION

Using an automated solution to process coupled channels in neural network architectures and a simple scheme to learn channel importance, we are able to prune models with varying architectures for different underlying tasks. For fine-tuning pruned classification networks we use hierarchical knowledge distillation, which produces much better results than just using the original model as a teacher. The whole pruning pipeline requires far fewer computational resources than some of the state-of-the-art NAS-based solutions for finding efficient FLOPs/accuracy trade-offs, like Wang et al. (2021).

REFERENCES

Zhiqiang Chen, Ting-Bing Xu, Changde Du, Cheng-Lin Liu, and Huiguang He. Dynamical channel pruning by conditional accuracy change for deep neural networks. IEEE Transactions on Neural Networks and Learning Systems, 32(2):799–813, 2020.
J. Cho and B. Hariharan. On the efficacy of knowledge distillation. pp. 4793–4801, Nov 2019. doi: 10.1109/ICCV.2019.00489. URL https://doi.ieeecomputersociety.org/10.1109/ICCV.2019.00489.
Ke Gong, Xiaodan Liang, Dongyu Zhang, Xiaohui Shen, and Liang Lin. Look into person: Self-supervised structure-sensitive learning and a new benchmark for human parsing. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), July 2017.
Shaopeng Guo, Yujie Wang, Quanquan Li, and Junjie Yan. DMCP: Differentiable Markov channel pruning for neural networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1539–1547, 2020.
Yiwen Guo, Anbang Yao, and Yurong Chen. Dynamic network surgery for efficient DNNs. arXiv preprint arXiv:1608.04493, 2016.
Song Han, Jeff Pool, John Tran, and William J Dally. Learning both weights and connections for efficient neural networks. arXiv preprint arXiv:1506.02626, 2015.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition, 2015.
Yihui He, Xiangyu Zhang, and Jian Sun. Channel pruning for accelerating very deep neural networks. In Proceedings of the IEEE International Conference on Computer Vision, pp. 1389–1397, 2017.
Charles Herrmann, Richard Strong Bowen, and Ramin Zabih. Channel selection using Gumbel softmax. In European Conference on Computer Vision, pp. 241–257. Springer, 2020.
Yuenan Hou, Zheng Ma, Chunxiao Liu, Zhe Wang, and Chen Change Loy. Network pruning via resource reallocation. CoRR, abs/2103.01847, 2021. URL https://arxiv.org/abs/2103.01847.
Eric Jang, Shixiang Gu, and Ben Poole. Categorical reparameterization with Gumbel-Softmax. arXiv preprint arXiv:1611.01144, 2016.
Hao Li, Asim Kadav, Igor Durdanovic, Hanan Samet, and Hans Peter Graf. Pruning filters for efficient convnets. arXiv preprint arXiv:1608.08710, 2016.
Mingbao Lin, Rongrong Ji, Yan Wang, Yichen Zhang, Baochang Zhang, Yonghong Tian, and Ling Shao. HRank: Filter pruning using high-rank feature map. CoRR, abs/2002.10179, 2020. URL https://arxiv.org/abs/2002.10179.
Baoyuan Liu, Min Wang, Hassan Foroosh, Marshall Tappen, and Marianna Pensky. Sparse convolutional neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 806–814, 2015.
Hanxiao Liu, Karen Simonyan, and Yiming Yang. DARTS: Differentiable architecture search, 2019a.
Jiayi Liu, Samarth Tripathi, Unmesh Kurup, and Mohak Shah. Pruning algorithms to accelerate convolutional neural networks for edge applications: A survey. arXiv preprint arXiv:2005.04275, 2020.
Liyang Liu, Shilong Zhang, Zhanghui Kuang, Aojun Zhou, Jing-Hao Xue, Xinjiang Wang, Yimin Chen, Wenming Yang, Qingmin Liao, and Wayne Zhang. Group fisher pruning for practical network compression. In Marina Meila and Tong Zhang (eds.), Proceedings of the 38th International Conference on Machine Learning, volume 139 of Proceedings of Machine Learning Research, pp. 7021–7032. PMLR, 18–24 Jul 2021a. URL https://proceedings.mlr.press/v139/liu21ab.html.
Xiangcheng Liu, Jian Cao, Hongyi Yao, Wenyu Sun, and Yuan Zhang. AdaPruner: Adaptive channel pruning and effective weights inheritance. arXiv preprint arXiv:2109.06397, 2021b.
Zechun Liu, Haoyuan Mu, X. Zhang, Zichao Guo, X. Yang, K. Cheng, and Jian Sun. MetaPruning: Meta learning for automatic neural network channel pruning. 2019 IEEE/CVF International Conference on Computer Vision (ICCV), pp. 3295–3304, 2019b.
Zhuang Liu, Jianguo Li, Zhiqiang Shen, Gao Huang, Shoumeng Yan, and Changshui Zhang. Learning efficient convolutional networks through network slimming. In Proceedings of the IEEE International Conference on Computer Vision, pp. 2736–2744, 2017.
Jian-Hao Luo, Hao Zhang, Hong-Yu Zhou, Chen-Wei Xie, Jianxin Wu, and Weiyao Lin. ThiNet: Pruning CNN filters for a thinner net. IEEE Transactions on Pattern Analysis and Machine Intelligence, 41(10):2525–2538, 2018.
Seyed Mirzadeh, Mehrdad Farajtabar, Ang Li, Nir Levine, Akihiro Matsukawa, and Hassan Ghasemzadeh. Improved knowledge distillation via teacher assistant. Proceedings of the AAAI Conference on Artificial Intelligence, 34:5191–5198, 04 2020. doi: 10.1609/aaai.v34i04.5963.
Markus Nagel, Marios Fournarakis, Rana Ali Amjad, Yelysei Bondarenko, Mart van Baalen, and Tijmen Blankevoort. A white paper on neural network quantization. arXiv preprint arXiv:2106.08295, 2021.
Qualcomm. SNPE: Snapdragon Neural Processing Engine. https://developer.qualcomm.com/sites/default/files/docs/snpe/.
Mark Sandler, Andrew G. Howard, Menglong Zhu, Andrey Zhmoginov, and Liang-Chieh Chen. Inverted residuals and linear bottlenecks: Mobile networks for classification, detection and segmentation. CoRR, abs/1801.04381, 2018. URL http://arxiv.org/abs/1801.04381.
Wenqi Shao, Hang Yu, Zhaoyang Zhang, Hang Xu, Zhenguo Li, and Ping Luo. BWCP: Probabilistic learning-to-prune channels for convnets via batch whitening. arXiv preprint arXiv:2105.06423, 2021.
Mennatullah Siam, Heba Mahgoub, Mohamed Zahran, Senthil Yogamani, Martin Jagersand, and Ahmad El-Sallab. MODNet: Motion and appearance based moving object detection network for autonomous driving. In 2018 21st International Conference on Intelligent Transportation Systems (ITSC), pp. 2859–2864, 2018. doi: 10.1109/ITSC.2018.8569744.
Mingxing Tan and Quoc V. Le. EfficientNet: Rethinking model scaling for convolutional neural networks. CoRR, abs/1905.11946, 2019. URL http://arxiv.org/abs/1905.11946.
Mingxing Tan and Quoc V. Le. EfficientNetV2: Smaller models and faster training. CoRR, abs/2104.00298, 2021. URL https://arxiv.org/abs/2104.00298.
Mingxing Tan, Ruoming Pang, and Quoc V. Le. EfficientDet: Scalable and efficient object detection. CoRR, abs/1911.09070, 2019. URL http://arxiv.org/abs/1911.09070.
Alvin Wan, Xiaoliang Dai, Peizhao Zhang, Zijian He, Yuandong Tian, Saining Xie, Bichen Wu, Matthew Yu, Tao Xu, Kan Chen, Peter Vajda, and Joseph E. Gonzalez. FBNetV2: Differentiable neural architecture search for spatial and channel dimensions. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), June 2020.
Dilin Wang, Chengyue Gong, Meng Li, Qiang Liu, and Vikas Chandra. AlphaNet: Improved training of supernet with alpha-divergence. arXiv preprint arXiv:2102.07954, 2021.
Yuzhi Wang, Haibin Huang, Qin Xu, Jiaming Liu, Yiqun Liu, and Jue Wang. Practical deep raw image denoising on mobile devices, 2020.
Sirui Xie, Hehui Zheng, Chunxiao Liu, and Liang Lin. SNAS: Stochastic neural architecture search. arXiv preprint arXiv:1812.09926, 2018.
Mao Ye, Chengyue Gong, Lizhen Nie, Denny Zhou, Adam Klivans, and Qiang Liu. Good subnetworks provably exist: Pruning via greedy forward selection. CoRR, abs/2003.01794, 2020. URL https://arxiv.org/abs/2003.01794.

A APPENDIX

A.1 LAYER WIDTHS VISUALIZATION

It is quite interesting to see what the layer widths look like after pruning, and the patterns that emerge are quite telling. EfficientNets are built of a series of meta-blocks, e.g., blocks 2, 3, . . . , 7 in EfficientNet B0, where each meta-block consists of a number of MBConv blocks at the same spatial resolution. It appears that in each such meta-block the most important block is usually the first one, and block importance decays proportionally to the depth of the block inside the meta-block. See Figure 6.

A.2 FURTHER RESULTS

A.2.1 RAWRGB IMAGE DENOISING

We prune a recent state-of-the-art network for RawRGB image denoising on mobile devices introduced in Wang et al. (2020). We train the models on the SIDD Medium dataset (https://www.eecs.yorku.ca/~kamel/sidd/dataset.php). We first extract 256x256 patches for training and validation and then test the networks on the SIDD validation dataset (https://www.eecs.yorku.ca/~kamel/sidd/benchmark.php). The batch size is set to 16, the learning rate is 0.0001 and we use the Adam optimizer. The loss is the mean absolute error. We train the original model for 150 epochs, prune it, and then train the original model for another 150 epochs. The pruned models are fine-tuned for 150 epochs as well. For comparison we also train from scratch smaller (linearly scaled down) versions of the original model. The results can be seen in Table 4 and Figure 7.
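The denoising fine-tuning setup above maps onto a standard Keras training configuration. The sketch below uses a trivial stand-in model and random tensors purely for illustration; only the optimizer, loss, batch size and epoch count follow the description in Section A.2.1, and the 4-channel packed-Bayer input shape is an assumption.

```python
import tensorflow as tf

# Stand-in model: the actual denoising architecture comes from Wang et al. (2020)
# and is not reproduced here.
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, padding="same", activation="relu",
                           input_shape=(256, 256, 4)),
    tf.keras.layers.Conv2D(4, 3, padding="same"),
])

# Random stand-in data in place of the 256x256 SIDD patches.
noisy = tf.random.uniform((16, 256, 256, 4))
clean = tf.random.uniform((16, 256, 256, 4))

model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
              loss=tf.keras.losses.MeanAbsoluteError())
model.fit(noisy, clean, batch_size=16, epochs=150)
```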
A.2.2 HUMAN SEGMENTATION

For semantic segmentation we use a private dataset for training human segmentation models for real-time prediction in a video bokeh task. This is dictated by the need for superior edge quality, which is missing in publicly available segmentation data. The dataset consists of 120k real image/mask pairs and 50k synthetic ones. Apart from IoU we also compute edge IoU, which pays attention only to the edges of the masks and can be thought of as a proxy for edge quality.

The baseline architecture consists of an EfficientNet B0 (Tan & Le, 2019) backbone, an EfficientDet (Tan et al., 2019) fusion block (modified slightly to allow for easier channel pruning) and a detail branch (Siam et al., 2018) to preserve edge quality. The backbone network is pretrained on ImageNet. We train the original model for 70 epochs, prune, and then fine-tune the pruned models for 50 epochs. The validation dataset is a split of a modified version of the LIP dataset (Gong et al., 2017), in which objects belonging to people (such as handbags, etc.) are also considered part of these people; this is done so that we can train models for the video bokeh effect. The validation results are in Table 4b and are visualized in Figures 8a and 8b.

Notice that the smallest pruned model is compressed to around 10% of the size of the original one. Even in this extreme compression scenario our approach produces a model with an IoU higher than 90%. IoU starts dropping only after we have removed more than 60% of the original FLOPs. In our experience this observation holds for many more segmentation architectures; the one presented here is just one example. Edge IoU starts falling much more quickly, perhaps because we employ no edge-specific loss.
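Since the exact definition of edge IoU is not given in the paper, here is one plausible way such a metric could be computed: restrict the IoU to a thin band around the ground-truth mask boundary. The band construction below is purely an assumption for illustration.

```python
import numpy as np
from scipy import ndimage

def edge_iou(pred_mask, gt_mask, band_px=5):
    # pred_mask, gt_mask: boolean arrays of shape (H, W).
    # Build a band of pixels within band_px of the ground-truth boundary.
    dilated = ndimage.binary_dilation(gt_mask, iterations=band_px)
    eroded = ndimage.binary_erosion(gt_mask, iterations=band_px)
    band = dilated & ~eroded
    # Standard IoU, evaluated only on pixels inside the boundary band.
    inter = np.logical_and(pred_mask, gt_mask)[band].sum()
    union = np.logical_or(pred_mask, gt_mask)[band].sum()
    return inter / union if union > 0 else 1.0
```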
1. What is the focus of the paper regarding channel pruning in neural networks?
2. What are the strengths of the proposed approach, particularly in addressing coupled channels and using hierarchical knowledge distillation?
3. What are the weaknesses of the paper, such as limited comparisons with other baseline methods and unclear writing?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any questions or concerns regarding the methodology, results, or claims made in the paper?
Summary Of The Paper Review
Summary Of The Paper
This paper proposes a new channel pruning method, which first finds the coupled channels in the network, then prunes channels with learned logit predictors, and finally uses hierarchical knowledge distillation (KD) to fine-tune the pruned network. Empirical evaluations have been conducted with EfficientNets and MobileNets on ImageNet, an image denoising dataset, and a private human segmentation dataset.

Review
Pros
- This paper aims at solving the coupled-channel problem in channel pruning, which is important in practice and needs more investigation.
- The evaluated tasks (image denoising and human segmentation) seem to be quite interesting and are seldom discussed in related pruning works.
- It is good to observe and conclude that KD only works for pruned networks but not for unpruned (original) networks. However, more investigation is needed to make confident claims and hypotheses.

Cons
- Only a few baseline pruning methods are compared. Meanwhile, results on ResNet should be compared; otherwise, it is difficult to judge the effectiveness of the proposed method.
- The tables show merely the same results as those in the figures (e.g., Table 1 & Figure 4, Tables 2/3 & Figure 5).
- No ablation studies. It seems that most of the performance gain comes from the hierarchical KD, which is not exclusive to other pruning methods. Applying hierarchical KD seems not to have enough methodological novelty or contribution.
- The writing clarity should be improved. For example, the descriptions of the two phases in Section 4.1 are quite informal and vague, and the relation between Algorithm 1 and the described steps in the main text is also unclear. Also, it is worth explaining why "concatenation" is different from the other operations and important in your pruning method (if so).

Details
- It is not clear how the proposed method, compared with baselines, uniquely solves the issue of pruning "specific numbers of input and output channels".
- The claim that the proposed method is "easy to scale and be deployed for segmentation, detection or image denoising" needs more explanation.
- It is not fully correct to claim "between 200M and 300M our pruned models outperform FBNetV2" from Figure 1, since FBNetV2 obtains an accuracy of ~76 with fewer FLOPs.
- For the four operation types introduced by the authors, which one does "concatenation" belong to?

Minor issues
- In Abstract: "We release a set pruned ..." -> "a set of"
- In Section 2: "Selecting channels based network ..." -> "based on"
- In Section 2: "and train train our pruned network ..." -> "and train our pruned network"
- In Section 3: "The output output vector of ..." -> "The output vector of"
- Figure 2: "An subset of ..." -> "A subset"
- In Section 3.1: "... a specific block size o channels ..." -> "of"
- In Section 4.1: "In the fist phase" -> "first"
ICLR
1. What is the focus and contribution of the paper on neural network pruning and fine-tuning?
2. What are the strengths of the proposed approach, particularly in addressing the dimension mismatching problem and using hierarchical knowledge distillation?
3. What are the weaknesses of the paper, especially regarding the cumbersome nature of the algorithm, lack of technical details, and limited experimental results?
4. Do you have any concerns about the novelty of the proposed method, considering that it consists of multiple stages and lacks thorough formulations?
5. How do you assess the clarity and quality of the paper's writing, considering the presence of typos and errors in the text?
Summary Of The Paper Review
Summary Of The Paper This paper proposes a neural network pruning and fine-tuning framework for model compression. It can automatically prune channels by learning channel importance. The contributions are: 1) A new pruning scheme is proposed based on learning the channel importance; 2) Pruning logic is introduced into the pruning scheme, so that grouped operations are pruned jointly, which solves the pruning problems of residual connections in ResNet; 3) Hierarchical knowledge distillation is added in the fine-tuning phase to speed up training. Experimental results show the effectiveness of the proposed method.
Review Strengths: The proposed pruning scheme solves the dimension-mismatching problem when pruning certain model architectures (such as ResNet). The hierarchical KD can improve performance, avoiding the problem that a "too large teacher does not always make a better student".
Weaknesses: 1) Although each part of the proposed method is effective, the overall algorithm is still cumbersome: it has multiple stages, whereas many existing pruning methods do not need fine-tuning. 2) Technical details and formulations are limited. It seems that the main novelty lies in the scheme or procedure. 3) The experimental results are not convincing, and the compared methods are few. Although few authors have attempted to prune EfficientNet, other networks such as ResNet could also be compressed in the experiments. In addition, the performance gains over SOTA methods are marginal, within 1%. 4) The paper is poorly written. There are many typos, some of which are listed as follows: --In the caption of Figure 2, "An subset of a network" should be "A subset of a network". --In Line 157 of Page 4, "The output output vector" should be "The output vector". --In Line 283 of Page 7, "B0V2 as," should be "B0V2 as teacher,". --In Line 301 of Page 7, "due the inconsistencies" should be "due to the inconsistencies".
ICLR
Title Stochastic Latent Residual Video Prediction Abstract Video prediction is a challenging task: models have to account for the inherent uncertainty of the future. Most works in the literature are based on stochastic image-autoregressive recurrent networks, raising several performance and applicability issues. An alternative is to use fully latent temporal models which untie frame synthesis and dynamics. However, no such model for video prediction has been proposed in the literature yet, due to design and training difficulties. In this paper, we overcome these difficulties by introducing a novel stochastic temporal model. It is based on residual updates of a latent state, motivated by discretization schemes of differential equations. This first-order principle naturally models video dynamics as it allows our simpler, lightweight, interpretable, latent model to outperform prior state-of-the-art methods on challenging datasets. 1 INTRODUCTION Being able to predict the future of a video from a few conditioning frames in a self-supervised manner has many applications in fields such as reinforcement learning (Gregor et al., 2019) or robotics (Babaeizadeh et al., 2018). More generally, it challenges the ability of a model to capture visual and dynamic representations of the world. Video prediction has received a lot of attention from the computer vision community. However, most proposed methods are deterministic, reducing their ability to capture video dynamics, which are intrinsically stochastic (Denton & Fergus, 2018). Stochastic video prediction is a challenging task which has been tackled by recent works. Most state-of-the-art approaches are based on image-autoregressive models (Denton & Fergus, 2018; Babaeizadeh et al., 2018), built around Recurrent Neural Networks (RNNs), where each generated frame is fed back to the model to produce the next frame. However, the performance of their temporal models innately depends on the capacity of their encoder and decoder, as each generated frame has to be re-encoded in a latent space. Such autoregressive processes induce a high computational cost, and strongly tie the frame synthesis and temporal models, which may hurt the performance of the generation process and limit its applicability (Gregor et al., 2019; Rubanova et al., 2019). An alternative approach consists in separating the dynamics of the state representations from the generated frames, which are independently decoded from the latent space. In addition to removing the aforementioned link between frame synthesis and temporal dynamics, this is computationally appealing when coupled with a low-dimensional latent space. Moreover, such models can be used to shape a complete representation of the state of a system, e.g. for reinforcement learning applications (Gregor et al., 2019), and are more interpretable than autoregressive models (Rubanova et al., 2019). Yet, these State-Space Models (SSMs) are more difficult to train as they require non-trivial latent state inference schemes (Krishnan et al., 2017) and a careful design of the dynamic model (Karl et al., 2017). This leads most successful SSMs to only be evaluated on small or artificial toy tasks. In this work, we propose a novel stochastic dynamic model for the task of video prediction which successfully leverages the structural and computational advantages of SSMs that operate on low-dimensional latent spaces. The dynamic component determines the evolution through residual updates of the latent state, conditioned on learned stochastic variables.
This formulation allows us to implement an efficient training strategy and to process complex high-dimensional data such as videos in an interpretable manner. This residual principle can be linked to recent advances relating residual networks and Ordinary Differential Equations (ODEs) (Chen et al., 2018). This interpretation opens new perspectives such as generating videos at different frame rates, as demonstrated in our experiments. Overall, this approach outperforms current state-of-the-art models on the task of stochastic video prediction, as demonstrated by comparisons with competitive baselines on representative benchmarks. 2 RELATED WORK Video synthesis covers a range of different tasks, such as video-to-video translation (Wang et al., 2018), super-resolution (Caballero et al., 2017), interpolation between frames (Jiang et al., 2018), unconditional generation (Tulyakov et al., 2018), or video prediction, which is the focus of this paper. Deterministic models. Inspired by prior sequence generation models using RNNs (Graves, 2013), a number of video prediction methods (Srivastava et al., 2015; Villegas et al., 2017; Wichers et al., 2018) rely on LSTMs (Hochreiter & Schmidhuber, 1997), or, like Ranzato et al. (2014) and Jia et al. (2016), on derived networks such as ConvLSTMs (Shi et al., 2015) taking advantage of Convolutional Neural Networks (CNNs). Indeed, computer vision approaches are usually tailored to high-dimensional video sequences and propose domain-specific techniques, as they often use pixel-level transformations and optical flow (Shi et al., 2015; Walker et al., 2015; Finn et al., 2016; Jia et al., 2016; Vondrick & Torralba, 2017; Liang et al., 2017; Liu et al., 2017; Lotter et al., 2017; Lu et al., 2017a; Fan et al., 2019) that help to produce high-quality predictions. Such predictions are, however, deterministic, thus hurting their performance as they fail to generate sharp long-term video frames (Babaeizadeh et al., 2018; Denton & Fergus, 2018). Following Mathieu et al. (2016), some works proposed to use an adversarial loss (Goodfellow et al., 2014) on the predictions of their model to sharpen the generated frames (Vondrick & Torralba, 2017; Liang et al., 2017; Lu et al., 2017a; Xu et al., 2018). Nonetheless, adversarial losses are notoriously hard to train and lead to mode collapse, preventing diversity of generations. Stochastic and image-autoregressive models. Some approaches rely on exact likelihood maximization, using pixel-level autoregressive generation (van den Oord et al., 2016; Kalchbrenner et al., 2017) or normalizing flows through invertible transformations between the observation space and a latent space (Kingma & Dhariwal, 2018; Kumar et al., 2019). However, they require careful design of complex temporal generation schemes manipulating high-dimensional data, thus inducing a prohibitive temporal generation cost. More efficient continuous models rely on Variational Auto-Encoders (VAEs) (Kingma & Welling, 2014; Rezende et al., 2014) for the inference of low-dimensional latent state variables. Except for Xue et al. (2016), who learn a one-frame-ahead VAE, they model sequence stochasticity by incorporating a random latent variable per frame into a deterministic RNN-based image-autoregressive model. Babaeizadeh et al. (2018) integrate stochastic variables into the ConvLSTM architecture of Finn et al. (2016). Concurrently with He et al. (2018), Denton & Fergus (2018), with Castrejon et al.
(2019) in a follow-up, use a prior LSTM conditioned on previously generated frames in order to sample random variables that are fed to a predictor LSTM. Finally, Lee et al. (2018) combine the ConvLSTM architecture and this learned prior, adding an adversarial loss on the predicted videos to sharpen them at the cost of a diversity drop. Yet, all these methods are image-autoregressive, as they feed their predictions back into the latent space, thus tying the frame synthesis and temporal models and increasing their computational cost. Concurrently to our work, Minderer et al. (2019) propose to use the autoregressive VRNN model (Chung et al., 2015) on learned image key-points instead of raw frames. While this change could mitigate the aforementioned problems, the extent of such mitigation is unclear. We follow a complementary approach by directly proposing a dynamic model that is state-space and acts on a small latent state, tackling these issues. State-space models. Many latent state-space models have been proposed for sequence modelization (Bayer & Osendorfer, 2014; Fraccaro et al., 2016; 2017; Krishnan et al., 2017; Karl et al., 2017; Hafner et al., 2019), usually trained by deep Variational Inference (VI). These methods, which use locally linear temporal transition functions or RNN-based dynamics, are designed for and tested on low-dimensional data, as learning such models on complex data is challenging, or focus on control or planning tasks. In contrast, our fully latent method is the first one to be successfully applied to complex high-dimensional data such as videos, thanks to a temporal model based on residual updates of its latent state. It thus falls within the scope of a recent trend linking differential equations with neural networks (Lu et al., 2017b; Long et al., 2018), leading to the integration of ODEs, that are seen as continuous residual networks, in neural network architectures (Chen et al., 2018). However, the latter work and follow-ups (Rubanova et al., 2019; Yıldız et al., 2019) are either limited to low-dimensional data, prone to overfitting or unable to handle stochasticity within a sequence. Another line of works considers stochastic differential equations (SDEs) with neural networks (Ryder et al., 2018; De Brouwer et al., 2019), but are limited to continuous Brownian noise, whereas video prediction additionally requires to model punctual stochastic events. 3 MODEL We consider the task of stochastic video prediction, consisting in approaching, given a number of conditioning video frames, the distribution of possible future frames given this conditioning. 3.1 LATENT RESIDUAL DYNAMIC MODEL Let x1:T be a sequence of T video frames. We model their evolution by introducing latent variables y that are driven by a dynamic temporal model. Each frame xt is then generated from the corresponding latent state yt only, making the dynamics independent from the previously generated frames. We propose to model the transition function of the latent dynamic of y with a stochastic residual network. State yt+1 is chosen to deterministically depend on the previous state yt, conditionally to an auxiliary random variable zt+1. These auxiliary variables encapsulate the randomness of the video dynamics. They have a learned factorized Gaussian prior that depends on the previous state only. 
The model is depicted in Figure 1a, and defined as follows: y1 ∼ N(0, I), zt+1 ∼ N(µθ(yt), σθ(yt)I), yt+1 = yt + fθ(yt, zt+1), xt ∼ G(gθ(yt)), (1) where µθ, σθ, fθ and gθ are neural networks, and G(gθ(yt)) is a probability distribution parameterized by gθ(yt). In our experiments, G is a normal distribution with fixed diagonal variance and mean gθ(yt). Note that y1 is assumed to have a standard Gaussian prior, and, in our VAE setting, will be inferred from conditioning frames for the prediction task, as shown in Section 3.3. The residual update rule takes inspiration from the Euler discretization scheme of differential equations. The state of the system yt is updated by its first-order movement, i.e., the residual fθ(yt, zt+1). Compared to a regular RNN, this simple principle makes our temporal model lighter and more interpretable. Equation (1), however, differs from a discretized ODE because of the introduction of the stochastic discrete-time variables z. Nonetheless, we propose to allow the Euler step size ∆t to be smaller than 1, as a way to make the temporal model closer to continuous dynamics. The updated dynamics becomes, with 1/∆t ∈ N to synchronize the step size with the video frame rate: yt+∆t = yt + ∆t · fθ(yt, z⌊t⌋+1). (2) For this formulation, the auxiliary variable zt is kept constant between two integer time steps. Note that a different ∆t can be used during training or testing. This allows our model to generate videos at an arbitrary frame rate since each intermediate latent state can be decoded into the observation space. This ability enables us to observe the quality of the learned dynamic as well as challenge its ODE inspiration by testing its generalization to the continuous limit in Section 4. In the following, we consider ∆t as a hyperparameter. For the sake of clarity, we consider that ∆t = 1 in the following; generalizing to smaller ∆t is straightforward as Figure 1a remains unchanged. 3.2 CONTENT VARIABLE Some components of video sequences can be static, such as the background or the shapes of moving objects. They may not impact the dynamics; we therefore model them separately, in the same spirit as Denton & Birodkar (2017) and Yingzhen & Mandt (2018). We compute a content variable w that remains constant throughout the whole generation process and is fed together with yt into the frame generator. It enables the dynamical part of the model to focus only on movement, hence being lighter and more stable. Moreover, it allows us to leverage architectural advances in neural networks, such as skip connections (Ronneberger et al., 2015), to produce more realistic frames. This content variable is a deterministic function cψ of a fixed number k < T of frames xc(k): xc(k) = (xi1, . . . , xik), w = cψ(xc(k)) = cψ(xi1, . . . , xik), xt ∼ G(gθ(yt, w)). (3) During testing, xc(k) are the last k conditioning frames (usually between 2 and 5). This content variable is not endowed with any probabilistic prior, contrary to the dynamic variables y and z. Hence, the information it contains is not constrained in the loss function (see Section 3.3), but only architecturally. To prevent temporal information from leaking into w, we propose to uniformly sample these k frames within x1:T during training. We also design cψ as a permutation-invariant function (Zaheer et al., 2017), which is done by using an MLP fed with the sum of individual frame representations, similarly to Santoro et al. (2017).
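To make these two ingredients concrete before discussing what each variable should contain, here is a minimal PyTorch-style sketch of the stochastic residual rollout of Equations (1)-(2) and of a permutation-invariant content encoder in the spirit of Equation (3). The layer counts, widths, and the log-scale parameterization of σθ are illustrative assumptions (the exact architectures are listed in Appendix C), and the frame encoder hφ and decoder gθ are omitted.

```python
import torch
from torch import nn

class ResidualDynamics(nn.Module):
    # Latent rollout of Eqs. (1)-(2): y_{t+dt} = y_t + dt * f_theta(y_t, z),
    # with z drawn from a learned Gaussian prior conditioned on the current state.
    def __init__(self, y_dim=20, z_dim=20, hidden=512):
        super().__init__()
        self.prior = nn.Sequential(nn.Linear(y_dim, hidden), nn.LeakyReLU(),
                                   nn.Linear(hidden, 2 * z_dim))  # mean and log-scale
        self.f = nn.Sequential(nn.Linear(y_dim + z_dim, hidden), nn.LeakyReLU(),
                               nn.Linear(hidden, y_dim))          # residual f_theta

    def rollout(self, y1, steps, dt=1.0):
        ys, y = [y1], y1
        n_sub = int(round(1.0 / dt))                  # 1/dt sub-steps per frame
        for _ in range(steps):
            mu, log_scale = self.prior(y).chunk(2, dim=-1)
            z = mu + log_scale.exp() * torch.randn_like(mu)
            for _ in range(n_sub):                    # z held fixed between frames
                y = y + dt * self.f(torch.cat([y, z], dim=-1))
            ys.append(y)
        return torch.stack(ys, dim=1)                 # batch x (steps + 1) x y_dim

class ContentEncoder(nn.Module):
    # Permutation-invariant c_psi: an MLP applied to the sum of per-frame codes.
    def __init__(self, frame_code_dim=128, w_dim=128, hidden=256):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(frame_code_dim, hidden), nn.LeakyReLU(),
                                 nn.Linear(hidden, w_dim))

    def forward(self, frame_codes):                   # batch x k x frame_code_dim
        return self.mlp(frame_codes.sum(dim=1))

latents = ResidualDynamics().rollout(torch.randn(4, 20), steps=10, dt=0.5)
w = ContentEncoder()(torch.randn(4, 3, 128))
```

Sampling z once per frame and holding it fixed across the 1/∆t sub-steps mirrors the piecewise-constant noise of Equation (2).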
This absence of prior and its architectural constraint allows w to contain as much non-temporal information as possible, while preventing it from containing dynamic information. On the other hand, due to their strong standard Gaussian priors, y and z are encouraged to discard unnecessary information. Therefore, y and z should only contain temporal information that could not be captured by w. Note that this content variable can be removed from our model, yielding a more classical deep state-space model. An experiment in this setting is presented in Appendix E. 3.3 VARIATIONAL INFERENCE AND ARCHITECTURE Following the generative process depicted in Figure 1a, the conditional joint probability of the full model, given a content variable w, can be written as: p(x1:T, z2:T, y1:T | w) = p(y1) ∏_{t=1}^{T−1} p(zt+1 | yt) p(yt+1 | yt, zt+1) ∏_{t=1}^{T} p(xt | yt, w), (4) where p(yt+1 | yt, zt+1) = δ(yt + fθ(yt, zt+1) − yt+1) and δ is the Dirac delta function centered on 0, according to the expression of yt+1 in Equation (1). Thus, in order to optimize the likelihood of the observed videos p(x1:T | w), we need to infer the latent variables y1 and z2:T. This is done by deep Variational Inference using the inference model parameterized by φ and shown in Figure 1b, which comes down to considering a variational distribution qZ,Y defined and factorized as follows: qZ,Y ≜ q(z2:T, y1:T | x1:T, w) = q(y1 | x1:k) ∏_{t=2}^{T} q(zt | x1:t) δ(yt−1 + fθ(yt−1, zt) − yt). (5) This yields the following evidence lower bound (ELBO), whose full derivation is given in Appendix A: log p(x1:T | w) ≥ E_{(z̃2:T, ỹ1:T)∼qZ,Y} ∑_{t=1}^{T} log p(xt | ỹt, w) − DKL(q(y1 | x1:k) ‖ p(y1)) − E_{(z̃2:T, ỹ1:T)∼qZ,Y} ∑_{t=2}^{T} DKL(q(zt | x1:t) ‖ p(zt | ỹt−1)) ≜ L(x1:T; w, θ, φ). (6) The sum of KL divergence expectations implies considering the full past sequence of inferred states for each time step, due to the dependence on the conditionally deterministic variables y2:T. However, optimizing L(x1:T; w, θ, φ) with respect to the model parameters θ and variational parameters φ can be done efficiently by sampling a single full sequence of states from qZ,Y per example, and computing gradients by backpropagation (Rumelhart et al., 1988) through all inferred variables, using the reparametrization trick (Kingma & Welling, 2014; Rezende et al., 2014). We classically choose q(y1 | x1:k) and q(zt | x1:t) to be factorized Gaussians so that all KLDs can be computed analytically. We include an ℓ2 regularization term on the residuals fθ which stabilizes the temporal dynamics of the residual network, as noted by Behrmann et al. (2019) and Rousseau et al. (2019). Given a set of videos X, the full optimization problem, where L is defined as in Equation (6), is then given as: arg max_{θ,φ,ψ} ∑_{x∈X} E_{xc(k)} L(x1:T; cψ(xc(k)), θ, φ) − λ · E_{(z2:T, y1:T)∼qZ,Y} ∑_{t=2}^{T} ‖fθ(yt−1, zt)‖². (7) Figure 1c depicts the full architecture of our temporal model, corresponding to how the model is applied during testing. The first latent variables are inferred with the conditioning frames and are then predicted with the dynamic model. In contrast, during training, each frame of the input sequence is considered for inference, which is done as follows. Firstly, each frame xt is independently encoded into a vector-valued representation x̃t, with x̃t = hφ(xt). y1 is then inferred using an MLP on the first k encoded frames x̃1:k. Each zt is inferred in a feed-forward fashion with an LSTM on the encoded frames (a schematic sketch of the resulting training loss is given below).
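As a rough illustration of how the objective of Equations (6)-(7) can be assembled once the inference networks have produced the required distributions and residuals, here is a hedged PyTorch-style sketch; batch and reduction conventions are simplified, and the function is a sketch of the loss structure rather than the authors' implementation.

```python
import torch
from torch.distributions import Normal, kl_divergence

def training_loss(x, x_hat, q_y1, p_y1, q_zs, p_zs, residuals, lam=1.0, nu=1.0):
    # Gaussian reconstruction term with fixed variance nu (first term of Eq. (6)).
    rec = Normal(x_hat, nu ** 0.5).log_prob(x).sum()
    # KL between the inferred initial state and its standard Gaussian prior.
    kl_y1 = kl_divergence(q_y1, p_y1).sum()
    # KL between each inferred z_t and the learned prior p(z_t | y_{t-1}).
    kl_z = sum(kl_divergence(q, p).sum() for q, p in zip(q_zs, p_zs))
    # l2 penalty on the residuals f_theta(y_{t-1}, z_t), as in Eq. (7).
    reg = sum((r ** 2).sum(dim=-1).mean() for r in residuals)
    elbo = rec - kl_y1 - kl_z
    return -elbo + lam * reg  # minimized with a stochastic optimizer in practice
```

Here q_y1, p_y1 and the entries of q_zs, p_zs are assumed to be torch.distributions.Normal objects, so the KL terms are computed analytically, as stated in the text.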
Inferring z this way experimentally performs better than, e.g., inferring them from the whole sequence x1:T; we hypothesize that this follows from the fact that this filtering scheme is closer to the prediction setting, where the future is not available. 4 EXPERIMENTS This section presents the experimental results of our method on three standard stochastic video prediction datasets (code, video samples, and datasets are available at https://sites.google.com/view/srvp/). We compare our method with state-of-the-art baselines on stochastic video prediction. Furthermore, we qualitatively study the dynamics and latent space learned by our model. Training details are described in Appendix C. The stochastic nature and novelty of the task of stochastic video prediction make it challenging to evaluate (Lee et al., 2018): since videos and models are stochastic, comparing the ground truth and a predicted video is not adequate. We thus adopt the common approach (Denton & Fergus, 2018; Lee et al., 2018) consisting in, for each test sequence, sampling from the tested model a given number (here, 100) of possible futures and reporting the best-performing sample against the true video. We report this discrepancy for three commonly used metrics: Peak Signal-to-Noise Ratio (PSNR, higher is better), Structured Similarity (SSIM, higher is better), and Learned Perceptual Image Patch Similarity (LPIPS, lower is better) (Zhang et al., 2018). PSNR tends to promote blurry predictions, as it is a pixel-level measure derived from the ℓ2 distance, but greatly penalizes errors in the predicted positions of objects in the scenes. SSIM is a similarity metric between image patches. LPIPS is a learned distance between activations of deep CNNs trained on image classification tasks, and has been shown to better correlate with human judgment on real images. While these three metrics are computed frame-wise, the recently proposed Fréchet Video Distance (FVD, lower is better) (Unterthiner et al., 2018) aims at directly comparing the distribution of predicted videos with the ground-truth distribution through the representations computed by a deep CNN trained on action recognition tasks. It has been shown, independently from LPIPS, to better correlate with human judgment than PSNR and SSIM. We treat all four metrics as complementary, as they capture different modalities. PSNR challenges the dynamics of the predicted videos, while SSIM rather compares local frame patches but loses some dynamics information. LPIPS and FVD both measure the realism of the predictions compared to the ground truth. FVD considers videos as a whole, making it more capable of detecting temporal inconsistencies. On the other hand, the frame-wise LPIPS metric penalizes temporal drifts of videos more, since it directly compares each predicted and ground-truth frame. We present experimental results on a simulated dataset and two real-world datasets, which we briefly present in the following and detail in Appendix B. The corresponding numerical results can be found in Appendix D. For the sake of concision, we only display a handful of qualitative samples in this section, and refer to Appendix H for additional samples. We compare our model against several state-of-the-art models: SV2P (Babaeizadeh et al., 2018), SVG (Denton & Fergus, 2018) and SAVP (Lee et al., 2018). All baseline results were obtained with pretrained models released by the authors.
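As a reference point, the best-of-100 evaluation protocol described above can be sketched as follows; model.sample_future is an assumed interface (not the released code), and pixel values are assumed to lie in [0, 1] for the PSNR computation.

```python
import torch

def best_of_n_psnr(model, conditioning, ground_truth, n_samples=100):
    # Draw n stochastic futures for one test sequence and keep the best PSNR
    # against the ground truth; SSIM, LPIPS and FVD are handled analogously.
    best = None
    for _ in range(n_samples):
        pred = model.sample_future(conditioning, len(ground_truth))  # T x C x H x W
        mse = ((pred - ground_truth) ** 2).mean(dim=(1, 2, 3))        # per-frame MSE
        psnr = (-10 * torch.log10(mse)).mean()                         # average over frames
        best = psnr if best is None else torch.maximum(best, psnr)
    return best
```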
Note that we use the same neural architecture as SVG for our encoders and decoders in order to perform fair comparisons with this method, which is the closest to ours among the state of the art. Unless specified otherwise, our model is tested with the same ∆t as in training (see Equation (2)). Stochastic Moving MNIST (SM-MNIST). This dataset consists of one or two MNIST digits (LeCun et al., 1998) moving linearly and randomly bouncing on walls, with a new direction and velocity sampled randomly at each bounce (Denton & Fergus, 2018). As SV2P and SAVP were not tested on this dataset (in particular, with no pretrained model, code or hyperparameters), we only report scores for SVG as the state-of-the-art model on SM-MNIST. Figure 2a shows quantitative results with two digits. Our model outperforms SVG on both PSNR and SSIM; LPIPS and FVD are not reported as they are not relevant for this synthetic task. Decoupling dynamics from image synthesis allows our method to maintain temporal consistency despite high-uncertainty frames where crossing digits become indistinguishable. For instance, in Figure 3, the digits' shapes change after they cross in the SVG prediction, while our model predicts the correct digits. To evaluate the predictive ability on a longer horizon, we perform experiments on the classic deterministic version of the dataset (Srivastava et al., 2015). We show the results up to t + 95 in Figure 2b. We can see that our model better captures the dynamics of the problem compared to SVG, as its performance decreases significantly less, even at a long-term horizon. We also compare to two alternative versions of our model in Figure 2, where the residual dynamic function is replaced by an MLP or a GRU network (Cho et al., 2014). Our residual model outperforms both versions on the stochastic, and especially on the deterministic, version of the dataset, showing its intrinsic advantage at modeling dynamics. Finally, on the deterministic version of Moving MNIST, we compare to an alternative where z is entirely removed, resulting in a temporal model very close to the one presented in Chen et al. (2018). The loss of performance of this alternative model is significant, especially in SSIM, showing that our stochastic residual model offers a substantial advantage even when used in a deterministic environment. KTH Action dataset (KTH). This dataset is composed of real-world videos of people performing a single action per video in front of different backgrounds (Schüldt et al., 2004). Uncertainty lies in the appearance of the subjects, the actions they perform and how they are performed. On this dataset, we outperform every considered baseline on each metric, as depicted in Figure 4 and Table 2. In some videos, the subject only appears after the conditioning frames, requiring the model to sample the moment and location of the subject's appearance, as well as its action. This critical case is illustrated in Figure 5. There, SVG fails to even generate a moving person; only SAVP and our model manage to do so, and our best sample is closer to the subject's poses compared to SAVP. Moreover, the worst and a random sample of our model demonstrate that it captures the diversity of the dataset by making a person appear at different time steps and with different speeds. An additional experiment on this dataset is included in Appendix G, studying the influence of the encoder and decoder architecture on SVG and our model. Finally, Table 2 compares our method to its MLP and GRU alternative versions, leading to two conclusions.
Firstly, it confirms the structural advantage of residual dynamics observed on Moving MNIST. On the one hand, MLP better captures dynamics than GRU on KTH according to PSNR and SSIM, but loses in terms of realism according to LPIPS and FVD. On the other hand, the residual version shows a slight dynamics improvement with respect to both MLP and GRU, while substantially pushing further prediction realism. Secondly, all three versions of our model (residual, MLP, GRU) outperform prior methods. Therefore, this improvement is due to their common inference method, latent nature and content variable, strengthening our motivation to propose a non-autoregressive model. BAIR robot pushing dataset (BAIR). This dataset contains videos of a Sawyer robotic arm pushing objects on a tabletop (Ebert et al., 2017). It is highly stochastic as the arm can change its direction at any moment. We achieve similar or better results compared to state-of-the-art models, as Figure 6 and Table 3 show, with the second-best PSNR behind SV2P; but the latter produces very blurry samples, which can be seen in Appendix H, yielding prohibitive LPIPS and FVD scores. In contrast, we achieve the highest SSIM overall, as well as state-of-the-art LPIPS and competitive FVD among these models. Note that we could not add VideoFlow to our experiments, due to the unavailability of pretrained models and numerical results. However, compared to the PSNR, SSIM and LPIPS results reported by Kumar et al. (2019) for BAIR (the only tested dataset and metrics in their paper), our model appears to behave better than VideoFlow, which is on par with SAVP on these metrics. Varying frame rate in testing. We challenge the ability of our model to use a different Euler step size than the one used in training (see Equation (2)). Figures 4 and 6 include corresponding results with a halved ∆t. Prediction performances remain stable while generating twice as many frames (cf. Appendix F for further discussion). Our model is thus robust to the refinement of the Euler approximation, showing the quality of the learned dynamic, which is close to continuous. In particular, this shows that our model learned a dynamic driven by a piecewise ODE, i.e., the learned dynamic of each interval between two consecutive frames is an ODE, as a constant z is given on each such interval. This can be used to generate frames at a higher frame rate than the training videos without supervision. We show in Figure 7 and Appendix F frames generated at a double and quadruple frame rate on BAIR and KTH. Both figures show smooth intermediate generated frames. Disentangling dynamics and content. Let us show that the proposed model actually separates content from dynamics, as discussed in Section 3.2. To this end, two sequences xˢ and xᵗ are drawn from the BAIR test set. While xˢ is used for extracting our content variable wˢ, dynamic states yᵗ are inferred with our model from xᵗ. New frame sequences x̂ are finally generated from the fusion of the content vector and the dynamics. This results in a content corresponding to the first sequence xˢ while moving according to the dynamics of the second sequence xᵗ, as observed in Figure 8. More samples for BAIR and KTH can be seen in Appendix H. Interpolation of dynamics. Our state-space structure allows us to learn semantic representations in yt. To highlight this feature, we test whether two Moving MNIST trajectories can be interpolated by linearly interpolating their inferred latent initial conditions.
We begin by generating two trajectories xˢ and xᵗ of a single moving digit. We infer their respective latent initial conditions yˢ1 and yᵗ1. We then use our model to generate frame sequences from latent initial conditions linearly interpolated between yˢ1 and yᵗ1. If it learned a meaningful latent space, the resulting trajectories should also be smooth interpolations between the directions of the reference trajectories xˢ and xᵗ, and this is what we observe in Figure 9. Additional examples can be found in Appendix H. 5 CONCLUSION We introduce a novel dynamic latent model for stochastic video prediction which, unlike prior image-autoregressive models, decouples frame synthesis and dynamics. This temporal model is based on residual updates of a small latent state that is shown to perform better than RNN-based models. This endows our method with several desirable properties, such as temporal efficiency and latent space interpretability. We experimentally demonstrate the performance and advantages of the proposed model, which outperforms prior state-of-the-art methods for stochastic video prediction. This work is, to the best of our knowledge, the first to propose a latent dynamic model scaling for video prediction. The proposed model is also novel with respect to the recent line of work dealing with neural networks and ODEs for temporal modeling; it is the first such residual model to scale to complex stochastic data such as videos. We believe that the general principles of our model (state-space, residual dynamic, static content variable) can be generally applied to other models as well. Interesting future works include replacing the VRNN model of Minderer et al. (2019) in order to model the evolution of key-points, or leveraging the state-space nature of our model in model-based reinforcement learning. A EVIDENCE LOWER BOUND We develop in this section the computations of the variational lower bound for the proposed model. Using the original variational lower bound of Kingma & Welling (2014) in Equation (8): log p(x1:T | w) ≥ E_{(z̃2:T, ỹ1:T)∼qZ,Y} log p(x1:T | z̃2:T, ỹ1:T, w) − DKL(qZ,Y ‖ p(y1:T, z2:T | w)) (8) = E_{(z̃2:T, ỹ1:T)∼qZ,Y} log p(x1:T | z̃2:T, ỹ1:T, w) − DKL(q(y1, z2:T | x1:T) ‖ p(y1, z2:T)) (9) = E_{(z̃2:T, ỹ1:T)∼qZ,Y} ∑_{t=1}^{T} log p(xt | ỹt, w) − DKL(q(y1, z2:T | x1:T) ‖ p(y1, z2:T)), (10) where:
• Equation (9) is given by the forward and inference models factorizing p and q in Equations (4) and (5) and illustrated by, respectively, Figures 1a and 1b:
– the z variables and y1 are independent from w under p and q;
– the y2:T variables are deterministic functions of y1 and z2:T with respect to p and q;
• Equation (10) results from the factorization of p(x1:T | y1:T, z1:T, w) in Equation (4).
From there, by using the integral formulation of DKL: log p(x1:T | w) ≥ E(z̃2:T ,ỹ1:T )∼qZ,Y T∑ t=1 log p(xt | ỹt,w) + ∫ · · · ∫ y1,z2:T q(y1, z2:T | x1:T ) log p(y1, z2:T ) q(y1, z2:T | x1:T ) dz2:T dy1 (11) = E(z̃2:T ,ỹ1:T )∼qZ,Y T∑ t=1 log p(xt | ỹt,w)−DKL ( q(y1 | x1:T ) ∥∥ p(y1)) + Eỹ1∼q(y1 | x1:T ) [∫ · · · ∫ z2:T q(z2:T | x1:T , ỹ1) log p(z2:T | ỹ1) q(z2:T | x1:T , ỹ1) dz2:T ] (12) = E(z̃2:T ,ỹ1:T )∼qZ,Y T∑ t=1 log p(xt | ỹt,w)−DKL ( q(y1 | x1:k) ∥∥ p(y1)) + Eỹ1∼q(y1 | x1:k) [∫ · · · ∫ z2:T q(z2:T | x1:T , ỹ1) log p(z2:T | ỹ1) q(z2:T | x1:T , ỹ1) dz2:T ] (13) = E(z̃2:T ,ỹ1:T )∼qZ,Y T∑ t=1 log p(xt | ỹt,w)−DKL ( q(y1 | x1:k) ∥∥ p(y1)) + Eỹ1∼q(y1 | x1:k) ∫ · · · ∫ z2:T T∏ t=2 q(zt | x1:t) T∑ t=2 log p(zt | ỹ1, z2:t−1) q(zt | x1:t) dz2:T (14) = E(z̃2:T ,ỹ1:T )∼qZ,Y T∑ t=1 log p(xt | ỹt,w)−DKL ( q(y1 | x1:k) ∥∥ p(y1)) − Eỹ1∼q(y1 | x1:k)DKL ( q(z2 | x1:t) ∥∥ p(z2 | ỹ1)) + Eỹ1∼q(y1 | x1:k)Ez̃2∼q(z2 | x1:2)∫ · · · ∫ z3:T T∏ t=3 q(zt | x1:t) T∑ t=3 log p(zt | y1, z̃2:t−1) q(zt | x1:t) dz3:T , (15) where: • Equation (13) follows from the inference model of Equation (5), where y1 only depends on x1:k; • Equation (14) is obtained from the factorizations of Equations (4) and (5). By iterating Equation (15)’s step on z3, . . . ,zT and factorizing all expectations, we obtain: (16) log p(x1:T | w) ≥ E(z̃2:T ,ỹ1:T )∼qZ,Y T∑ t=1 log p(xt | ỹt,w)−DKL ( q(y1 | x1:k) ∥∥ p(y1)) − Eỹ1∼q(y1 | xc) ( Ez̃t∼q(zt | x1:t) )T t=2 T∑ t=2 DKL ( q(zt | x1:t) ∥∥ p(zt | ỹ1, z̃1:t−1)), (17) and we finally retrieve Equation (6) by using the factorization of Equation (5): log p(x1:T | w) ≥ E(z̃2:T ,ỹ1:T )∼qZ,Y T∑ t=1 log p(xt | ỹt,w)−DKL ( q(y1 | x1:k) ∥∥ p(y1)) − E(z̃2:T ,ỹ1:T )∼qZ,Y T∑ t=2 DKL ( q(zt | x1:t) ∥∥ p(zt | ỹt−1)). (18) B DATASETS DETAILS B.1 STOCHASTIC MOVING MNIST (SM-MNIST) This dataset consists in one or two train MNIST digits (LeCun et al., 1998) of size 27× 27 moving linearly within a 64× 64 frame and randomly bounce against its border, sampling a new direction and velocity at each bounce (Denton & Fergus, 2018). We use the same settings as Denton & Fergus (2018), train all models on 15 timesteps and condition them at test time on 5 frames. Note that we adapted the dataset to sample more coherent bounces: the original dataset computes digit trajectories that are dependent on the chosen framerate, unlike our corrected version of the dataset. We consequently retrained SVG on this dataset, obtaining comparable results as those originally presented by Denton & Fergus (2018). Test data were produced by generating 5000 samples with a different digit for each sequence coming from the MNIST test set. B.2 KTH ACTION DATASET (KTH) This dataset is composed of real-world 64× 64 videos of 25 people performing one of six actions (walking, jogging, running, boxing, handwaving and handclapping) in front of different backgrounds (Schüldt et al., 2004). Uncertainty lies in the appearance of subjects, the action they perform and how it is performed. The training set is formed with actions from 20 people, the remaining five being used for testing. Training is performed by sampling sub-sequences of size 20 in the train set. The test set is composed of 1000 randomly sampled sub-sequences of size 40. B.3 BAIR ROBOT PUSHING DATASET (BAIR) This dataset contains 64× 64 videos of a Sawyer robotic arm pushing objects on a tabletop (Ebert et al., 2017). It is highly stochastic as the arm can change its direction at any moment. 
Training is performed on 12 frames, and testing is done with two conditioning frames on the provided test set, consisting of 256 sequences of 30 frames. C TRAINING DETAILS C.1 SPECIFICATIONS We used Python 3.7.4 and PyTorch 1.2.0 (Paszke et al., 2017) to implement our model. Each model was trained on an Nvidia GPU with CUDA 10 in mixed-precision training with the help of Apex (https://github.com/nvidia/apex). C.2 ARCHITECTURE Encoder and decoder architecture. Both gθ and hφ are chosen to have different architectures depending on the dataset. We used the same architectures as in Denton & Fergus (2018): a DCGAN discriminator and generator architecture (Radford et al., 2016) for Moving MNIST, and a VGG16 (Simonyan & Zisserman, 2015) architecture (mirrored for the decoder gθ) for BAIR and KTH. In both cases, the output of hφ (i.e., x̃) is a vector of size 128, and the gθ and hφ weights are initialized using a centered normal distribution with a standard deviation of 0.02. For the Moving MNIST dataset, the content variable w is obtained directly from x̃ and is thus a vector of size 128. For KTH and BAIR, we supplement this vectorial variable with skip connections from all layers of the encoder hφ that are then fed to the decoder gθ to handle complex backgrounds. For Moving MNIST, the number of frames k used to compute the content variable is 5; for KTH, it is 3; for BAIR, it is 2. LSTM architecture. The LSTM used for all datasets has a single layer of LSTM cells with a hidden state size of 256. MLP architecture. All MLPs used in inference (with parameters φ) have three linear layers with hidden size 256 and leaky ReLU activations. All MLPs used in the forward model (with parameters θ) have four linear layers with hidden size 512 and leaky ReLU activations. The weights of fθ, in particular, are orthogonally initialized with a gain of 1.41, while the other MLPs are initialized with the default weight initialization of PyTorch. Sizes of latent variables. The sizes of the latent variables in our model are the following: for Moving MNIST, y and z have size 20; for KTH and BAIR, y and z have size 50. Euler step size. All models but those trained on KTH are trained with ∆t = 1. Models on KTH are trained with ∆t = 1/2. C.3 OPTIMIZATION Loss function. All models are trained using the Adam optimizer (Kingma & Ba, 2015) with learning rate 3 × 10−4 and λ = 1. The batch size for Moving MNIST and BAIR is chosen to be 128, and the batch size for KTH is chosen to be 100. Following Higgins et al. (2017), we use β = 1 (cf. Equation (7)), except for the Moving MNIST dataset, where the β factor in front of the KL on z (last term of Equation (6)) is equal to 2. Variance of the observation. The variance ν used in the observation probability distribution G(gθ(y)) = N(gθ(y), νI) is chosen as follows:
• for Moving MNIST, ν = 1;
• for KTH, ν = 4 × 10−2;
• for BAIR, ν = 1/2.
Number of optimization steps. The number of optimization steps is the following for the different datasets:
• Moving MNIST (stochastic): 1 000 000 steps, with an additional 100 000 steps where the learning rate is linearly decreased to 0;
• Moving MNIST (deterministic): 800 000 steps, with an additional 100 000 steps where the learning rate is linearly decreased to 0;
• KTH: 200 000 steps, the final model being chosen among several checkpoints as the one having the best evaluation score (which differs from the test score, as we extract an evaluation set from the train set);
• BAIR: 250 000 steps, the final model being chosen as for KTH.
These per-dataset settings are collected in the small configuration sketch below.
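To keep the per-dataset settings listed above in one place, here is a small configuration sketch; the values are taken directly from this appendix, while the dataclass itself is purely illustrative and not part of the released code.

```python
from dataclasses import dataclass

@dataclass
class SRVPConfig:
    """Per-dataset hyperparameters collected from Appendix C (values as reported)."""
    latent_dim: int      # size of y and z
    content_frames: int  # k frames used to build the content variable w
    obs_variance: float  # nu in the Gaussian observation model
    euler_dt: float      # Euler step size used in training
    batch_size: int
    lr: float = 3e-4     # Adam learning rate, shared across datasets

CONFIGS = {
    "moving_mnist": SRVPConfig(latent_dim=20, content_frames=5, obs_variance=1.0,
                               euler_dt=1.0, batch_size=128),
    "kth":          SRVPConfig(latent_dim=50, content_frames=3, obs_variance=4e-2,
                               euler_dt=0.5, batch_size=100),
    "bair":         SRVPConfig(latent_dim=50, content_frames=2, obs_variance=0.5,
                               euler_dt=1.0, batch_size=128),
}
```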
D ADDITIONAL NUMERICAL RESULTS Tables 1 to 3 present, respectively, numerical results for PSNR, SSIM and LPIPS averaged over all time steps for our method and the considered baselines on the SM-MNIST, KTH and BAIR datasets, corresponding to Figures 2, 4 and 6. E PENDULUM EXPERIMENTS We test the ability of our model to capture the dynamics of a common dataset used in the literature of state-space models (Karl et al., 2017; Fraccaro et al., 2017), Pendulum (Karl et al., 2017). It consists of noisy observations of a dynamic torque-controlled pendulum; it is stochastic as the information of this control is not available. We test our model, without the content variable w, in the same setting as DVBF (Karl et al., 2017) and KVAE (Fraccaro et al., 2017) and report the corresponding ELBO scores in Table 4. The encoders and decoders for all methods are MLPs. Our model outperforms DVBF and is beaten only by KVAE. This can be explained by the nature of the KVAE model, whose sequential model is not learned using a VAE but a Kalman filter allowing exact inference in the latent space. On the contrary, DVBF is learned, like our model, by a sequential VAE, and is thus much closer to our model than KVAE. This result then shows that the dynamic model that we chose in the context of sequential VAEs is better adapted to this dataset than that of DVBF, and achieves results close to a method taking advantage of exact inference using adapted tools such as Kalman filters. F INFLUENCE OF THE EULER STEP SIZE Table 5 details the numerical results of our model trained on BAIR with ∆t = 1 and tested with different values of ∆t. It shows that, when refining the Euler approximation, our model can improve its performance in a setting that is unseen during training. Results stabilize when ∆t is small enough, showing that the model is close to the continuous limit. Tables 6 and 7 detail the numerical results of our model trained on KTH with, respectively, ∆t = 1 and ∆t = 1/2, and tested with different values of ∆t. They show that if ∆t is chosen too large during training (here, ∆t = 1), the model drops in performance when refining the Euler approximation. We assume that this phenomenon arises because the Euler approximation used in training is too rough, making the model adapt to a very discretized dynamic that cannot be transferred to smaller Euler step sizes. Indeed, when training with a smaller step size (here, ∆t = 1/2), results in the training setting are equivalent, while results obtained with a lower ∆t are now much closer, if not equivalent, to the nominal ones. This shows that the model learns a continuous dynamic if trained with a small enough step size. Note that the loss of performance when using a higher ∆t in testing than in training, as in Table 7, is expected, as it corresponds to loosening the Euler approximation compared to training. However, even in this adversarial setting, our model maintains state-of-the-art results, demonstrating the quality of the learned dynamic, as it can be further discretized if needed at the cost of a reasonable drop in performance. G AUTOREGRESSIVITY AND IMPACT OF ENCODER AND DECODER ARCHITECTURE Figure 10 presents the numerical results on KTH of our model and SVG for different choices of architectures: DCGAN and VGG. Since DCGAN is a less powerful architecture than VGG, the results of each method with VGG are expectedly better than those of the same method with DCGAN.
Moreover, our model outperforms SVG for any fixed choice of encoder and decoder architecture, which is consistent with Figure 4. We observe, however, that the difference between a method using VGG and its DCGAN counterpart differs depending on the model. Ours is more robust to changes of encoder and decoder architecture, as it loses much less performance than SVG when switching to a less powerful architecture. Indeed, while the difference in LPIPS is similar for both models (as expected from a score evaluating the realism of produced frames), the loss of SVG is significantly larger than ours in terms of SSIM, and in particular PSNR. This shows that reducing the capacity of the encoders and decoders of SVG not only hurts its ability to produce realistic frames, as expected, but also substantially lowers its ability to learn a good dynamic. We assume that this phenomenon is caused by the autoregressive nature of SVG, which makes it reliant on the performance of its encoders and decoders. This supports our motivation to propose a non-autoregressive model for stochastic video prediction. H ADDITIONAL SAMPLES This section includes some additional samples corresponding to the experiments described in Section 4. H.1 STOCHASTIC MOVING MNIST We present in Figures 11 to 14 additional samples from SVG and our model on SM-MNIST. In particular, Figure 13 shows SVG changing a digit's shape in the course of a prediction even though it does not cross another digit, whereas ours maintains the digit's shape. We assume that this advantage of ours comes from the latent nature of the dynamic of our model and the use in ours of a static content variable that is prevented from containing temporal information. Indeed, even when the best sample from our model is not close to the ground truth of the dataset, as in Figure 14, the shapes of the digits are still maintained by our model. H.2 KTH We present in Figures 15 to 19 additional samples from SV2P, SVG, SAVP and our model on KTH, with additional insights. H.3 BAIR We present in Figures 20 to 22 additional samples from SV2P, SVG, SAVP and our model on BAIR, with additional insights. H.4 OVERSAMPLING We present in Figure 23 additional examples of video generation at a doubled frame rate by our model. H.5 CONTENT SWAP We present in Figures 24 to 28 additional examples of content swap as in Figure 8. H.6 INTERPOLATION IN THE LATENT SPACE We present in Figures 29 and 30 additional examples of interpolation in the latent space between two trajectories; a schematic sketch of this interpolation procedure is given below.
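For completeness, the latent interpolation experiment can be sketched as follows; infer_y1, rollout and decode are assumed model methods standing in for the inference, dynamics and decoding steps, not a fixed API of the released code.

```python
import torch

def interpolate_trajectories(model, x_s, x_t, n_points=8, horizon=20):
    # Infer the latent initial conditions of two reference sequences, linearly
    # interpolate them, and decode a rollout from each interpolated initial state.
    y1_s = model.infer_y1(x_s)
    y1_t = model.infer_y1(x_t)
    videos = []
    for alpha in torch.linspace(0, 1, n_points):
        y1 = (1 - alpha) * y1_s + alpha * y1_t
        latents = model.rollout(y1, steps=horizon)
        videos.append(model.decode(latents))
    return videos
```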
1. What is the focus and contribution of the paper regarding video pixel generation?
2. What are the strengths of the proposed approach, particularly in its novelty and empirical evaluations?
3. Are there any concerns or suggestions regarding the proposed method's ability to decouple appearance and dynamics?
4. How does the reviewer assess the relevance and impact of the paper compared to prior works in video generation?
5. What are the limitations or potential improvements regarding the proposed method's application to more complex datasets?
Review
Review Contributions: this submission proposes a video pixel generation framework with the goal of decoupling visual appearance and dynamics. The latent dynamics are modeled with a latent residual dynamics model. Empirical evaluations on Moving MNIST show that the proposed residual dynamics model outperforms MLP and GRU alternatives. On the more challenging KTH and BAIR datasets, the proposed method achieves on-par or better quantitative performance compared with previous methods, and has nice qualitative results on content "swap" and dynamics interpolation. Assessment:
- To my knowledge, the proposed model is novel for video generation.
- The proposed method is supported with strong quantitative results and qualitative analysis; an ablation on Moving MNIST shows that the proposed latent residual dynamics model outperforms MLP and GRU baselines.
- The authors might be interested in related work on video generation with decoupled appearance and dynamics models, such as [1]. It would also be interesting to see an evaluation on more challenging datasets, such as Human3.6M.
- Question: how does the proposed inference framework ensure that appearance is decoupled from dynamics? Can y_i not encode the appearance information?
[1] Minderer et al., Unsupervised Learning of Object Structure and Dynamics from Videos. NeurIPS 2019.
----------------------------- Post rebuttal: Thank you for your answers to my questions and the updated manuscript. My questions have been addressed, and the additional results further confirm the performance of the proposed method. Therefore I recommend weak accept of the submission.
ICLR
Title Stochastic Latent Residual Video Prediction Abstract Video prediction is a challenging task: models have to account for the inherent uncertainty of the future. Most works in the literature are based on stochastic imageautoregressive recurrent networks, raising several performance and applicability issues. An alternative is to use fully latent temporal models which untie frame synthesis and dynamics. However, no such model for video prediction has been proposed in the literature yet, due to design and training difficulties. In this paper, we overcome these difficulties by introducing a novel stochastic temporal model. It is based on residual updates of a latent state, motivated by discretization schemes of differential equations. This first-order principle naturally models video dynamics as it allows our simpler, lightweight, interpretable, latent model to outperform prior state-of-the-art methods on challenging datasets. 1 INTRODUCTION Being able to predict the future of a video from a few conditioning frames in a self-supervised manner has many applications in fields such as reinforcement learning (Gregor et al., 2019) or robotics (Babaeizadeh et al., 2018). More generally, it challenges the ability of a model to capture visual and dynamic representations of the world. Video prediction has received a lot of attention from the computer vision community. However, most proposed methods are deterministic, reducing their ability to capture video dynamics, which are intrinsically stochastic (Denton & Fergus, 2018). Stochastic video prediction is a challenging task which has been tackled by recent works. Most state-of-the-art approaches are based on image-autoregressive models (Denton & Fergus, 2018; Babaeizadeh et al., 2018), built around Recurrent Neural Networks (RNNs), where each generated frame is fed back to the model to produce the next frame. However, performances of their temporal models innately depend on the capacity of their encoder and decoder, as each generated frame has to be re-encoded in a latent space. Such autoregressive processes induce a high computational cost, and strongly tie the frame synthesis and temporal models, which may hurt the performance of the generation process and limit its applicability (Gregor et al., 2019; Rubanova et al., 2019). An alternative approach consists in separating the dynamic of the state representations from the generated frames, which are independently decoded from the latent space. In addition to removing the aforementioned link between frame synthesis and temporal dynamics, this is computationally appealing when coupled with a low-dimensional latent-space. Moreover, such models can be used to shape a complete representation of the state of a system, e.g. for reinforcement learning applications (Gregor et al., 2019), and more interpretable than autoregressive models (Rubanova et al., 2019). Yet, these State-Space Models (SSMs) are more difficult to train as they require non-trivial latent state inference schemes (Krishnan et al., 2017) and a careful design of the dynamic model (Karl et al., 2017). This leads most successful SSMs to only be evaluated on small or artificial toy tasks. In this work, we propose a novel stochastic dynamic model for the task of video prediction which successfully leverages structural and computational advantages of SSMs that operate on low-dimensional latent spaces. The dynamic component determines the evolution through residual updates of the latent state, conditioned on learned stochastic variables. 
This formulation allows us to implement an efficient training strategy and process in an interpretable manner complex high-dimensional data such as videos. This residual principle can be linked to recent advances relating residual networks and Ordinary Differential Equations (ODEs) (Chen et al., 2018). This interpretation opens new perspectives such as generating videos at different frame rates, as demonstrated in our experiments. Overall, this approach outperforms current state-of-the-art models on the task of stochastic video prediction, as demonstrated by comparisons with competitive baselines on representative benchmarks. 2 RELATED WORK Video synthesis covers a range of different tasks, such as video-to-video translation (Wang et al., 2018), super-resolution (Caballero et al., 2017), interpolation between frames (Jiang et al., 2018), unconditonal generation (Tulyakov et al., 2018), or video prediction, which is the focus of this paper. Deterministic models. Inspired by prior sequence generation models using RNNs (Graves, 2013), a number of video prediction methods (Srivastava et al., 2015; Villegas et al., 2017; Wichers et al., 2018) rely on LSTMs (Hochreiter & Schmidhuber, 1997), or, like Ranzato et al. (2014) and Jia et al. (2016), on derived networks such as ConvLSTMs (Shi et al., 2015) taking advantage of Convolutional Neural Networks (CNNs). Indeed, computer vision approaches are usually tailored to high-dimensional video sequences and propose domain-specific techniques as they often use pixel-level transformations and optical flow (Shi et al., 2015; Walker et al., 2015; Finn et al., 2016; Jia et al., 2016; Vondrick & Torralba, 2017; Liang et al., 2017; Liu et al., 2017; Lotter et al., 2017; Lu et al., 2017a; Fan et al., 2019) that help to produce high-quality predictions. Such predictions are, however, deterministic, thus hurting their performance as they fail to generate sharp long-term video frames (Babaeizadeh et al., 2018; Denton & Fergus, 2018). Following Mathieu et al. (2016), some works proposed to use an adversarial loss (Goodfellow et al., 2014) on the predictions of their model to sharpen the generated frames (Vondrick & Torralba, 2017; Liang et al., 2017; Lu et al., 2017a; Xu et al., 2018). Nonetheless, adversarial losses are notoriously hard to train, and lead to mode collapse, preventing diversity of generations. Stochastic and image-autoregressive models. Some approaches rely on exact likelihood maximization, using pixel-level autoregressive generation (van den Oord et al., 2016; Kalchbrenner et al., 2017) or normalizing flows through invertible transformations between the observation space and a latent space (Kingma & Dhariwal, 2018; Kumar et al., 2019). However, they require careful design of complex temporal generation schemes manipulating high-dimensional data, thus inducing a prohibitive temporal generation cost. More efficient continuous models rely on Variational Auto-Encoders (VAEs) (Kingma & Welling, 2014; Rezende et al., 2014) for the inference of lowdimensional latent state variables. Except Xue et al. (2016) who learn a one-frame-ahead VAE, they model sequence stochasticity by incorporating a random latent variable per frame into a deterministic RNN-based image-autoregressive model. Babaeizadeh et al. (2018) integrate stochastic variables into the ConvLSTM architecture of Finn et al. (2016). Concurrently with He et al. (2018), Denton & Fergus (2018), with Castrejon et al. 
(2019) in a follow-up, use a prior LSTM conditioned on previously generated frames in order to sample random variables that are fed to a predictor LSTM. Finally, Lee et al. (2018) combine the ConvLSTM architecture and this learned prior, adding an adversarial loss on the predicted videos to sharpen them at the cost of a diversity drop. Yet, all these methods are image-autoregressive, as they feed their predictions back into the latent space, thus tying the frame synthesis and temporal models and increasing their computational cost. Concurrently to our work, Minderer et al. (2019) propose to use the autoregressive VRNN model (Chung et al., 2015) on learned image key-points instead of raw frames. While this change could mitigate the aforementioned problems, the extent of such mitigation is unclear. We follow a complementary approach by directly proposing a dynamic model that is state-space and acts on a small latent state, tackling these issues. State-space models. Many latent state-space models have been proposed for sequence modelization (Bayer & Osendorfer, 2014; Fraccaro et al., 2016; 2017; Krishnan et al., 2017; Karl et al., 2017; Hafner et al., 2019), usually trained by deep Variational Inference (VI). These methods, which use locally linear temporal transition functions or RNN-based dynamics, are designed for and tested on low-dimensional data, as learning such models on complex data is challenging, or focus on control or planning tasks. In contrast, our fully latent method is the first one to be successfully applied to complex high-dimensional data such as videos, thanks to a temporal model based on residual updates of its latent state. It thus falls within the scope of a recent trend linking differential equations with neural networks (Lu et al., 2017b; Long et al., 2018), leading to the integration of ODEs, that are seen as continuous residual networks, in neural network architectures (Chen et al., 2018). However, the latter work and follow-ups (Rubanova et al., 2019; Yıldız et al., 2019) are either limited to low-dimensional data, prone to overfitting or unable to handle stochasticity within a sequence. Another line of works considers stochastic differential equations (SDEs) with neural networks (Ryder et al., 2018; De Brouwer et al., 2019), but are limited to continuous Brownian noise, whereas video prediction additionally requires to model punctual stochastic events. 3 MODEL We consider the task of stochastic video prediction, consisting in approaching, given a number of conditioning video frames, the distribution of possible future frames given this conditioning. 3.1 LATENT RESIDUAL DYNAMIC MODEL Let x1:T be a sequence of T video frames. We model their evolution by introducing latent variables y that are driven by a dynamic temporal model. Each frame xt is then generated from the corresponding latent state yt only, making the dynamics independent from the previously generated frames. We propose to model the transition function of the latent dynamic of y with a stochastic residual network. State yt+1 is chosen to deterministically depend on the previous state yt, conditionally to an auxiliary random variable zt+1. These auxiliary variables encapsulate the randomness of the video dynamics. They have a learned factorized Gaussian prior that depends on the previous state only. 
The model is depicted in Figure 1a, and defined as follows:
$$y_1 \sim \mathcal{N}(0, I), \qquad z_{t+1} \sim \mathcal{N}\big(\mu_\theta(y_t), \sigma_\theta(y_t) I\big), \qquad y_{t+1} = y_t + f_\theta(y_t, z_{t+1}), \qquad x_t \sim \mathcal{G}\big(g_\theta(y_t)\big), \tag{1}$$
where µθ, σθ, fθ and gθ are neural networks, and G(gθ(yt)) is a probability distribution parameterized by gθ(yt). In our experiments, G is a normal distribution with fixed diagonal variance and mean gθ(yt). Note that y1 is assumed to have a standard Gaussian prior, and, in our VAE setting, will be inferred from conditioning frames for the prediction task, as shown in Section 3.3.

The residual update rule takes inspiration from the Euler discretization scheme of differential equations. The state of the system yt is updated by its first-order movement, i.e., the residual fθ(yt, zt+1). Compared to a regular RNN, this simple principle makes our temporal model lighter and more interpretable. Equation (1), however, differs from a discretized ODE because of the introduction of the stochastic discrete-time variables z. Nonetheless, we propose to allow the Euler step size ∆t to be smaller than 1, as a way to make the temporal model closer to a continuous dynamics. The updated dynamics becomes, with 1/∆t ∈ ℕ to synchronize the step size with the video frame rate:
$$y_{t+\Delta t} = y_t + \Delta t \cdot f_\theta\big(y_t, z_{\lfloor t \rfloor + 1}\big). \tag{2}$$
For this formulation, the auxiliary variable zt is kept constant between two integer time steps. Note that a different ∆t can be used during training or testing. This allows our model to generate videos at an arbitrary frame rate, since each intermediate latent state can be decoded in the observation space. This ability enables us to observe the quality of the learned dynamics as well as challenge its ODE inspiration by testing its generalization to the continuous limit in Section 4. In the following, we consider ∆t as a hyperparameter. For the sake of clarity, we assume that ∆t = 1 in the following; generalizing to smaller ∆t is straightforward, as Figure 1a remains unchanged.

3.2 CONTENT VARIABLE

Some components of video sequences can be static, such as the background or the shapes of moving objects. They may not impact the dynamics; we therefore model them separately, in the same spirit as Denton & Birodkar (2017) and Yingzhen & Mandt (2018). We compute a content variable w that remains constant throughout the whole generation process and is fed together with yt into the frame generator. It enables the dynamical part of the model to focus only on movement, hence being lighter and more stable. Moreover, it allows us to leverage architectural advances in neural networks, such as skip connections (Ronneberger et al., 2015), to produce more realistic frames.

This content variable is a deterministic function cψ of a fixed number k < T of frames x_c^(k):
$$x_c^{(k)} = \{x_{i_1}, \ldots, x_{i_k}\}, \qquad w = c_\psi\big(x_c^{(k)}\big) = c_\psi(x_{i_1}, \ldots, x_{i_k}), \qquad x_t \sim \mathcal{G}\big(g_\theta(y_t, w)\big). \tag{3}$$
During testing, x_c^(k) are the last k conditioning frames (usually between 2 and 5). This content variable is not endowed with any probabilistic prior, contrary to the dynamic variables y and z. Hence, the information it contains is not constrained in the loss function (see Section 3.3), but only architecturally. To prevent temporal information from leaking into w, we propose to uniformly sample these k frames within x1:T during training. We also design cψ as a permutation-invariant function (Zaheer et al., 2017), which is done by using an MLP fed with the sum of individual frame representations, similarly to Santoro et al. (2017).
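To make the generative process of Equations (1) to (3) concrete, the following is a minimal PyTorch-style sketch of the rollout with a permutation-invariant content function and an Euler step size ∆t. The module sizes, the simple MLP parameterizations, and all names are illustrative assumptions rather than the exact architecture used in the experiments (which relies on DCGAN or VGG encoders and decoders, cf. Appendix C).

```python
import torch
import torch.nn as nn

class LatentResidualGenerator(nn.Module):
    """Illustrative sketch of the generative rollout of Equations (1)-(3).
    Hypothetical module: dimensions and sub-networks are placeholder choices."""

    def __init__(self, y_dim=20, z_dim=20, w_dim=128, hidden=512):
        super().__init__()
        # Prior over z_{t+1} given y_t: outputs mean and log-variance (Eq. 1).
        self.prior = nn.Sequential(nn.Linear(y_dim, hidden), nn.LeakyReLU(),
                                   nn.Linear(hidden, 2 * z_dim))
        # First-order residual f_theta(y_t, z_{t+1}) (Eq. 1).
        self.f = nn.Sequential(nn.Linear(y_dim + z_dim, hidden), nn.LeakyReLU(),
                               nn.Linear(hidden, y_dim))
        # Frame decoder g_theta(y_t, w); a real model would output an image, not a flat vector.
        self.g = nn.Sequential(nn.Linear(y_dim + w_dim, hidden), nn.LeakyReLU(),
                               nn.Linear(hidden, 64 * 64))
        # Permutation-invariant content function c_psi: MLP on summed frame codes (Eq. 3).
        self.c = nn.Sequential(nn.Linear(w_dim, hidden), nn.LeakyReLU(),
                               nn.Linear(hidden, w_dim))

    def content(self, frame_codes):
        # frame_codes: (batch, k, w_dim) encodings of the k content frames.
        return self.c(frame_codes.sum(dim=1))

    def rollout(self, y1, w, horizon, dt=1.0):
        """Generate `horizon` frame means with Euler step size dt (Eq. 2)."""
        y, frames = y1, []
        steps_per_frame = int(round(1.0 / dt))
        for _ in range(horizon):
            mu, logvar = self.prior(y).chunk(2, dim=-1)
            z = mu + logvar.mul(0.5).exp() * torch.randn_like(mu)  # z ~ N(mu, sigma I)
            for _ in range(steps_per_frame):  # z is kept constant between integer steps
                y = y + dt * self.f(torch.cat([y, z], dim=-1))
            frames.append(self.g(torch.cat([y, w], dim=-1)))
        return torch.stack(frames, dim=1)
```

Decoding the intermediate states inside the inner loop, instead of only at integer steps, would yield the higher-frame-rate generations discussed in Section 4.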
This absence of prior and its architectural constraint allow w to contain as much non-temporal information as possible, while preventing it from containing dynamic information. On the other hand, due to their strong standard Gaussian priors, y and z are encouraged to discard unnecessary information. Therefore, y and z should only contain temporal information that could not be captured by w. Note that this content variable can be removed from our model, yielding a more classical deep state-space model. An experiment in this setting is presented in Appendix E.

3.3 VARIATIONAL INFERENCE AND ARCHITECTURE

Following the generative process depicted in Figure 1a, the conditional joint probability of the full model, given a content variable w, can be written as:
$$p(x_{1:T}, z_{2:T}, y_{1:T} \mid w) = p(y_1) \prod_{t=1}^{T-1} p(z_{t+1} \mid y_t)\, p(y_{t+1} \mid y_t, z_{t+1}) \prod_{t=1}^{T} p(x_t \mid y_t, w), \tag{4}$$
where $p(y_{t+1} \mid y_t, z_{t+1}) = \delta\big(y_t + f_\theta(y_t, z_{t+1}) - y_{t+1}\big)$ and δ is the Dirac delta function centered on 0, according to the expression of yt+1 in Equation (1). Thus, in order to optimize the likelihood of the observed videos p(x1:T | w), we need to infer the latent variables y1 and z2:T. This is done by deep Variational Inference using the inference model parameterized by φ and shown in Figure 1b, which comes down to considering a variational distribution qZ,Y defined and factorized as follows:
$$q_{Z,Y} \triangleq q(z_{2:T}, y_{1:T} \mid x_{1:T}, w) = q(y_1 \mid x_{1:k}) \prod_{t=2}^{T} q(z_t \mid x_{1:t})\, \delta\big(y_{t-1} + f_\theta(y_{t-1}, z_t) - y_t\big). \tag{5}$$
This yields the following evidence lower bound (ELBO), whose full derivation is given in Appendix A:
$$\log p(x_{1:T} \mid w) \geq \mathbb{E}_{(\tilde{z}_{2:T}, \tilde{y}_{1:T}) \sim q_{Z,Y}} \sum_{t=1}^{T} \log p(x_t \mid \tilde{y}_t, w) - D_{\mathrm{KL}}\big(q(y_1 \mid x_{1:k}) \,\big\|\, p(y_1)\big) - \mathbb{E}_{(\tilde{z}_{2:T}, \tilde{y}_{1:T}) \sim q_{Z,Y}} \sum_{t=2}^{T} D_{\mathrm{KL}}\big(q(z_t \mid x_{1:t}) \,\big\|\, p(z_t \mid \tilde{y}_{t-1})\big) \triangleq \mathcal{L}(x_{1:T}; w, \theta, \phi). \tag{6}$$
The sum of KL divergence expectations implies considering the full past sequence of inferred states for each time step, due to the dependence on the conditionally deterministic variables y2:T. However, optimizing L(x1:T; w, θ, φ) with respect to the model parameters θ and the variational parameters φ can be done efficiently by sampling a single full sequence of states from qZ,Y per example, and computing gradients by backpropagation (Rumelhart et al., 1988) through all inferred variables, using the reparametrization trick (Kingma & Welling, 2014; Rezende et al., 2014). We classically choose q(y1 | x1:k) and q(zt | x1:t) to be factorized Gaussians so that all KL divergences can be computed analytically.

We include an ℓ2 regularization term on the residuals fθ which stabilizes the temporal dynamics of the residual network, as noted by Behrmann et al. (2019) and Rousseau et al. (2019). Given a set of videos X, the full optimization problem, where L is defined as in Equation (6), is then given as:
$$\arg\max_{\theta, \phi, \psi} \; \sum_{x \in \mathcal{X}} \Big[ \mathbb{E}_{x_c^{(k)}} \mathcal{L}\big(x_{1:T}; c_\psi(x_c^{(k)}), \theta, \phi\big) - \lambda \cdot \mathbb{E}_{(z_{2:T}, y_{1:T}) \sim q_{Z,Y}} \sum_{t=2}^{T} \big\| f_\theta(y_{t-1}, z_t) \big\|_2^2 \Big]. \tag{7}$$
Figure 1c depicts the full architecture of our temporal model, corresponding to how the model is applied during testing. The first latent variables are inferred from the conditioning frames, and the following ones are then predicted with the dynamic model. In contrast, during training, each frame of the input sequence is considered for inference, which is done as follows. Firstly, each frame xt is independently encoded into a vector-valued representation x̃t, with x̃t = hφ(xt). y1 is then inferred using an MLP on the first k encoded frames x̃1:k. Each zt is inferred in a feed-forward fashion with an LSTM on the encoded frames.
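A simplified sketch of this training objective, combining the inference scheme above with Equations (6) and (7), is given below in PyTorch. It computes the negative of the regularized ELBO, to be minimized; the sub-network names attached to `model` (q_y1, q_z_lstm, q_z, prior, f, g) are hypothetical interfaces introduced only for illustration, not the exact implementation.

```python
import torch
from torch.distributions import Normal, kl_divergence

def elbo_loss(model, x_codes, x, w, k=5, lam=1.0, obs_std=1.0):
    """Simplified sketch of the objective of Eqs. (6)-(7): reconstruction,
    KL terms, and l2 penalty on residuals. x_codes: (B, T, d) frame encodings
    h_phi(x_t); x: (B, T, D) flattened target frames."""
    B, T, _ = x_codes.shape
    # q(y_1 | x_{1:k}): Gaussian parameterized by an MLP on the first k encoded frames.
    mu1, logvar1 = model.q_y1(x_codes[:, :k].flatten(1)).chunk(2, dim=-1)
    q_y1 = Normal(mu1, logvar1.mul(0.5).exp())
    y = q_y1.rsample()
    # q(z_t | x_{1:t}): filtering LSTM over the encoded frames.
    h, _ = model.q_z_lstm(x_codes)
    rec, kl_z, res_norm = 0.0, 0.0, 0.0
    for t in range(T):
        # Reconstruction term: -log p(x_t | y_t, w) under a fixed-variance Gaussian.
        rec = rec - Normal(model.g(y, w), obs_std).log_prob(x[:, t]).sum(-1).mean()
        if t + 1 < T:
            # KL between the inferred posterior and the learned prior p(z | y).
            mu_q, logvar_q = model.q_z(h[:, t + 1]).chunk(2, dim=-1)
            mu_p, logvar_p = model.prior(y).chunk(2, dim=-1)
            q_z = Normal(mu_q, logvar_q.mul(0.5).exp())
            p_z = Normal(mu_p, logvar_p.mul(0.5).exp())
            kl_z = kl_z + kl_divergence(q_z, p_z).sum(-1).mean()
            z = q_z.rsample()
            res = model.f(torch.cat([y, z], dim=-1))
            res_norm = res_norm + res.pow(2).sum(-1).mean()  # l2 penalty on residuals
            y = y + res                                      # deterministic residual update
    kl_y1 = kl_divergence(q_y1, Normal(torch.zeros_like(mu1),
                                       torch.ones_like(mu1))).sum(-1).mean()
    return rec + kl_y1 + kl_z + lam * res_norm
```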
Inferring z this way experimentally performs better than, e.g., inferring it from the whole sequence x1:T; we hypothesize that this follows from the fact that this filtering scheme is closer to the prediction setting, where the future is not available.

4 EXPERIMENTS

This section presents the experimental results of our method on three standard stochastic video prediction datasets.1 We compare our method with state-of-the-art baselines on stochastic video prediction. Furthermore, we qualitatively study the dynamics and latent space learned by our model. Training details are described in Appendix C.

1 Code, video samples, and datasets are available at https://sites.google.com/view/srvp/.

The stochastic nature and novelty of the task of stochastic video prediction make it challenging to evaluate (Lee et al., 2018): since videos and models are stochastic, comparing the ground truth and a single predicted video is not adequate. We thus adopt the common approach (Denton & Fergus, 2018; Lee et al., 2018) consisting in, for each test sequence, sampling from the tested model a given number (here, 100) of possible futures and reporting the best performing sample against the true video. We report this discrepancy for three commonly used metrics: Peak Signal-to-Noise Ratio (PSNR, higher is better), Structured Similarity (SSIM, higher is better), and Learned Perceptual Image Patch Similarity (LPIPS, lower is better) (Zhang et al., 2018). PSNR tends to promote blurry predictions, as it is a pixel-level measure derived from the ℓ2 distance, but greatly penalizes errors in the predicted positions of objects in the scene. SSIM is a similarity metric between image patches. LPIPS is a learned distance between activations of deep CNNs trained on image classification tasks, and has been shown to better correlate with human judgment on real images. While these three metrics are computed frame-wise, the recently proposed Fréchet Video Distance (FVD, lower is better) (Unterthiner et al., 2018) aims at directly comparing the distribution of predicted videos with the ground truth distribution through the representations computed by a deep CNN trained on action recognition tasks. It has been shown, independently from LPIPS, to better correlate with human judgment than PSNR and SSIM. We treat all four metrics as complementary, as they capture different aspects of prediction quality. PSNR challenges the dynamics of the predicted videos, while SSIM rather compares local frame patches but loses some dynamics information. LPIPS and FVD both measure the realism of the predictions compared to the ground truth. FVD considers videos as a whole, making it more capable of detecting temporal inconsistencies. On the other hand, the frame-wise LPIPS metric penalizes more strongly the temporal drifts of videos, since it directly compares each predicted frame to the corresponding ground truth frame.

We present experimental results on a simulated dataset and two real-world datasets, which we briefly present in the following and detail in Appendix B. The corresponding numerical results can be found in Appendix D. For the sake of concision, we only display a handful of qualitative samples in this section, and refer to Appendix H for additional samples. We compare our model against several state-of-the-art models: SV2P (Babaeizadeh et al., 2018), SVG (Denton & Fergus, 2018) and SAVP (Lee et al., 2018). All baseline results were obtained with pretrained models released by the authors.
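The best-of-100 evaluation protocol described above can be summarized with a short sketch; PSNR is used as the example metric, and `model.predict` is a hypothetical sampling interface, not an API defined in the paper.

```python
import torch

def best_of_n_psnr(model, cond_frames, gt_future, n_samples=100, max_val=1.0):
    """Sketch of the best-of-N evaluation: sample several possible futures and
    keep, for each test sequence, the score of the best one against the ground truth."""
    best = None
    for _ in range(n_samples):
        pred = model.predict(cond_frames, horizon=gt_future.shape[1])  # hypothetical API
        mse = ((pred - gt_future) ** 2).flatten(2).mean(dim=2)         # (B, T) per-frame MSE
        psnr = 10 * torch.log10(max_val ** 2 / mse)                    # per-frame PSNR
        score = psnr.mean(dim=1)                                       # average over time
        best = score if best is None else torch.max(best, score)
    return best.mean()  # average of per-sequence best scores
```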
Note that we use the same neural architecture as SVG for our encoders and decoders in order to perform a fair comparison with this method, which is the closest to ours among the state of the art. Unless specified otherwise, our model is tested with the same ∆t as in training (see Equation (2)).

Stochastic Moving MNIST (SM-MNIST). This dataset consists of one or two MNIST digits (LeCun et al., 1998) moving linearly and randomly bouncing on walls, with a new direction and velocity sampled randomly at each bounce (Denton & Fergus, 2018). As SV2P and SAVP were not tested on this dataset (in particular, no pretrained model, code or hyperparameters are available), we only report scores for SVG as the state-of-the-art model on SM-MNIST. Figure 2a shows quantitative results with two digits. Our model outperforms SVG on both PSNR and SSIM; LPIPS and FVD are not reported as they are not relevant for this synthetic task. Decoupling dynamics from image synthesis allows our method to maintain temporal consistency despite high-uncertainty frames where crossing digits become indistinguishable. For instance, in Figure 3, the digits' shapes change after they cross in the SVG prediction, while our model predicts the correct digits.

To evaluate the predictive ability on a longer horizon, we perform experiments on the classic deterministic version of the dataset (Srivastava et al., 2015). We show the results up to t + 95 in Figure 2b. We can see that our model better captures the dynamics of the problem compared to SVG, as its performance decreases significantly less, even at a long-term horizon. We also compare to two alternative versions of our model in Figure 2, where the residual dynamic function is replaced by an MLP or a GRU network (Cho et al., 2014). Our residual model outperforms both versions on the stochastic, and especially on the deterministic version of the dataset, showing its intrinsic advantage at modeling dynamics. Finally, on the deterministic version of Moving MNIST, we compare to an alternative where z is entirely removed, resulting in a temporal model very close to the one presented in Chen et al. (2018). The loss of performance of this alternative model is significant, especially in SSIM, showing that our stochastic residual model offers a substantial advantage even when used in a deterministic environment.

KTH Action dataset (KTH). This dataset is composed of real-world videos of people performing a single action per video in front of different backgrounds (Schüldt et al., 2004). Uncertainty lies in the appearance of the subjects, the actions they perform and how they are performed. We outperform every considered baseline on this dataset for each metric, as depicted in Figure 4 and Table 2. In some videos, the subject only appears after the conditioning frames, requiring the model to sample the moment and location of the subject's appearance, as well as its action. This critical case is illustrated in Figure 5. There, SVG fails to even generate a moving person; only SAVP and our model manage to do so, and our best sample is closer to the subject's poses than SAVP's. Moreover, the worst and a random sample of our model demonstrate that it captures the diversity of the dataset by making a person appear at different time steps and with different speeds. An additional experiment on this dataset is included in Appendix G, studying the influence of the encoder and decoder architecture on SVG and our model. Finally, Table 2 compares our method to its MLP and GRU alternative versions, leading to two conclusions.
Firstly, it confirms the structural advantage of residual dynamics observed on Moving MNIST. On the one hand, MLP better captures dynamics than GRU on KTH according to PSNR and SSIM, but loses in terms of realism according to LPIPS and FVD. On the other hand, the residual version shows a slight dynamics improvement with respect to both MLP and GRU, while substantially improving prediction realism. Secondly, all three versions of our model (residual, MLP, GRU) outperform prior methods. This improvement is therefore due to their common inference method, latent nature and content variable, strengthening our motivation to propose a non-autoregressive model.

BAIR robot pushing dataset (BAIR). This dataset contains videos of a Sawyer robotic arm pushing objects on a tabletop (Ebert et al., 2017). It is highly stochastic as the arm can change its direction at any moment. We achieve similar or better results compared to state-of-the-art models, as Figure 6 and Table 3 show. Our PSNR is second-best behind SV2P, but the latter produces very blurry samples, which can be seen in Appendix H, yielding prohibitive LPIPS and FVD scores. In contrast, we achieve the highest SSIM overall, as well as state-of-the-art LPIPS and competitive FVD among these models. Note that we could not add VideoFlow to our experiments, due to the unavailability of pretrained models and numerical results. However, compared to the PSNR, SSIM and LPIPS results reported by Kumar et al. (2019) for BAIR (the only tested dataset and metrics in their paper), our model appears to behave better than VideoFlow, which is on par with SAVP on these metrics.

Varying frame rate in testing. We challenge the ability of our model to use a different Euler step size than the one used in training (see Equation (2)). Figures 4 and 6 include corresponding results with a halved ∆t. Prediction performance remains stable while generating twice as many frames (cf. Appendix F for further discussion). Our model is thus robust to the refinement of the Euler approximation, showing the quality of the learned dynamics, which are close to continuous. In particular, this shows that our model learned dynamics driven by a piecewise ODE, i.e., the learned dynamics of each interval between two consecutive frames follow an ODE, as a constant z is given on such an interval. This can be used to generate frames at a higher frame rate than the training videos without supervision. We show in Figure 7 and Appendix F frames generated at a double and quadruple frame rate on BAIR and KTH. Both figures show smooth intermediate generated frames.

Disentangling dynamics and content. Let us show that the proposed model actually separates content from dynamics, as discussed in Section 3.2. To this end, two sequences x^s and x^t are drawn from the BAIR test set. While x^s is used for extracting our content variable w^s, dynamic states y^t are inferred with our model from x^t. New frame sequences x̂ are finally generated from the fusion of the content vector and the dynamics. This results in a content corresponding to the first sequence x^s while moving according to the dynamics of the second sequence x^t, as observed in Figure 8. More samples for BAIR and KTH can be seen in Appendix H.

Interpolation of dynamics. Our state-space structure allows us to learn semantic representations in yt. To highlight this feature, we test whether two Moving MNIST trajectories can be interpolated by linearly interpolating their inferred latent initial conditions.
We begin by generating two trajectories x^s and x^t of a single moving digit. We infer their respective latent initial conditions y_1^s and y_1^t. We then use our model to generate frame sequences from latent initial conditions linearly interpolated between y_1^s and y_1^t. If the model has learned a meaningful latent space, the resulting trajectories should also be smooth interpolations between the directions of the reference trajectories x^s and x^t, and this is what we observe in Figure 9. Additional examples can be found in Appendix H.

5 CONCLUSION

We introduce a novel dynamic latent model for stochastic video prediction which, unlike prior image-autoregressive models, decouples frame synthesis and dynamics. This temporal model is based on residual updates of a small latent state and is shown to perform better than RNN-based models. This endows our method with several desirable properties, such as temporal efficiency and latent space interpretability. We experimentally demonstrate the performance and advantages of the proposed model, which outperforms prior state-of-the-art methods for stochastic video prediction. This work is, to the best of our knowledge, the first to propose a latent dynamic model that scales to video prediction. The proposed model is also novel with respect to the recent line of work dealing with neural networks and ODEs for temporal modeling; it is the first such residual model to scale to complex stochastic data such as videos. We believe that the general principles of our model (state-space structure, residual dynamics, static content variable) can be applied to other models as well. Interesting future work includes replacing the VRNN model of Minderer et al. (2019) in order to model the evolution of key-points, or leveraging the state-space nature of our model in model-based reinforcement learning.

A EVIDENCE LOWER BOUND

We develop in this section the computation of the variational lower bound for the proposed model. Using the original variational lower bound of Kingma & Welling (2014) in Equation (8):
$$\log p(x_{1:T} \mid w) \geq \mathbb{E}_{(\tilde{z}_{2:T}, \tilde{y}_{1:T}) \sim q_{Z,Y}} \log p(x_{1:T} \mid \tilde{z}_{2:T}, \tilde{y}_{1:T}, w) - D_{\mathrm{KL}}\big(q_{Z,Y} \,\big\|\, p(y_{1:T}, z_{2:T} \mid w)\big) \tag{8}$$
$$= \mathbb{E}_{(\tilde{z}_{2:T}, \tilde{y}_{1:T}) \sim q_{Z,Y}} \log p(x_{1:T} \mid \tilde{z}_{2:T}, \tilde{y}_{1:T}, w) - D_{\mathrm{KL}}\big(q(y_1, z_{2:T} \mid x_{1:T}) \,\big\|\, p(y_1, z_{2:T})\big) \tag{9}$$
$$= \mathbb{E}_{(\tilde{z}_{2:T}, \tilde{y}_{1:T}) \sim q_{Z,Y}} \sum_{t=1}^{T} \log p(x_t \mid \tilde{y}_t, w) - D_{\mathrm{KL}}\big(q(y_1, z_{2:T} \mid x_{1:T}) \,\big\|\, p(y_1, z_{2:T})\big), \tag{10}$$
where:
• Equation (9) is given by the forward and inference models factorizing p and q in Equations (4) and (5), illustrated by, respectively, Figures 1a and 1b:
  – the z variables and y1 are independent from w with respect to both p and q;
  – the y2:T variables are deterministic functions of y1 and z2:T with respect to both p and q;
• Equation (10) results from the factorization of p(x1:T | y1:T, z1:T, w) in Equation (4).
From there, by using the integral formulation of DKL:
$$\log p(x_{1:T} \mid w) \geq \mathbb{E}_{(\tilde{z}_{2:T}, \tilde{y}_{1:T}) \sim q_{Z,Y}} \sum_{t=1}^{T} \log p(x_t \mid \tilde{y}_t, w) + \int \cdots \int_{y_1, z_{2:T}} q(y_1, z_{2:T} \mid x_{1:T}) \log \frac{p(y_1, z_{2:T})}{q(y_1, z_{2:T} \mid x_{1:T})} \, \mathrm{d}z_{2:T} \, \mathrm{d}y_1 \tag{11}$$
$$= \mathbb{E}_{(\tilde{z}_{2:T}, \tilde{y}_{1:T}) \sim q_{Z,Y}} \sum_{t=1}^{T} \log p(x_t \mid \tilde{y}_t, w) - D_{\mathrm{KL}}\big(q(y_1 \mid x_{1:T}) \,\big\|\, p(y_1)\big) + \mathbb{E}_{\tilde{y}_1 \sim q(y_1 \mid x_{1:T})} \left[ \int \cdots \int_{z_{2:T}} q(z_{2:T} \mid x_{1:T}, \tilde{y}_1) \log \frac{p(z_{2:T} \mid \tilde{y}_1)}{q(z_{2:T} \mid x_{1:T}, \tilde{y}_1)} \, \mathrm{d}z_{2:T} \right] \tag{12}$$
$$= \mathbb{E}_{(\tilde{z}_{2:T}, \tilde{y}_{1:T}) \sim q_{Z,Y}} \sum_{t=1}^{T} \log p(x_t \mid \tilde{y}_t, w) - D_{\mathrm{KL}}\big(q(y_1 \mid x_{1:k}) \,\big\|\, p(y_1)\big) + \mathbb{E}_{\tilde{y}_1 \sim q(y_1 \mid x_{1:k})} \left[ \int \cdots \int_{z_{2:T}} q(z_{2:T} \mid x_{1:T}, \tilde{y}_1) \log \frac{p(z_{2:T} \mid \tilde{y}_1)}{q(z_{2:T} \mid x_{1:T}, \tilde{y}_1)} \, \mathrm{d}z_{2:T} \right] \tag{13}$$
$$= \mathbb{E}_{(\tilde{z}_{2:T}, \tilde{y}_{1:T}) \sim q_{Z,Y}} \sum_{t=1}^{T} \log p(x_t \mid \tilde{y}_t, w) - D_{\mathrm{KL}}\big(q(y_1 \mid x_{1:k}) \,\big\|\, p(y_1)\big) + \mathbb{E}_{\tilde{y}_1 \sim q(y_1 \mid x_{1:k})} \int \cdots \int_{z_{2:T}} \prod_{t=2}^{T} q(z_t \mid x_{1:t}) \sum_{t=2}^{T} \log \frac{p(z_t \mid \tilde{y}_1, z_{2:t-1})}{q(z_t \mid x_{1:t})} \, \mathrm{d}z_{2:T} \tag{14}$$
$$= \mathbb{E}_{(\tilde{z}_{2:T}, \tilde{y}_{1:T}) \sim q_{Z,Y}} \sum_{t=1}^{T} \log p(x_t \mid \tilde{y}_t, w) - D_{\mathrm{KL}}\big(q(y_1 \mid x_{1:k}) \,\big\|\, p(y_1)\big) - \mathbb{E}_{\tilde{y}_1 \sim q(y_1 \mid x_{1:k})} D_{\mathrm{KL}}\big(q(z_2 \mid x_{1:2}) \,\big\|\, p(z_2 \mid \tilde{y}_1)\big) + \mathbb{E}_{\tilde{y}_1 \sim q(y_1 \mid x_{1:k})} \mathbb{E}_{\tilde{z}_2 \sim q(z_2 \mid x_{1:2})} \int \cdots \int_{z_{3:T}} \prod_{t=3}^{T} q(z_t \mid x_{1:t}) \sum_{t=3}^{T} \log \frac{p(z_t \mid \tilde{y}_1, \tilde{z}_{2:t-1})}{q(z_t \mid x_{1:t})} \, \mathrm{d}z_{3:T}, \tag{15}$$
where:
• Equation (13) follows from the inference model of Equation (5), where y1 only depends on x1:k;
• Equation (14) is obtained from the factorizations of Equations (4) and (5).
By iterating Equation (15)'s step on z3, ..., zT and factorizing all expectations, we obtain:
$$\log p(x_{1:T} \mid w) \geq \mathbb{E}_{(\tilde{z}_{2:T}, \tilde{y}_{1:T}) \sim q_{Z,Y}} \sum_{t=1}^{T} \log p(x_t \mid \tilde{y}_t, w) - D_{\mathrm{KL}}\big(q(y_1 \mid x_{1:k}) \,\big\|\, p(y_1)\big) - \mathbb{E}_{\tilde{y}_1 \sim q(y_1 \mid x_{1:k})} \Big( \mathbb{E}_{\tilde{z}_t \sim q(z_t \mid x_{1:t})} \Big)_{t=2}^{T} \sum_{t=2}^{T} D_{\mathrm{KL}}\big(q(z_t \mid x_{1:t}) \,\big\|\, p(z_t \mid \tilde{y}_1, \tilde{z}_{2:t-1})\big), \tag{16, 17}$$
and we finally retrieve Equation (6) by using the factorization of Equation (5):
$$\log p(x_{1:T} \mid w) \geq \mathbb{E}_{(\tilde{z}_{2:T}, \tilde{y}_{1:T}) \sim q_{Z,Y}} \sum_{t=1}^{T} \log p(x_t \mid \tilde{y}_t, w) - D_{\mathrm{KL}}\big(q(y_1 \mid x_{1:k}) \,\big\|\, p(y_1)\big) - \mathbb{E}_{(\tilde{z}_{2:T}, \tilde{y}_{1:T}) \sim q_{Z,Y}} \sum_{t=2}^{T} D_{\mathrm{KL}}\big(q(z_t \mid x_{1:t}) \,\big\|\, p(z_t \mid \tilde{y}_{t-1})\big). \tag{18}$$

B DATASETS DETAILS

B.1 STOCHASTIC MOVING MNIST (SM-MNIST)
This dataset consists of one or two train MNIST digits (LeCun et al., 1998) of size 27 × 27 moving linearly within a 64 × 64 frame and randomly bouncing against its borders, sampling a new direction and velocity at each bounce (Denton & Fergus, 2018). We use the same settings as Denton & Fergus (2018), train all models on 15 time steps and condition them at test time on 5 frames. Note that we adapted the dataset to sample more coherent bounces: the original dataset computes digit trajectories that are dependent on the chosen frame rate, unlike our corrected version of the dataset. We consequently retrained SVG on this dataset, obtaining results comparable to those originally presented by Denton & Fergus (2018). Test data were produced by generating 5000 samples with a different digit for each sequence coming from the MNIST test set.

B.2 KTH ACTION DATASET (KTH)
This dataset is composed of real-world 64 × 64 videos of 25 people performing one of six actions (walking, jogging, running, boxing, handwaving and handclapping) in front of different backgrounds (Schüldt et al., 2004). Uncertainty lies in the appearance of the subjects, the action they perform and how it is performed. The training set is formed with actions from 20 people, the remaining five being used for testing. Training is performed by sampling sub-sequences of size 20 from the training set. The test set is composed of 1000 randomly sampled sub-sequences of size 40.

B.3 BAIR ROBOT PUSHING DATASET (BAIR)
This dataset contains 64 × 64 videos of a Sawyer robotic arm pushing objects on a tabletop (Ebert et al., 2017). It is highly stochastic as the arm can change its direction at any moment.
Training is performed on 12 frames, and testing is done with two conditioning frames on the provided test set, consisting of 256 sequences of 30 frames.

C TRAINING DETAILS

C.1 SPECIFICATIONS
We used Python 3.7.4 and PyTorch 1.2.0 (Paszke et al., 2017) to implement our model. Each model was trained on an Nvidia GPU with CUDA 10 in mixed-precision training with the help of Apex.2

2 https://github.com/nvidia/apex.

C.2 ARCHITECTURE
Encoder and decoder architecture. Both gθ and hφ are chosen to have different architectures depending on the dataset. We used the same architectures as in Denton & Fergus (2018): a DCGAN discriminator and generator architecture (Radford et al., 2016) for Moving MNIST, and a VGG16 (Simonyan & Zisserman, 2015) architecture (mirrored for hφ) for BAIR and KTH. In both cases, the output of hφ (i.e., x̃) is a vector of size 128, and gθ and hφ weights are initialized using a centered normal distribution with a standard deviation of 0.02. For the Moving MNIST dataset, the content variable w is obtained directly from x̃ and is thus a vector of size 128. For KTH and BAIR, we supplement this vectorial variable with skip connections from all layers of the encoder hφ that are then fed to the decoder gθ to handle complex backgrounds. For Moving MNIST, the number of frames k used to compute the content variable is 5; for KTH, it is 3; for BAIR, it is 2.

LSTM architecture. The LSTM used for all datasets has a single layer of LSTM cells with a hidden state size of 256.

MLP architecture. All MLPs used in inference (with parameters φ) have three linear layers with hidden size 256 and leaky ReLU activations. All MLPs used in the forward model (with parameters θ) have four linear layers with hidden size 512 and leaky ReLU activations. Weights of fθ, in particular, are orthogonally initialized with a gain of 1.41, while the other MLPs are initialized with the default weight initialization of PyTorch.

Sizes of latent variables. The sizes of the latent variables in our model are the following: for Moving MNIST, y and z have size 20; for KTH and BAIR, y and z have size 50.

Euler step size. All models but those trained on KTH are trained with ∆t = 1. Models on KTH are trained with ∆t = 1/2.

C.3 OPTIMIZATION
Loss function. All models are trained using the Adam optimizer (Kingma & Ba, 2015) with learning rate 3 × 10−4 and λ = 1. The batch size for Moving MNIST and BAIR is chosen to be 128, and the batch size for KTH is chosen to be 100. Following Higgins et al. (2017), we use β = 1 (cf. Equation (7)), except for the Moving MNIST dataset, where the β factor in front of the KL on z (last term of Equation (6)) is equal to 2.

Variance of the observation. The variance ν used in the observation probability distribution G(gθ(y)) = N(gθ(y), νI) is chosen as follows:
• for Moving MNIST, ν = 1;
• for KTH, ν = 4 × 10−2;
• for BAIR, ν = 1/2.

Number of optimization steps. The number of optimization steps is the following for the different datasets:
• Moving MNIST (stochastic): 1 000 000 steps with an additional 100 000 steps where the learning rate is linearly decreased to 0;
• Moving MNIST (deterministic): 800 000 steps with an additional 100 000 steps where the learning rate is linearly decreased to 0;
• KTH: 200 000 steps, the final model being chosen among several checkpoints as the one having the best evaluation score (which differs from the test score, as we extract an evaluation set from the training set);
• BAIR: 250 000 steps, the final model being chosen as for KTH.
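For reference, the dataset-specific settings listed in this appendix can be gathered into a small configuration sketch; the dictionary layout and the helper function are assumptions made for illustration, only the numerical values are taken from the text above.

```python
import torch

# Settings reported in Appendix C, assembled per dataset (hypothetical layout).
CONFIG = {
    "smmnist": {"batch_size": 128, "obs_var": 1.0,  "dt": 1.0, "y_dim": 20, "z_dim": 20},
    "kth":     {"batch_size": 100, "obs_var": 4e-2, "dt": 0.5, "y_dim": 50, "z_dim": 50},
    "bair":    {"batch_size": 128, "obs_var": 0.5,  "dt": 1.0, "y_dim": 50, "z_dim": 50},
}

def make_optimizer(model, lr=3e-4):
    # Adam with learning rate 3e-4, as used for all datasets.
    return torch.optim.Adam(model.parameters(), lr=lr)
```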
D ADDITIONAL NUMERICAL RESULTS
Tables 1 to 3 present, respectively, numerical results for PSNR, SSIM and LPIPS averaged over all time steps for our methods and the considered baselines on the SM-MNIST, KTH and BAIR datasets, corresponding to Figures 2, 4 and 6.

E PENDULUM EXPERIMENTS
We test the ability of our model to capture the dynamics of a common dataset used in the literature of state-space models (Karl et al., 2017; Fraccaro et al., 2017), Pendulum (Karl et al., 2017). It consists of noisy observations of a dynamic torque-controlled pendulum; it is stochastic as the information of this control is not available. We test our model, without the content variable w, in the same setting as DVBF (Karl et al., 2017) and KVAE (Fraccaro et al., 2017) and report the corresponding ELBO scores in Table 4. The encoders and decoders for all methods are MLPs. Our model outperforms DVBF and is only narrowly beaten by KVAE. This can be explained by the nature of the KVAE model, whose sequential model is not learned using a VAE but a Kalman filter allowing exact inference in the latent space. On the contrary, DVBF is learned, like our model, by a sequential VAE, and is thus much closer to our model than KVAE. This result then shows that the dynamic model that we chose in the context of sequential VAEs is better suited to this dataset than the one of DVBF, and achieves results close to a method taking advantage of exact inference using adapted tools such as Kalman filters.

F INFLUENCE OF THE EULER STEP SIZE
Table 5 details the numerical results of our model trained on BAIR with ∆t = 1 and tested with different values of ∆t. It shows that, when refining the Euler approximation, our model can improve its performance in a setting that is unseen during training. Results stabilize when ∆t is small enough, showing that the model is close to the continuous limit. Tables 6 and 7 detail the numerical results of our model trained on KTH with, respectively, ∆t = 1 and ∆t = 1/2, and tested with different values of ∆t. They show that if ∆t is chosen too high when training (here, ∆t = 1), the model drops in performance when refining the Euler approximation. We assume that this phenomenon arises because the Euler approximation used in training is too rough, making the model adapt to a very discretized dynamic that cannot be transferred to smaller Euler step sizes. Indeed, when training with a smaller step size (here, ∆t = 1/2), results in the training setting are equivalent, while results obtained with a lower ∆t are now much closer, if not equivalent, to the nominal ones. This shows that the model learns a continuous dynamic if trained with a small enough step size. Note that the loss of performance when using a higher ∆t in testing than in training, as in Table 7, is expected, as it corresponds to loosening the Euler approximation compared to training. However, even in this adversarial setting, our model maintains state-of-the-art results, demonstrating the quality of the learned dynamics, which can be further discretized if needed at the cost of a reasonable drop in performance.

G AUTOREGRESSIVITY AND IMPACT OF ENCODER AND DECODER ARCHITECTURE
Figure 10 reports the numerical results on KTH of our model and SVG for different choices of architectures: DCGAN and VGG. Since DCGAN is a less powerful architecture than VGG, results of each method with VGG are expectedly better than those of the same method with DCGAN.
Moreover, our model outperforms SVG for any fixed choice of encoder and decoder architecture, which is consistent with Figure 4. We observe, however, that the gap between a method using VGG and its DCGAN counterpart differs depending on the model. Ours is more robust to changes in the encoder and decoder architecture, as it loses much less performance than SVG when switching to a less powerful architecture. Indeed, while the difference in LPIPS is similar for both models (as expected from a score evaluating the realism of produced frames), the loss of SVG is significantly larger than ours in terms of SSIM, and in particular PSNR. This shows that reducing the capacity of the encoders and decoders of SVG not only hurts its ability to produce realistic frames, as expected, but also substantially lowers its ability to learn good dynamics. We assume that this phenomenon is caused by the autoregressive nature of SVG, which makes it reliant on the performance of its encoders and decoders. This supports our motivation to propose a non-autoregressive model for stochastic video prediction.

H ADDITIONAL SAMPLES
This section includes additional samples corresponding to the experiments described in Section 4.

H.1 STOCHASTIC MOVING MNIST
We present in Figures 11 to 14 additional samples from SVG and our model on SM-MNIST. In particular, Figure 13 shows SVG changing a digit's shape in the course of a prediction even though it does not cross another digit, whereas ours maintains the digit's shape. We assume that this advantage comes from the latent nature of the dynamics of our model and from the use of a static content variable that is prevented from containing temporal information. Indeed, even when the best sample from our model is not close to the ground truth of the dataset, as in Figure 14, the shapes of the digits are still maintained by our model.

H.2 KTH
We present in Figures 15 to 19 additional samples from SV2P, SVG, SAVP and our model on KTH, with additional insights.

H.3 BAIR
We present in Figures 20 to 22 additional samples from SV2P, SVG, SAVP and our model on BAIR, with additional insights.

H.4 OVERSAMPLING
We present in Figure 23 additional examples of video generation at a doubled frame rate by our model.

H.5 CONTENT SWAP
We present in Figures 24 to 28 additional examples of content swap, as in Figure 8.

H.6 INTERPOLATION IN THE LATENT SPACE
We present in Figures 29 and 30 additional examples of interpolation in the latent space between two trajectories.
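The latent interpolation experiment of Appendix H.6 and Figure 9 can be sketched as follows; `infer_y1`, `content`, and `rollout` are hypothetical interfaces standing in for the corresponding components of the model, not functions defined in the paper.

```python
import torch

def interpolate_trajectories(model, x_src, x_tgt, horizon, n_points=5):
    """Sketch of latent-space interpolation: linearly interpolate the inferred
    initial states of two sequences and generate a trajectory from each point."""
    y_src = model.infer_y1(x_src)   # inferred initial condition of the source sequence
    y_tgt = model.infer_y1(x_tgt)   # inferred initial condition of the target sequence
    w = model.content(x_src)        # content variable taken from the source sequence
    videos = []
    for alpha in torch.linspace(0.0, 1.0, n_points):
        y1 = (1 - alpha) * y_src + alpha * y_tgt
        videos.append(model.rollout(y1, w, horizon))
    return videos
```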
1. What is the main contribution of the paper regarding video prediction using State-Space Models?
2. What are the strengths and weaknesses of the proposed method compared to prior works?
3. How does the reviewer assess the novelty and advantages of the proposed approach?
4. What are some concerns regarding the experiments and comparisons made in the paper?
5. Are there any suggestions or recommendations for improving the paper?
Review
Review Summary: The paper proposes a video prediction method based on State-Space Models. The paper describes two main contributions:
1. By learning dynamics in the latent state space, the method avoids the high computational cost and accumulating image reconstruction errors of autoregressive models that condition on generated frames.
2. To model dynamics, the paper proposes a residual update rule inspired by Euler's method to solve ODEs. According to this rule, the update to the latent state y_t is modeled as an additive residual f(y_t, z_{t+1}). This has the advantage that the step size of the discretization can be adjusted freely, e.g. between training and inference.
The paper provides extensive experimental comparison of their model to the SVG and SAVP models on several standard datasets. The paper further contains experiments illustrating features of the model such as disentangling dynamics and content, and interpolation of dynamics in the latent space.

Decision: The paper is written clearly and the mathematical treatment and experiments appear rigorous. The idea of predicting video using state-space models is interesting and promising. However, as described below, the paper overstates its novelty and falls short of showing the advantages of the method beyond incremental improvements on frame-wise image quality metrics. I therefore suggest rejection in its current version.

Supporting arguments and suggestions:
1. The idea to use fully latent models for video prediction, to untie frame synthesis and dynamics, is not new and the paper does not fully cite this literature. For example, [1] and [2] perform unsupervised, non-autoregressive video prediction. The differences to these models should be discussed.
2. The advantages of the residual update rule are not made clear enough. The parallels to the ODE literature seem tenuous. The main advantage described in the paper is the ability to synthesize videos at different frame rates, but interpolation over such short time horizons is not a hard problem. At least, the paper should compare to existing methods for frame interpolation. Apart from interpolation (variable step size), it appears that the update rule could be changed from y_{t+1} = y_t + f(y_t, z_{t+1}) to y_{t+1} = f(y_t, z_{t+1}) without impact to the model. How is it different from the standard VRNN formulation [3]? More experiments to show the advantage of the proposed update rule would be helpful.
3. Some of the experiments seem like interesting starting points but do not support general claims. For example, Fig 2 (b) shows that the proposed dynamics model is better than an MLP or GRU on deterministic Moving MNIST, but is this also true on real datasets, which have much more complex dynamics? Similarly, the interpolation in Figure 9 is intriguing, but it would be helpful to describe and test how this ability is useful for applications of the predictive model.
4. The comparisons use frame-wise metrics of image quality (PSNR, SSIM, LPIPS). Even though they are common in the literature, these metrics are unsuitable for comparing long video sequences due to their stochasticity. The metrics are probably dominated by relatively uninteresting features such as the quality of the static background. Metrics for comparing entire videos exist (e.g. FVD [4]) and should be used. Even better, the paper should demonstrate the usefulness of the model for downstream tasks such as reinforcement learning, although I understand that this may be out of scope.
Minor comments:
- As far as I know, the correct term for error terms is residual, not residue.
- What do the error bars in the figures show? Please add this information to the figure legends.

[1] Wichers et al., 2018, https://arxiv.org/pdf/1806.04768.pdf
[2] Minderer et al., 2019, https://arxiv.org/abs/1906.07889
[3] Chung et al., 2015, https://arxiv.org/abs/1506.02216
[4] Unterthiner et al., 2018, https://arxiv.org/abs/1812.01717
This formulation allows us to implement an efficient training strategy and process in an interpretable manner complex high-dimensional data such as videos. This residual principle can be linked to recent advances relating residual networks and Ordinary Differential Equations (ODEs) (Chen et al., 2018). This interpretation opens new perspectives such as generating videos at different frame rates, as demonstrated in our experiments. Overall, this approach outperforms current state-of-the-art models on the task of stochastic video prediction, as demonstrated by comparisons with competitive baselines on representative benchmarks. 2 RELATED WORK Video synthesis covers a range of different tasks, such as video-to-video translation (Wang et al., 2018), super-resolution (Caballero et al., 2017), interpolation between frames (Jiang et al., 2018), unconditonal generation (Tulyakov et al., 2018), or video prediction, which is the focus of this paper. Deterministic models. Inspired by prior sequence generation models using RNNs (Graves, 2013), a number of video prediction methods (Srivastava et al., 2015; Villegas et al., 2017; Wichers et al., 2018) rely on LSTMs (Hochreiter & Schmidhuber, 1997), or, like Ranzato et al. (2014) and Jia et al. (2016), on derived networks such as ConvLSTMs (Shi et al., 2015) taking advantage of Convolutional Neural Networks (CNNs). Indeed, computer vision approaches are usually tailored to high-dimensional video sequences and propose domain-specific techniques as they often use pixel-level transformations and optical flow (Shi et al., 2015; Walker et al., 2015; Finn et al., 2016; Jia et al., 2016; Vondrick & Torralba, 2017; Liang et al., 2017; Liu et al., 2017; Lotter et al., 2017; Lu et al., 2017a; Fan et al., 2019) that help to produce high-quality predictions. Such predictions are, however, deterministic, thus hurting their performance as they fail to generate sharp long-term video frames (Babaeizadeh et al., 2018; Denton & Fergus, 2018). Following Mathieu et al. (2016), some works proposed to use an adversarial loss (Goodfellow et al., 2014) on the predictions of their model to sharpen the generated frames (Vondrick & Torralba, 2017; Liang et al., 2017; Lu et al., 2017a; Xu et al., 2018). Nonetheless, adversarial losses are notoriously hard to train, and lead to mode collapse, preventing diversity of generations. Stochastic and image-autoregressive models. Some approaches rely on exact likelihood maximization, using pixel-level autoregressive generation (van den Oord et al., 2016; Kalchbrenner et al., 2017) or normalizing flows through invertible transformations between the observation space and a latent space (Kingma & Dhariwal, 2018; Kumar et al., 2019). However, they require careful design of complex temporal generation schemes manipulating high-dimensional data, thus inducing a prohibitive temporal generation cost. More efficient continuous models rely on Variational Auto-Encoders (VAEs) (Kingma & Welling, 2014; Rezende et al., 2014) for the inference of lowdimensional latent state variables. Except Xue et al. (2016) who learn a one-frame-ahead VAE, they model sequence stochasticity by incorporating a random latent variable per frame into a deterministic RNN-based image-autoregressive model. Babaeizadeh et al. (2018) integrate stochastic variables into the ConvLSTM architecture of Finn et al. (2016). Concurrently with He et al. (2018), Denton & Fergus (2018), with Castrejon et al. 
(2019) in a follow-up, use a prior LSTM conditioned on previously generated frames in order to sample random variables that are fed to a predictor LSTM. Finally, Lee et al. (2018) combine the ConvLSTM architecture and this learned prior, adding an adversarial loss on the predicted videos to sharpen them at the cost of a diversity drop. Yet, all these methods are image-autoregressive, as they feed their predictions back into the latent space, thus tying the frame synthesis and temporal models and increasing their computational cost. Concurrently to our work, Minderer et al. (2019) propose to use the autoregressive VRNN model (Chung et al., 2015) on learned image key-points instead of raw frames. While this change could mitigate the aforementioned problems, the extent of such mitigation is unclear. We follow a complementary approach by directly proposing a dynamic model that is state-space and acts on a small latent state, tackling these issues. State-space models. Many latent state-space models have been proposed for sequence modelization (Bayer & Osendorfer, 2014; Fraccaro et al., 2016; 2017; Krishnan et al., 2017; Karl et al., 2017; Hafner et al., 2019), usually trained by deep Variational Inference (VI). These methods, which use locally linear temporal transition functions or RNN-based dynamics, are designed for and tested on low-dimensional data, as learning such models on complex data is challenging, or focus on control or planning tasks. In contrast, our fully latent method is the first one to be successfully applied to complex high-dimensional data such as videos, thanks to a temporal model based on residual updates of its latent state. It thus falls within the scope of a recent trend linking differential equations with neural networks (Lu et al., 2017b; Long et al., 2018), leading to the integration of ODEs, that are seen as continuous residual networks, in neural network architectures (Chen et al., 2018). However, the latter work and follow-ups (Rubanova et al., 2019; Yıldız et al., 2019) are either limited to low-dimensional data, prone to overfitting or unable to handle stochasticity within a sequence. Another line of works considers stochastic differential equations (SDEs) with neural networks (Ryder et al., 2018; De Brouwer et al., 2019), but are limited to continuous Brownian noise, whereas video prediction additionally requires to model punctual stochastic events. 3 MODEL We consider the task of stochastic video prediction, consisting in approaching, given a number of conditioning video frames, the distribution of possible future frames given this conditioning. 3.1 LATENT RESIDUAL DYNAMIC MODEL Let x1:T be a sequence of T video frames. We model their evolution by introducing latent variables y that are driven by a dynamic temporal model. Each frame xt is then generated from the corresponding latent state yt only, making the dynamics independent from the previously generated frames. We propose to model the transition function of the latent dynamic of y with a stochastic residual network. State yt+1 is chosen to deterministically depend on the previous state yt, conditionally to an auxiliary random variable zt+1. These auxiliary variables encapsulate the randomness of the video dynamics. They have a learned factorized Gaussian prior that depends on the previous state only. 
The model is depicted in Figure 1a, and defined as follows: y1 ∼ N (0, I), zt+1 ∼ N ( µθ(yt), σθ(yt)I ) , yt+1 = yt + fθ(yt, zt+1), xt ∼ G ( gθ(yt) ) , (1) where µθ, σθ, fθ and gθ are neural networks, and G ( gθ(yt) ) is a probability distribution parameterized by gθ(yt). In our experiments, G is a normal distribution with fixed diagonal variance and mean gθ(yt). Note that y1 is assumed to have a standard Gaussian prior, and, in our VAE setting, will be inferred from conditioning frames for the prediction task, as shown in Section 3.3. The residual update rule takes inspiration in the Euler discretization scheme of differential equations. The state of the system yt is updated by its first-order movement, i.e., the residual fθ(yt, zt+1). Compared to a regular RNN, this simple principle makes our temporal model lighter and more interpretable. Equation (1), however, differs from a discretized ODE because of the introduction of the stochastic discrete-time variables z. Nonetheless, we propose to allow the Euler step size ∆t to be smaller than 1, as a way to make the temporal model closer to a continuous dynamics. The updated dynamics becomes, with 1∆t ∈ N to synchronize the step size with the video frame rate: yt+∆t = yt + ∆t · fθ ( yt, zbtc+1 ) . (2) For this formulation, the auxiliary variable zt is kept constant between two integer time steps. Note that a different ∆t can be used during training or testing. This allows our model to generate videos at an arbitrary frame rate since each intermediate latent state can be decoded in the observation space. This ability enables us to observe the quality of the learned dynamic as well as challenge its ODE inspiration by testing its generalization to the continuous limit in Section 4. In the following, we consider ∆t as a hyperparameter. For the sake of clarity, we consider that ∆t = 1 in the following; generalizing to smaller ∆t is straightforward as Figure 1a remains unchanged. 3.2 CONTENT VARIABLE Some components of video sequences can be static, such as the background or shapes of moving objects. They may not impact the dynamics; we therefore model them separately, in the same spirit as Denton & Birodkar (2017) and Yingzhen & Mandt (2018). We compute a content variable w that remains constant throughout the whole generation process and is fed together with yt into the frame generator. It enables the dynamical part of the model to focus only on movement, hence being lighter and more stable. Moreover, it allows us to leverage architectural advances in neural networks, such as skip connections (Ronneberger et al., 2015), to produce more realistic frames. This content variable is a deterministic function cψ of a fixed number k < T of frames x (k) c : x(k)c = xi1 , . . . ,xik , w = cψ ( x(k)c ) = cψ ( xi1 , . . . ,xik ) , xt ∼ G ( gθ(yt,w) ) . (3) During testing, x(k)c are the last k conditioning frames (usually between 2 and 5). This content variable is not endowed with any probabilistic prior, contrary to the dynamic variables y and z. Hence, the information it contains is not constrained in the loss function (see Section 3.3), but only architecturally. To prevent temporal information from leaking in w, we propose to uniformly sample these k frames within x1:T during training. We also design cψ as a permutation-invariant function (Zaheer et al., 2017), which is done by using an MLP fed with the sum of individual frame representations, similarly to Santoro et al. (2017). 
This absence of prior and its architectural constraint allows w to contain as much non-temporal information as possible, while preventing it from containing dynamic information. On the other hand, due to their strong standard Gaussian priors, y and z are encouraged to discard unnecessary information. Therefore, y and z should only contain temporal information that could not be captured by w. Note that this content variable can be removed from our model, yielding a more classical deep state-space model. An experiment in this setting is presented in Appendix E. 3.3 VARIATIONAL INFERENCE AND ARCHITECTURE Following the generative process depicted in Figure 1a, the conditional joint probability of the full model, given a content variable w, can be written as: p(x1:T , z2:T ,y1:T | w) = p(y1) T−1∏ t=1 p(zt+1 | yt)p(yt+1 | yt, zt+1) T∏ t=1 p(xt | yt,w), (4) where p(yt+1 | yt, zt+1) = δ ( yt + fθ(yt, zt+1)− yt+1 ) and δ is the Dirac delta function centered on 0, according to the expression of yt+1 in Equation (1). Thus, in order to optimize the likelihood of the observed videos p(x1:T | w), we need to infer latent variables y1 and z2:T . This is done by deep Variational Inference using the inference model parameterized by φ and shown in Figure 1b, which comes down to consider a variational distribution qZ,Y defined and factorized as follows: qZ,Y , q(z2:T ,y1:T | x1:T ,w) = q(y1 | x1:k) T∏ t=2 q(zt | x1:t)δ ( yt−1 + fθ(yt−1, zt)− yt ) . (5) This yields the following evidence lower bound (ELBO), whose full derivation is given in Appendix A: log p(x1:T | w) ≥ E(z̃2:T ,ỹ1:T )∼qZ,Y T∑ t=1 log p(xt | ỹt,w)−DKL ( q(y1 | x1:k) ∥∥ p(y1)) − E(z̃2:T ,ỹ1:T )∼qZ,Y T∑ t=2 DKL ( q(zt | x1:t) ∥∥ p(zt | ỹt−1)) , L(x1:T ;w, θ, φ). (6) The sum of KL divergence expectations implies to consider the full past sequence of inferred states for each time step, due to the dependence on conditionally deterministic variables y2:T . However, optimizing L(x1:T ;w, θ, φ) with respect to model parameters θ and variational parameters φ can be done efficiently by sampling a single full sequence of states from qZ,Y per example, and computing gradients by backpropagation (Rumelhart et al., 1988) trough all inferred variables, using the reparametrization trick (Kingma & Welling, 2014; Rezende et al., 2014). We classically choose q(y1 | x1:k) and q(zt | x1:t) to be factorized Gaussian so that all KLDs can be computed analytically. We include an `2 regularization term on residuals fθ which stabilizes the temporal dynamics of the residual network, as noted by Behrmann et al. (2019) and Rousseau et al. (2019). Given a set of videos X , the full optimization problem, where L is defined as in Equation (6), is then given as: arg max θ,φ,ψ ∑ x∈X E x (k) c L ( x1:T ; cψ ( x(k)c ) , θ, φ ) − λ · E(z2:T ,y1:T )∼qZ,Y T∑ t=2 ∥∥fθ(yt−1, zt)∥∥2 . (7) Figure 1c depicts the full architecture of our temporal model, corresponding to how the model is applied during testing. The first latent variables are inferred with the conditioning framed and are then predicted with the dynamic model. In contrast, during training, each frame of the input sequence is considered for inference, which is done as follows. Firstly, each frame xt is independently encoded into a vector-valued representation x̃t, with x̃t = hφ(xt). y1 is then inferred using an MLP on the first k encoded frames x̃1:k. Each zt is inferred in a feed-forward fashion with an LSTM on the encoded frames. 
Inferring z this way experimentally performs better than, e.g., inferring them from the whole sequence x1:T ; we hypothesize that this follows from the fact that this filtering scheme is closer to the prediction setting, where the future is not available. 4 EXPERIMENTS This section exposes the experimental results of our method on three standard stochastic video prediction datasets.1 We compare our method with state-of-the-art baselines on stochastic video prediction. Furthermore, we qualitatively study the dynamics and latent space learned by our model. Training details are described in Appendix C. The stochastic nature and novelty of the task of stochastic video prediction make it challenging to evaluate (Lee et al., 2018): since videos and models are stochastic, comparing the ground truth and a predicted video is not adequate. We thus adopt the common approach (Denton & Fergus, 2018; Lee et al., 2018) consisting in, for each test sequence, sampling from the tested model a given number (here, 100) of possible futures and reporting the best performing sample against the true video. We report this discrepancy for three commonly used metrics: Peak Signal-to-Noise Ratio (PSNR, higher is better), Structured Similarity (SSIM, higher is better), and Learned Perceptual Image Patch Similarity (LPIPS, lower is better) (Zhang et al., 2018). PSNR tends to promote blurry predictions, as it is a pixel-level measure derived from the `2 distance, but greatly penalizes errors in predicted positions of objects in the scenes. SSIM is a similarity metric between image patches. LPIPS is a learned distance between activations of deep CNNs trained on image classification tasks, and have been shown to better correlate with human judgment on real images. While these three metrics are computed frame-wise, the recently proposed Fréchet Video Distance (FVD, lower is better) (Unterthiner et al., 2018) aims at directly comparing the distribution of predicted videos with the ground truth distribution through the representations computed by a deep CNN trained on action 1Code, video samples, and datasets are available at https://sites.google.com/view/srvp/. recognition tasks. It has been shown, independently from LPIPS, to better correlate with human judgment than PSNR and SSIM. We treat all four metrics as complementary, as they capture different modalities. PSNR challenges the dynamics of the predicted videos, while SSIM rather compares local frame patches but loses some dynamics information. LPIPS and FVD both measure the realism of the predictions compared to the ground truth. FVD considers videos as a whole, making it more capable of detecting temporal inconsistencies. On the other hand, the frame-wise LPIPS metric penalizes more the temporal drifts of videos, since it directly compares each predicted and ground truth frame. We present experimental results on a simulated dataset and two real-world datasets, that we briefly present in the following and detail in Appendix B. The corresponding numerical results can be found in Appendix D. For the sake of concision, we only display a handful of qualitative samples in this section, and refer to Appendix H for additional samples. We compare our model against several state-of-the-art models: SV2P (Babaeizadeh et al., 2018), SVG (Denton & Fergus, 2018) and SAVP (Lee et al., 2018). All baseline results were obtained with pretrained models released by the authors. 
Note that we use the same neural architecture as SVG for our encoders and decoders in order to perform fair comparisons with this method, which is the closest to ours among the state of the art. Unless specified otherwise, our model is tested with the same ∆t as in training (see Equation (2)).

Stochastic Moving MNIST (SM-MNIST). This dataset consists of one or two MNIST digits (LeCun et al., 1998) moving linearly and randomly bouncing on walls, with a new direction and velocity sampled randomly at each bounce (Denton & Fergus, 2018). As SV2P and SAVP were not tested on this dataset (in particular, no pretrained model, code, or hyperparameters are available), we only report scores for SVG as the state-of-the-art model on SM-MNIST. Figure 2a shows quantitative results with two digits. Our model outperforms SVG on both PSNR and SSIM; LPIPS and FVD are not reported as they are not relevant for this synthetic task. Decoupling dynamics from image synthesis allows our method to maintain temporal consistency despite high-uncertainty frames where crossing digits become indistinguishable. For instance, in Figure 3, the digits' shapes change after they cross in the SVG prediction, while our model predicts the correct digits. To evaluate the predictive ability on a longer horizon, we perform experiments on the classic deterministic version of the dataset (Srivastava et al., 2015). We show the results up to t + 95 in Figure 2b. We can see that our model better captures the dynamics of the problem compared to SVG, as its performance decreases significantly less, even at a long-term horizon. We also compare to two alternative versions of our model in Figure 2, where the residual dynamic function is replaced by an MLP or a GRU network (Cho et al., 2014). Our residual model outperforms both versions on the stochastic, and especially on the deterministic, version of the dataset, showing its intrinsic advantage at modeling dynamics. Finally, on the deterministic version of Moving MNIST, we compare to an alternative where z is entirely removed, resulting in a temporal model very close to the one presented in Chen et al. (2018). The loss of performance of this alternative model is significant, especially in SSIM, showing that our stochastic residual model offers a substantial advantage even when used in a deterministic environment.

KTH Action dataset (KTH). This dataset is composed of real-world videos of people performing a single action per video in front of different backgrounds (Schüldt et al., 2004). Uncertainty lies in the appearance of subjects, the actions they perform and how they are performed. We outperform every considered baseline on this dataset for each metric, as depicted in Figure 4 and Table 2. In some videos, the subject only appears after the conditioning frames, requiring the model to sample the moment and location of the subject's appearance, as well as its action. This critical case is illustrated in Figure 5. There, SVG fails to even generate a moving person; only SAVP and our model manage to do so, and our best sample is closer to the subject's poses compared to SAVP. Moreover, the worst and a random sample of our model demonstrate that it captures the diversity of the dataset by making a person appear at different time steps and with different speeds. An additional experiment on this dataset is included in Appendix G, studying the influence of the encoder and decoder architecture on SVG and our model. Finally, Table 2 compares our method to its MLP and GRU alternative versions, leading to two conclusions.
Firstly, it confirms the structural advantage of residual dynamics observed on Moving MNIST. On one hand, MLP better captures dynamics than GRU on KTH according to PSNR and SSIM, but loses in terms of realism according to LPIPS and FVD. On the other hand, the residual version shows a slight dynamics improvement with respect to both MLP and GRU, while substantially pushing further prediction realism. Secondly, all three versions of our model (residual, MLP, GRU) outperform prior methods. Therefore, this improvement is due to their common inference method, latent nature and content variable, strengthening our motivation to propose a non-autoregressive model. BAIR robot pushing dataset (BAIR). This dataset contains videos of a Sawyer robotic arm pushing objects on a tabletop (Ebert et al., 2017). It is highly stochastic as the arm can change its direction at any moment. We achieve similar or better results compared to state-of-the-art models, as Figure 6 and Table 3 shows, and second-best PSNR behind SV2P, but the latter produces very blurry samples, which can be seen in Appendix H, yielding prohibitive LPIPS and FVD scores. In contrast, we achieve the highest SSIM overall, as well as state-of-the-art LPIPS and competitive FVD among these models. Note that we could not add VideoFlow to our experiments, due to the unavailability of pretrained models and numerical results. However, compared to PSNR, SSIM and LPIPS results reported by Kumar et al. (2019) for BAIR (the only tested dataset and metrics in their paper), our model appears to behave better than VideoFlow, which is on par with SAVP on these metrics. Varying frame rate in testing. We challenge the ability of our model to use a different Euler step size than the one used in training (see Equation (2)). Figures 4 and 6 include corresponding results with a halved ∆t. Prediction performances remain stable while generating twice as many frames (cf. Appendix F for further discussion). Our model is thus robust to the refinement of the Euler approximation, showing the quality of the learned dynamic which is close to continuous. In particular, this shows that our model learned a dynamic driven by a piecewise ODE, i.e., the learned dynamic of each interval between two consecutive frames is an ODE, as a constant z is given on such interval. This can be used to generate frames at a higher frame rate than the training videos without supervision. We show in Figure 7 and Appendix F frames generated at a double and quadruple frame rate on BAIR and KTH. Both figures show smooth intermediate generated frames. Disentangling dynamics and content. Let us show that the proposed model actually separates content from dynamics as discussed in Section 3.2. To this end, two sequences xs and xt are drawn from the BAIR test set. While xs is used for extracting our content variable ws, dynamic states yt are inferred with our model from xt. New frame sequences x̂ are finally generated from the fusion of the content vector and the dynamics. This results in a content corresponding to the first sequence xs while moving according to the dynamics of the second sequence xt, as observed in Figure 8. More samples for BAIR and KTH can be seen in Appendix H. Interpolation of dynamics. Our state-space structure allows us to learn semantic representations in yt. To highlight this feature, we test whether two Moving MNIST trajectories can be interpolated by linearly interpolating their inferred latent initial conditions. 
We begin by generating two trajectories xs and xt of a single moving digit. We infer their respective latent initial conditions ys1 and y t 1. We then use our model to generate frame sequences from latent initial conditions linearly interpolated between ys1 and y t 1. If it learned a meaningful latent space, the resulting trajectories should also be smooth interpolations between the directions of reference trajectories xs and xt, and this is what we observe in Figure 9. Additional examples can be found in Appendix H. 5 CONCLUSION We introduce a novel dynamic latent model for stochastic video prediction which, unlike prior imageautoregressive models, decouples frame synthesis and dynamics. This temporal model is based on residual updates of a small latent state that is showed to perform better than RNN-based models. This endows our method with several desirable properties, such as temporal efficiency and latent space interpretability. We experimentally demonstrate the performance and advantages of the proposed model, which outperforms prior state-of-the-art methods for stochastic video prediction. This work is, to the best of our knowledge, the first to propose a latent dynamic model scaling for video prediction. The proposed model is also novel with respect to the recent line of work dealing with neural networks and ODEs for temporal modeling; it is the first such residual model to scale to complex stochastic data such as videos. We believe that the general principles of our model (state-space, residual dynamic, static content variable) can be generally applied to other models as well. Interesting future works include replacing the VRNN model of Minderer et al. (2019) in order to model the evolution of key-points, or leveraging the state-space nature of our model in model-based reinforcement learning. A EVIDENCE LOWER BOUND We develop in this section the computations of the variational lower bound for the proposed model. Using the original variational lower bound of Kingma & Welling (2014) in Equation (8): log p(x1:T | w) ≥ E(z̃2:T ,ỹ1:T )∼qZ,Y log p(x1:T | z̃2:T , ỹ1:T ,w)−DKL ( qZ,Y ∥∥ p(y1:T , z2:T | w)) (8) = E(z̃2:T ,ỹ1:T )∼qZ,Y log p(x1:T | z̃2:T , ỹ1:T ,w)−DKL ( q(y1, z2:T | x1:T ) ∥∥ p(y1, z2:T )) (9) = E(z̃2:T ,ỹ1:T )∼qZ,Y T∑ t=1 log p(xt | ỹt,w)−DKL ( q(y1, z2:T | x1:T ) ∥∥ p(y1, z2:T )), (10) where: • Equation (9) is given by the forward and inference models factorizing p and q in Equations (4) and (5) and illustrated by, respectively, Figures 1a and 1b: – the z variables and y1 are independent from w to p and q; – the y2:T variables are deterministic functions of y1 and z2:T with respect to p and q; • Equation (10) results from the factorization of p(x1:T | y1:T , z1:T ,w) in Equation (4). 
From there, by using the integral formulation of DKL: log p(x1:T | w) ≥ E(z̃2:T ,ỹ1:T )∼qZ,Y T∑ t=1 log p(xt | ỹt,w) + ∫ · · · ∫ y1,z2:T q(y1, z2:T | x1:T ) log p(y1, z2:T ) q(y1, z2:T | x1:T ) dz2:T dy1 (11) = E(z̃2:T ,ỹ1:T )∼qZ,Y T∑ t=1 log p(xt | ỹt,w)−DKL ( q(y1 | x1:T ) ∥∥ p(y1)) + Eỹ1∼q(y1 | x1:T ) [∫ · · · ∫ z2:T q(z2:T | x1:T , ỹ1) log p(z2:T | ỹ1) q(z2:T | x1:T , ỹ1) dz2:T ] (12) = E(z̃2:T ,ỹ1:T )∼qZ,Y T∑ t=1 log p(xt | ỹt,w)−DKL ( q(y1 | x1:k) ∥∥ p(y1)) + Eỹ1∼q(y1 | x1:k) [∫ · · · ∫ z2:T q(z2:T | x1:T , ỹ1) log p(z2:T | ỹ1) q(z2:T | x1:T , ỹ1) dz2:T ] (13) = E(z̃2:T ,ỹ1:T )∼qZ,Y T∑ t=1 log p(xt | ỹt,w)−DKL ( q(y1 | x1:k) ∥∥ p(y1)) + Eỹ1∼q(y1 | x1:k) ∫ · · · ∫ z2:T T∏ t=2 q(zt | x1:t) T∑ t=2 log p(zt | ỹ1, z2:t−1) q(zt | x1:t) dz2:T (14) = E(z̃2:T ,ỹ1:T )∼qZ,Y T∑ t=1 log p(xt | ỹt,w)−DKL ( q(y1 | x1:k) ∥∥ p(y1)) − Eỹ1∼q(y1 | x1:k)DKL ( q(z2 | x1:t) ∥∥ p(z2 | ỹ1)) + Eỹ1∼q(y1 | x1:k)Ez̃2∼q(z2 | x1:2)∫ · · · ∫ z3:T T∏ t=3 q(zt | x1:t) T∑ t=3 log p(zt | y1, z̃2:t−1) q(zt | x1:t) dz3:T , (15) where: • Equation (13) follows from the inference model of Equation (5), where y1 only depends on x1:k; • Equation (14) is obtained from the factorizations of Equations (4) and (5). By iterating Equation (15)’s step on z3, . . . ,zT and factorizing all expectations, we obtain: (16) log p(x1:T | w) ≥ E(z̃2:T ,ỹ1:T )∼qZ,Y T∑ t=1 log p(xt | ỹt,w)−DKL ( q(y1 | x1:k) ∥∥ p(y1)) − Eỹ1∼q(y1 | xc) ( Ez̃t∼q(zt | x1:t) )T t=2 T∑ t=2 DKL ( q(zt | x1:t) ∥∥ p(zt | ỹ1, z̃1:t−1)), (17) and we finally retrieve Equation (6) by using the factorization of Equation (5): log p(x1:T | w) ≥ E(z̃2:T ,ỹ1:T )∼qZ,Y T∑ t=1 log p(xt | ỹt,w)−DKL ( q(y1 | x1:k) ∥∥ p(y1)) − E(z̃2:T ,ỹ1:T )∼qZ,Y T∑ t=2 DKL ( q(zt | x1:t) ∥∥ p(zt | ỹt−1)). (18) B DATASETS DETAILS B.1 STOCHASTIC MOVING MNIST (SM-MNIST) This dataset consists in one or two train MNIST digits (LeCun et al., 1998) of size 27× 27 moving linearly within a 64× 64 frame and randomly bounce against its border, sampling a new direction and velocity at each bounce (Denton & Fergus, 2018). We use the same settings as Denton & Fergus (2018), train all models on 15 timesteps and condition them at test time on 5 frames. Note that we adapted the dataset to sample more coherent bounces: the original dataset computes digit trajectories that are dependent on the chosen framerate, unlike our corrected version of the dataset. We consequently retrained SVG on this dataset, obtaining comparable results as those originally presented by Denton & Fergus (2018). Test data were produced by generating 5000 samples with a different digit for each sequence coming from the MNIST test set. B.2 KTH ACTION DATASET (KTH) This dataset is composed of real-world 64× 64 videos of 25 people performing one of six actions (walking, jogging, running, boxing, handwaving and handclapping) in front of different backgrounds (Schüldt et al., 2004). Uncertainty lies in the appearance of subjects, the action they perform and how it is performed. The training set is formed with actions from 20 people, the remaining five being used for testing. Training is performed by sampling sub-sequences of size 20 in the train set. The test set is composed of 1000 randomly sampled sub-sequences of size 40. B.3 BAIR ROBOT PUSHING DATASET (BAIR) This dataset contains 64× 64 videos of a Sawyer robotic arm pushing objects on a tabletop (Ebert et al., 2017). It is highly stochastic as the arm can change its direction at any moment. 
Training is performed on 12 frames and testing is done with two conditioning frames on the provided test set, consisting of 256 sequences of 30 frames.

C TRAINING DETAILS

C.1 SPECIFICATIONS

We used Python 3.7.4 and PyTorch 1.2.0 (Paszke et al., 2017) to implement our model. Each model was trained on an Nvidia GPU with CUDA 10 in mixed-precision training with the help of Apex (https://github.com/nvidia/apex).

C.2 ARCHITECTURE

Encoder and decoder architecture. Both gθ and hφ are chosen to have different architectures depending on the dataset. We used the same architectures as in Denton & Fergus (2018): a DCGAN discriminator and generator architecture (Radford et al., 2016) for Moving MNIST, and a VGG16 (Simonyan & Zisserman, 2015) architecture (mirrored for hφ) for BAIR and KTH. In both cases, the output of hφ (i.e., x̃) is a vector of size 128, and gθ and hφ weights are initialized using a centered normal distribution with a standard deviation of 0.02. For the Moving MNIST dataset, the content variable w is obtained directly from x̃ and is thus a vector of size 128. For KTH and BAIR, we supplement this vectorial variable with skip connections from all layers of the encoder gθ that are then fed to the decoder hφ to handle complex backgrounds. For Moving MNIST, the number of frames k used to compute the content variable is 5; for KTH, it is 3; for BAIR, it is 2.

LSTM architecture. The LSTM used for all datasets has a single layer of LSTM cells with a hidden state size of 256.

MLP architecture. All MLPs used in inference (with parameters φ) have three linear layers with hidden size 256 and leaky ReLU activations. All MLPs used in the forward model (with parameters θ) have four linear layers with hidden size 512 and leaky ReLU activations. Weights of fθ, in particular, are orthogonally initialized with a gain of 1.41, while the other MLPs are initialized with the default weight initialization of PyTorch.

Sizes of latent variables. The sizes of the latent variables in our model are the following: for Moving MNIST, y and z have size 20; for KTH and BAIR, y and z have size 50.

Euler step size. All models but those trained on KTH are trained with ∆t = 1. Models on KTH are trained with ∆t = 1/2.

C.3 OPTIMIZATION

Loss function. All models are trained using the Adam optimizer (Kingma & Ba, 2015) with learning rate 3 × 10^−4 and λ = 1. The batch size for Moving MNIST and BAIR is chosen to be 128, and the batch size for KTH is chosen to be 100. Following Higgins et al. (2017), we use β = 1 (cf. Equation (7)), except for the Moving MNIST dataset where the β factor in front of the KL on z (last term of Equation (6)) is equal to 2. A schematic assembly of this objective is sketched at the end of this appendix.

Variance of the observation. The variance ν used in the observation probability distribution G(gθ(y)) = N(gθ(y), νI) is chosen as follows:
• for Moving MNIST, ν = 1;
• for KTH, ν = 4 × 10^−2;
• for BAIR, ν = 1/2.

Number of optimization steps. The number of optimization steps is the following for the different datasets:
• Moving MNIST (stochastic): 1,000,000 steps with an additional 100,000 steps where the learning rate is linearly decreased to 0;
• Moving MNIST (deterministic): 800,000 steps with an additional 100,000 steps where the learning rate is linearly decreased to 0;
• KTH: 200,000 steps, the final model being chosen among several checkpoints as the one having the best evaluation score (which differs from the test score, as we extract an evaluation set from the train set);
• BAIR: 250,000 steps, the final model being chosen as for KTH.
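To make the optimization concrete, a minimal sketch of how the objective of Equation (7) can be assembled from the ELBO terms of Equation (6) and the residual penalty is given below. The tensor names and shapes are ours, the individual log-likelihood and KL terms are assumed to be computed elsewhere, and the exact form of the residual penalty follows our reading of Equation (7) rather than the released code.

```python
import torch

def training_loss(recon_log_probs, kl_y1, kl_z, residuals, lam=1.0):
    """Assembles the negative objective of Equation (7) for one batch:
    -(ELBO of Eq. (6)) plus lambda times the penalty on the dynamics residuals.
    recon_log_probs: (B, T)      log p(x_t | y_t, w) under the Gaussian observation model
    kl_y1:           (B,)        KL(q(y_1 | x_{1:k}) || p(y_1))
    kl_z:            (B, T-1)    KL(q(z_t | x_{1:t}) || p(z_t | y_{t-1})), t = 2..T
    residuals:       (B, T-1, d) f_theta(y_{t-1}, z_t), t = 2..T
    """
    elbo = recon_log_probs.sum(dim=1) - kl_y1 - kl_z.sum(dim=1)
    res_penalty = residuals.norm(dim=-1).sum(dim=1)   # l2 norm of each residual, summed over time
    return (-elbo + lam * res_penalty).mean()
```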
D ADDITIONAL NUMERICAL RESULTS

Tables 1 to 3 present, respectively, numerical results for PSNR, SSIM and LPIPS averaged over all time steps for our methods and the considered baselines on the SM-MNIST, KTH and BAIR datasets, corresponding to Figures 2, 4 and 6.

E PENDULUM EXPERIMENTS

We test the ability of our model to capture the dynamics of a common dataset used in the literature of state-space models (Karl et al., 2017; Fraccaro et al., 2017), Pendulum (Karl et al., 2017). It consists of noisy observations of a dynamic torque-controlled pendulum; it is stochastic as the information of this control is not available. We test our model, without the content variable w, in the same setting as DVBF (Karl et al., 2017) and KVAE (Fraccaro et al., 2017) and report the corresponding ELBO scores in Table 4. The encoders and decoders for all methods are MLPs. Our model outperforms DVBF and is merely beaten by KVAE. This can be explained by the nature of the KVAE model, whose sequential model is not learned using a VAE but a Kalman filter allowing exact inference in the latent space. On the contrary, DVBF is learned, like our model, by a sequential VAE, and is thus much closer to our model than KVAE. This result then shows that the dynamic model that we chose in the context of sequential VAEs is better adapted to this dataset than that of DVBF, and achieves results close to a method taking advantage of exact inference using adapted tools such as Kalman filters.

F INFLUENCE OF THE EULER STEP SIZE

Table 5 details the numerical results of our model trained on BAIR with ∆t = 1 and tested with different values of ∆t. It shows that, when refining the Euler approximation, our model can improve its performance in a setting that is unseen during training. Results stabilize when ∆t is small enough, showing that the model is close to the continuous limit. Tables 6 and 7 detail the numerical results of our model trained on KTH with, respectively, ∆t = 1 and ∆t = 1/2, and tested with different values of ∆t. They show that if ∆t is chosen too high when training (here, ∆t = 1), the model drops in performance when refining the Euler approximation. We assume that this phenomenon arises because the Euler approximation used in training is too rough, making the model adapt to a very discretized dynamic that cannot be transferred to smaller Euler step sizes. Indeed, when training with a smaller step size (here, ∆t = 1/2), results in the training setting are equivalent, while results obtained with a lower ∆t are now much closer, if not equivalent, to the nominal ones. This shows that the model learns a continuous dynamic if trained with a small enough step size. Note that the loss of performance when using a higher ∆t in testing than in training, as in Table 7, is expected, as it corresponds to loosening the Euler approximation compared to training. However, even in this adversarial setting, our model maintains state-of-the-art results, demonstrating the quality of the learned dynamic, which can be further discretized if needed at the cost of a reasonable drop in performance.

G AUTOREGRESSIVITY AND IMPACT OF ENCODER AND DECODER ARCHITECTURE

Figure 10 presents the numerical results on KTH of our model and SVG for different choices of architectures: DCGAN and VGG. Since DCGAN is a less powerful architecture than VGG, the results of each method with VGG are expectedly better than those of the same method with DCGAN.
Moreover, our model outperforms SVG for any fixed choice of encoder and decoder architecture, which is coherent with Figure 4. We observe, however, that the difference between a method using VGG and its DCGAN counterpart differs depending on the model. Ours shows more robustness to changes of encoder and decoder architecture, as it loses much less performance than SVG when switching to a less powerful architecture. Indeed, while the difference in LPIPS is similar for both models (as expected from a score evaluating the realism of produced frames), the loss of SVG is significantly larger than ours in terms of SSIM, and in particular PSNR. This shows that reducing the capacity of the encoders and decoders of SVG not only hurts its ability to produce realistic frames, as expected, but also substantially lowers its ability to learn a good dynamic. We assume that this phenomenon is caused by the autoregressive nature of SVG, which makes it reliant on the performance of its encoders and decoders. This supports our motivation to propose a non-autoregressive model for stochastic video prediction.

H ADDITIONAL SAMPLES

This section includes some additional samples corresponding to experiments described in Section 4.

H.1 STOCHASTIC MOVING MNIST

We present in Figures 11 to 14 additional samples from SVG and our model on SM-MNIST. In particular, Figure 13 shows SVG changing a digit's shape in the course of a prediction even though it does not cross another digit, whereas ours maintains the digit's shape. We assume that this advantage comes from the latent nature of the dynamics of our model and the use of a static content variable that is prevented from containing temporal information. Indeed, even when the best sample from our model is not close to the ground truth of the dataset, as in Figure 14, the shapes of the digits are still maintained by our model.

H.2 KTH

We present in Figures 15 to 19 additional samples from SV2P, SVG, SAVP and our model on KTH, with additional insights.

H.3 BAIR

We present in Figures 20 to 22 additional samples from SV2P, SVG, SAVP and our model on BAIR, with additional insights.

H.4 OVERSAMPLING

We present in Figure 23 additional examples of video generation at a doubled frame rate by our model.

H.5 CONTENT SWAP

We present in Figures 24 to 28 additional examples of content swap as in Figure 8.

H.6 INTERPOLATION IN THE LATENT SPACE

We present in Figures 29 and 30 additional examples of interpolation in the latent space between two trajectories; a schematic of the interpolation procedure is sketched below.
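As a rough illustration of this procedure, the following sketch linearly interpolates the inferred initial states of two sequences; infer_y1, infer_content and generate are assumed model interfaces used only for exposition, not part of the released implementation.

```python
import torch

def interpolate_trajectories(model, x_src, x_tgt, n_points=8, horizon=20):
    """Linearly interpolates the inferred initial states y_1 of two sequences and
    decodes a trajectory from each interpolated state, sharing one content variable."""
    y_src = model.infer_y1(x_src)        # latent initial condition of the first sequence
    y_tgt = model.infer_y1(x_tgt)        # latent initial condition of the second sequence
    w = model.infer_content(x_src)       # static content variable, kept fixed
    videos = []
    for alpha in torch.linspace(0.0, 1.0, n_points):
        y1 = (1 - alpha) * y_src + alpha * y_tgt
        videos.append(model.generate(y1, w, horizon=horizon))
    return videos
```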
1. What is the focus of the paper regarding video prediction? 2. What are the strengths of the proposed approach, particularly in its decoupling strategy? 3. What are the weaknesses of the paper, especially in terms of comparisons with prior works and experimental evaluations? 4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? 5. Are there any suggestions for improving the paper, such as providing more thorough comparisons with existing methods and presenting additional quantitative evaluations?
Review
Review
The paper proposes a video prediction model which explicitly decouples frame synthesis and motion dynamics. This is a very subtle change (compared to the current models) that can result in higher-quality predictions. First of all, the paper is extremely well written. It provides clear motivations and goals, as well as an impressively comprehensive related work section that discusses the shortcomings of prior methods. The experiments are comprehensive and provide good support for the claims. And finally, the appendix presents additional visualization and information. On the main proposed method, it is a very subtle but reasonable change. Therefore, my suggestion to the authors is to provide a more thorough comparison with existing methods, specifically SVG (Denton 2018), since the models share a lot of similarities. It is also quite similar to PlaNet (Hafner 2019). This is where the paper can be improved. For the experiments, although they are quite comprehensive, there is still room for improvement. First, none of the metrics used are good evaluation metrics for frame prediction (I know they are quite common, but that doesn't make them good) as they do not give us an objective evaluation of the semantic quality of predicted frames, especially for long videos. It would really help if the authors presented additional quantitative evaluation to show that the predicted frames contain useful semantic information, with metrics such as FVD and Inception score. Second, an ablation study is required to see where the improvements are coming from. Is it from a different architecture or the separation of dynamics? Finally, a website with generated videos really helps for qualitative comparison! Overall, this is a well-written paper with clear motivations and goals. I find the impact of the paper to be marginal (given the quality difference with already existing models), which can be improved by placing more emphasis on other aspects such as disentanglement.
ICLR
Title
MARGINALIZED AVERAGE ATTENTIONAL NETWORK FOR WEAKLY-SUPERVISED LEARNING
Abstract
In weakly-supervised temporal action localization, previous works have failed to locate dense and integral regions for each entire action due to the overestimation of the most salient regions. To alleviate this issue, we propose a marginalized average attentional network (MAAN) to suppress the dominant response of the most salient regions in a principled manner. The MAAN employs a novel marginalized average aggregation (MAA) module and learns a set of latent discriminative probabilities in an end-to-end fashion. MAA samples multiple subsets from the video snippet features according to a set of latent discriminative probabilities and takes the expectation over all the averaged subset features. Theoretically, we prove that the MAA module with learned latent discriminative probabilities successfully reduces the difference in responses between the most salient regions and the others. Therefore, MAAN is able to generate better class activation sequences and identify dense and integral action regions in the videos. Moreover, we propose a fast algorithm to reduce the complexity of constructing MAA from O(2^T) to O(T^2). Extensive experiments on two large-scale video datasets show that our MAAN achieves a superior performance on weakly-supervised temporal action localization.
1 INTRODUCTION
Weakly-supervised temporal action localization has been of interest to the community recently. The setting is to train a model with solely video-level class labels, and to predict both the class and the temporal boundary of each action instance at test time. The major challenge in the weakly-supervised localization problem is to find the right way to express and infer the underlying location information with only the video-level class labels. Traditionally, this is achieved by explicitly sampling several possible instances with different locations and durations (Bilen & Vedaldi, 2016; Kantorov et al., 2016; Zhang et al., 2017). The instance-level classifiers would then be trained through multiple instance learning (Cinbis et al., 2017; Yuan et al., 2017a) or curriculum learning (Bengio et al., 2009). However, the length of actions and videos varies too much, such that the number of instance proposals varies a lot from video to video and can also be huge. As a result, traditional methods based on instance proposals become infeasible in many cases. Recent research, however, has pivoted to acquiring the location information by generating the class activation sequence (CAS) directly (Nguyen et al., 2018), which produces a sequence of per-snippet classification scores for each action over time. The CAS along the 1D temporal dimension for a video is inspired by the class activation map (CAM) (Zhou et al., 2016a; 2014; Pinheiro & Collobert, 2015; Oquab et al., 2015) in weakly-supervised object detection. The CAM-based models have shown that, despite being trained on image-level labels, convolutional neural networks (CNNs) have the remarkable ability to localize objects. Similar to object detection, the basic idea behind CAS-based methods for action localization in training is to sample non-overlapping snippets from a video, then to aggregate the snippet-level features into a video-level feature, and finally to yield a video-level class prediction.
During testing, the model generates a CAS for each class that identifies the discriminative action regions, and then applies a threshold on the CAS to localize each action instance in terms of the start time and the end time. In CAS-based methods, the feature aggregator that aggregates multiple snippet-level features into a video-level feature is the critical building block of weakly-supervised neural networks. A model’s ability to capture the location information of an action is primarily determined by the design of the aggregators. While using the global average pooling over a full image or across the video snippets has shown great promise in identifying the discriminative regions (Zhou et al., 2016a; 2014; Pinheiro & Collobert, 2015; Oquab et al., 2015), treating each pixel or snippet equally loses the opportunity to benefit from several more essential parts. Some recent works (Nguyen et al., 2018; Zhu et al., 2017) have tried to learn attentional weights for different snippets to compute a weighted sum as the aggregated feature. However, they suffer from the weights being easily dominated by only a few most salient snippets. In general, models trained with only video-level class labels tend to be easily responsive to small and sparse discriminative regions from the snippets of interest. This deviates from the objective of the localization task that is to locate dense and integral regions for each entire action. To mitigate this gap and reduce the effect of the domination by the most salient regions, several heuristic tricks have been proposed to apply to existing models. For example, (Wei et al., 2017; Zhang et al., 2018b) attempt to heuristically erase the most salient regions predicted by the model which are currently being mined, and force the network to attend other salient regions in the remaining regions by forwarding the model several times. However, the heuristic multiple-run model is not end-to-end trainable. It is the ensemble of multiple-run mined regions but not the single model’s own ability that learns the entire action regions. “Hide-and-seek”(Singh & Lee, 2017) randomly masks out some regions of the input during training, enforcing the model to localize other salient regions when the most salient regions happen to be masked out. However, all the input regions are masked out with the same probability due to the uniform prior, and it is very likely that most of the time it is the background that is being masked out. A detailed discussion about related works can be found in Appendix D. To this end, we propose the marginalized average attentional network (MAAN) to alleviate the issue raised by the domination of the most salient region in an end-to-end fashion for weakly-supervised action localization. Specifically, MAAN suppresses the action prediction response of the most salient regions by employing marginalized average aggregation (MAA) and learning the latent discriminative probability in a principled manner. Unlike the previous attentional pooling aggregator which calculates the weighted sum with attention weights, MAA first samples a subset of features according to their latent discriminative probabilities, and then calculates the average of these sampled features. Finally, MAA takes the expectation (marginalization) of the average aggregated subset features over all the possible subsets to achieve the final aggregation. 
As a result, MAA not only alleviates the domination by the most salient regions, but also maintains the scale of the aggregated feature within a reasonable range. We theoretically prove that, with the MAA, the learned latent discriminative probability indeed reduces the difference of response between the most salient regions and the others. Therefore, MAAN can identify more dense and integral regions for each action. Moreover, since enumerating all the possible subsets is exponentially expensive, we further propose a fast iterative algorithm to reduce the complexity of the expectation calculation procedure and provide a theoretical analysis. Furthermore, MAAN is easy to train in an end-to-end fashion since all the components of the network are differentiable. Extensive experiments on two large-scale video datasets show that MAAN consistently outperforms the baseline models and achieves superior performance on weakly-supervised temporal action localization. In summary, our main contributions include: (1) a novel end-to-end trainable marginalized average attentional network (MAAN) with a marginalized average aggregation (MAA) module in the weaklysupervised setting; (2) theoretical analysis of the properties of MAA and an explanation of the reasons MAAN alleviates the issue raised by the domination of the most salient regions; (3) a fast iterative algorithm that can effectively reduce the computational complexity of MAA; and (4) a superior performance on two benchmark video datasets, THUMOS14 and ActivityNet1.3, on the weakly-supervised temporal action localization. 2 MARGINALIZED AVERAGE ATTENTIONAL NETWORK In this section, we describe our proposed MAAN for weakly-supervised temporal action localization. We first derive the formulation of the feature aggregation module in MAAN as a MAA procedure in Sec. 2.1. Then, we study the properties of MAA in Sec. 2.2, and present our fast iterative computation algorithm for MAA construction in Sec. 2.3. Finally, we describe our network architecture that incorporates MAA, and introduce the corresponding inference process on weakly-supervised temporal action localization in Sec. 2.4. 2.1 MARGINALIZED AVERAGE AGGREGATION Let {x1,x2, · · ·xT } denote the set of snippet-level features to be aggregated, where xt ∈ Rm is the m dimensional feature representation extracted from a video snippet centered at time t, and T is the total number of sampled video snippets. The conventional attentional weighted sum pooling aggregates the input snippet-level features into a video-level representation x. Denote the set of attentional weights corresponding to the snippet-level features as {λ1, λ2, · · ·λT }, where λt is a scalar attentional weight for xt. Then the aggregated video-level representation is given by x = T∑ t=1 λtxt, (1) as illustrated in Figure 1 (a). Different from the conventional aggregation mechanism, the proposed MAA module aggregates the features by firstly generating a set of binary indicators to determine whether a snippet should be sampled or not. The model then computes the average aggregation of these sampled snippet-level representations. Lastly, the model computes the expectation (marginalization) of the aggregated average feature for all the possible subsets, and obtains the proposed marginalized average aggregated feature. Formally, in the proposed MAA module, we first define a set of probabilities {p1, p2, · · · pT }, where each pt ∈ [0, 1] is a scalar corresponding to xt, similar to the notation λt mentioned previously. 
We then sample a set of random variables {z1, z2, · · · zT }, where zt ∼ Bernoulli(pt), i.e., zt ∈ {0, 1} with probability P (zt = 1) = pt. The sampled set is used to represent the subset selection of snippet-level features, in which zt = 1 indicates xt is selected, otherwise not. Therefore, the average aggregation of the sampled subset of snipped-level representations is given by s = ∑T i=1 zixi/ ∑T i=1 zi , and our proposed aggregated feature, defined as the expectation of all the possible subset-level average aggregated representations, is given by x = E[s] = E [∑T i=1 zixi∑T i=1 zi ] , (2) which is illustrated in Figure 1 (b). 2.2 PARTIAL ORDER PRESERVATION AND DOMINANT RESPONSE SUPPRESSION Direct learning and prediction with the attention weights λ in Eq. (1) in weakly-supervised action localization leads to an over-response in the most salient regions. The MAA in Eq. (2) has two properties that alleviate the domination effect of the most salient regions. First, the partial order preservation property, i.e., the latent discriminative probabilities preserve the partial order with respect to their attention weights. Second, the dominant response suppression property, i.e., the differences in the latent discriminative probabilities between the most salient items and others are smaller than the differences between their attention weights. The partial order preservation property guarantees that it does not mix up the action and non-action snippets by assigning a high latent discriminative probability to a snippet with low response. The dominant response suppression property reduces the dominant effect of the most salient regions and encourages the identification of dense and more integral action regions. Formally, we present the two properties in Proposition 1 and Proposition 2, respectively. Detailed proofs can be found in Appendix A and Appendix B respectively. Proposition 1. Let zi ∼ Bernoulli(pi) for i ∈ {1, ..., T}. Then for T ≥ 2, Eq. (3) holds true, and pi ≥ pj ⇔ ci ≥ cj ⇔ λi ≥ λj . E [∑T i=1 zixi∑T i=1 zi ] = ∑T i=1 cipixi = ∑T i=1 λixi, (3) where ci = E [ 1/(1 + ∑T k=1,k 6=i zk) ] and λi = cipi for i ∈ {1, ..., T}. Proposition 1 shows that the latent discriminative probabilities {pi} preserve the partial order of the attention weights {λi}. This means that a large attention weight corresponds to a large discriminative probability, which guarantees that the latent discriminative probabilities preserve the ranking of the action prediction response. Eq. (3) can be seen as a factorization of the attention weight λi into the multiplication of two components, pi and ci, for i ∈ {1, ..., T}. pi is the latent discriminative probability related to the feature of snippet i itself. The factor ci captures the contextual information of snippet i from the other snippets. This factorization can be considered to be introducing structural information into the aggregation. Factor ci can be considered as performing a structural regularization for learning the latent discriminative probabilities pi for i ∈ {1, ..., T}, as well as for learning a more informative aggregation. Proposition 2. Let zi ∼ Bernoulli(pi) for i ∈ {1, ..., T}. Denote ci = E [ 1/(1 + ∑T k=1,k 6=i zk) ] and λi = cipi for i ∈ {1, ..., T}. Denote I = { i ∣∣∣ci ≥ 1/(∑Tt=1 pt)} as an index set. Then I 6= ∅ and for ∀i ∈ I, ∀j ∈ {1, ..., T} inequality (4) holds true.∣∣∣∣∣ pi∑T t=1 pt − pj∑T t=1 pt ∣∣∣∣∣ ≤ ∣∣∣∣∣ λi∑T t=1 λt − λj∑T t=1 λt ∣∣∣∣∣ (4) The index set I can be viewed as the most salient features set. 
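Propositions 1 and 2 can be illustrated numerically with a small Monte Carlo simulation. The following standalone NumPy sketch, with arbitrary example probabilities, estimates the aggregation of Eq. (2) and the factors c_i, and checks that p, c and λ share the same ordering and that the normalized latent discriminative probabilities are less spread out than the attention weights; it should print True (up to sampling noise) and matching orderings.

```python
import numpy as np

rng = np.random.default_rng(0)
T, m, n = 6, 4, 200_000
X = rng.normal(size=(T, m))                        # snippet-level features x_1..x_T
p = np.array([0.9, 0.7, 0.5, 0.3, 0.2, 0.1])       # latent discriminative probabilities p_1..p_T

Z = rng.random((n, T)) < p                         # z_i ~ Bernoulli(p_i), n independent subsets
sizes = Z.sum(axis=1, keepdims=True)
weights = np.where(Z, 1.0 / np.maximum(sizes, 1), 0.0)   # empty subsets contribute 0
x_bar = (weights @ X).mean(axis=0)                 # Monte Carlo estimate of Eq. (2)
c = (1.0 / (1.0 + sizes - Z)).mean(axis=0)         # c_i = E[1 / (1 + sum_{k != i} z_k)]
lam = c * p                                        # attention weights lambda_i = c_i p_i (Eq. (3))

print(np.allclose(x_bar, lam @ X, atol=1e-2))      # Eq. (3) holds up to Monte Carlo error
print(np.argsort(p), np.argsort(lam))              # Proposition 1: identical orderings
print(np.ptp(p / p.sum()), np.ptp(lam / lam.sum()))  # Proposition 2: smaller spread for p than lambda
```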
Proposition 2 shows that the difference between the normalized latent discriminative probabilities of the most salient regions and others is smaller than the difference between their attention weights. It means that the prediction for each snippet using the latent discriminative probability can reduce the gap between the most salient featuress and the others compared to conventional methods that are based on attention weights. Thus, MAAN suppresses the dominant responses of the most salient featuress and encourages it to identify dense and more integral action regions. Directly learning the attention weights λ leans to an over response to the most salient region in weakly-supervised temporal localization. Namely, the attention weights for only a few snippets are too large and dominate the others, while attention weights for most of the other snippets that also belong to the true action are underestimated. Proposition 2 shows that latent discriminative probabilities are able to reduce the gap between the most salient features and the others compared to the attention weights. Thus, by employing the latent discriminative probabilities for prediction instead of the attention weights, our method can alleviate the dominant effect of the most salient region in weakly-supervised temporal localization. 2.3 RECURRENT FAST COMPUTATION Given a video containing T snippet-level representations, there are 2T possible configurations for the subset selection. Directly summing up all the 2T configurations to calculate x has a complexity of O(2T ) . In order to reduce the exponential complexity, we propose an iterative method to calculate x with O(T 2) complexity. Let us denote the aggregated feature of {x1,x2, · · ·xt} with length t as ht, and denote Yt = t∑ i=1 zixi and Zt = t∑ i=1 zi for simplicity, then we have a set of ht = E [∑t i=1 zixi∑t i=1 zi ] = E [ Yt Zt ] , t ∈ {1, 2, · · · , T}, (5) and the aggregated feature of {x1,x2, · · ·xT } can be obtained as x = hT . In Eq. (5), Zt is the summation of all the zi, which indicates the number of elements selected in the subset. Although there are 2t distinct configurations for {z1, z2, · · · zt}, it has only t + 1 distinct values for Zt, i.e. 0, 1, · · · , t. Therefore, we can divide all the 2t distinct configurations into t+ 1 groups, where the configurations sharing with the same Zt fall into the same group. Then the expectation ht can be calculated as the summation of the t + 1 parts. That is, ht = E [ E [ Yt Zt ∣∣∣Zt = i]] = ∑ti=0 mti, where the mti, indicating the i th part of ht for group Zt = i, is shown in Eq. (6). mti = P (Zt = i)E [ Yt Zt ∣∣∣∣Zt = i] . (6) In order to calculate ht+1 = ∑t+1 i=0 m t+1 i , given m t i , i ∈ {0, · · · , t}, we can calculate m t+1 i , i ∈ {0, 1, · · · , t + 1} recurrently. The key idea here is that mt+1i comes from two cases: if zt+1 = 0, then mt+1i is the same as m t i; if zt+1 = 1, then m t+1 i is the weighted average of m t i−1 and xt+1. The latter case is also related to the probability P (Zt = i− 1). By denoting qti−1 = P (Zt = i− 1) for simplicity, we can obtain mt+1i as a function of several elements: mt+1i = f(m t i−1,m t i,xt+1, pt+1, q t i−1). (7) Similarly, the computation of qt+1i = P (Zt+1 = i) comes from two cases: the probability of selecting i− 1 items from the first t items and selecting the (t+ 1)th item, i.e., qti−1pt+1; and the probability of selecting i items all from the first t items and not selecting the (t+ 1)th item, i.e., qti (1− pt+1). 
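The iterative computation outlined above, formalized as Proposition 3 below and summarized in Algorithm 1 (Appendix E), can be sketched as follows. This is an illustrative NumPy implementation with our own variable names, not the authors' released code; it runs in O(T^2) and, on the example of the previous sketch, agrees with the Monte Carlo estimate of Eq. (2) up to sampling noise.

```python
import numpy as np

def marginalized_average_aggregation(X, p):
    """Computes x_bar = E[ sum_i z_i x_i / sum_i z_i ], z_i ~ Bernoulli(p_i), via the
    O(T^2) recurrence over the group terms m_i^t and the probabilities q_i^t = P(Z_t = i).
    X: (T, m) snippet features; p: (T,) latent discriminative probabilities."""
    T, m = X.shape
    q = np.zeros(T + 1)
    q[0] = 1.0                               # q_0^0 = 1: no snippet selected yet
    M = np.zeros((T + 1, m))                 # m_0^t = 0 by convention
    for t in range(T):                       # incorporate snippet x_{t+1}
        pt = p[t]
        q_new = np.zeros(T + 1)
        M_new = np.zeros((T + 1, m))
        q_new[0] = (1.0 - pt) * q[0]
        for i in range(1, t + 2):
            b = (i - 1) / i                  # b_{i-1} = (i-1)/i
            # snippet t+1 selected (probability pt) or not (probability 1-pt)
            M_new[i] = pt * (b * M[i - 1] + (1 - b) * q[i - 1] * X[t]) + (1 - pt) * M[i]
            q_new[i] = pt * q[i - 1] + (1 - pt) * q[i]
        q, M = q_new, M_new
    return M.sum(axis=0)                     # x_bar = h_T = sum_i m_i^T
```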
We derive the function of m t+1 i and q t+1 i in Proposition 3. Detailed proofs can be found in Appendix C. Proposition 3. Let zt ∼ Bernoulli(pt) , Zt = t∑ i=1 zi and Yt = t∑ i=1 zixi for t ∈ {1, ..., T}. Define mti , i ∈ {0, · · · , t} as Eq. (6) and qti = P (Zt = i), then m t+1 i i ∈ {0, 1, · · · , t + 1} can be obtained recurrently by Eq. (8) and Eq. (9). mt+1i = pt+1 ( bi−1m t i−1 + (1− bi−1)qti−1xt+1 ) + (1− pt+1)mti, (8) qt+1i = pt+1q t i−1 + (1− pt+1) qti , (9) where bi = ii+1 , q t −1 = 0, q t t+1 = 0, q 0 0 = 1, m t 0 = 0, and m t t+1 = 0. Proposition 3 provides a recurrent formula to calculate mti. With this recurrent formula, we calculate the aggregation hT by iteratively calculating mti from i = 1 to t and t = 1 to T . Therefore, we can obtain the aggregated feature of {x1,x2, · · ·xT } as x = hT = ∑T i=0 m T i . The iterative computation procedure is summarized in Algorithm 1 in Appendix E. The time complexity is O(T 2). With the fast iterative algorithm in Algorithm 1, the MAA becomes practical for end-to-end training. A demonstration of the computation graph for qt+1i in Eq. (9) and m t+1 i in Eq. (8) is presented in the left and right-hand sides of Figure 2, respectively. From Figure 2, we can see clearly that, to compute m32 (the big black node on the right), it needs m 2 1, m 2 2, x3, p3, and q 2 1 . The MAA can be easily implemented as a subnetwork for end-to-end training and can be used to replace the operation of other feature aggregators. 2.4 NETWORK ARCHITECTURE AND TEMPORAL ACTION LOCALIZATION Network Architecture: We now describe the network architecture that employs the MAA module described above for weakly-supervised temporal action localization. We start from a previous stateof-the-art base architecture, the sparse temporal pooling network (STPN) (Nguyen et al., 2018). As shown in Figure 3, it first divides the input video into several non-overlapped snippets and extracts the I3D (Carreira & Zisserman, 2017) feature for each snippet. Each snippet-level feature is then fed to an attention module to generate an attention weight between 0 and 1. STPN then uses a feature aggregator to calculate a weighted sum of the snippet-level features with these class-agnostic attention weights to create a video-level representation, as shown on the left in Figure 4. The video-level representation is then passed through an FC layer followed by a sigmoid layer to obtain class scores. Our MAAN uses the attention module to generate the latent discriminative probability pt and replaces the feature aggregator from the weighted sum aggregation by the proposed marginalized average aggregation, which is demonstrated on the right in Figure 4. Training with video-level class labels: Formally, the model first performs aggregation of the snippet-level features (i.e. x1,x2, · · ·xT ) to obtain the video-level representation x̄ ( x̄ = E[ ∑T i=1 zixi/ ∑T i=1 zi]). Then, it applies a logistic regression layer (FC layer + sigmoid) to output video-level classification prediction probability. Specifically, the prediction probability for class c ∈ {1, 2, · · ·C} is parameterized as σcj = σ(w>c xj), where xj is the aggregated feature for video j ∈ {1, ..., N}. 
Suppose each video xj is i.i.d and each action class is independent from the other, the negative log-likelihood function (cross-entropy loss) is given as follows: L(W) = − N∑ j=1 C∑ c=1 ( ycj log σ c j + (1− ycj) log(1− σcj) ) , (10) where ycj ∈ {0, 1} is the ground-truth video-level label for class c happening in video j and W = [w1, ...,wC ]. Temporal Action Localization: Let sc = w>c x be the video-level action prediction score, and σ(sc) = σ(w>c x) be the video-level action prediction probability. In STPN, as x̄ = ∑T t=1 λtxt, the sc can be rewritten as: sc = w>c x = ∑T t=1 λtw > c xt, (11) In STPN, the prediction score of snippet t for action class c in a video is defined as: sct = λtσ(w > c xt), (12) where σ(·) denotes the sigmoid function. In MAAN, as x̄ = E[ ∑T i=1 zixi/ ∑T i=1 zi], according to Proposition 1, the sc can be rewritten as: sc = w>c x = w > c E[ ∑T i=1 zixi/ ∑T i=1 zi] = ∑T t=1 ctptw > c xt. (13) The latent discriminative probability pt corresponds to the class-agnostic attention weight for snippet t. According to Proposition 1 and Proposition 2, ct does not relate to snippet t, but captures the context of other snippets. wc corresponds to the class-specific weights for action class c for all the snippets, and w>c xt indicates the relevance of snippet t to class c. To generate temporal proposals, we compute the prediction score of snippet t belonging to action class c in a video as: sct = ptσ(w > c xt). (14) We denote the sc = (sc1, s c 2, ..., s c T )> as the class activation sequence (CAS) for class c. Similar to STPN, the threshold is applied to the CAS for each class to extract the one-dimensional connected components to generate its temporal proposals. We then perform non-maximum suppression among temporal proposals of each class independently to remove highly overlapped detections. Compared to STPN (Eq. (12)), MAAN (Eq. (14)) employs the latent discriminative probability pt instead of directly using the attention weight λt (equivalent to ctpt) for prediction. Proposition 2 suggests that MAAN can suppress the dominant response sct compared to STPN. Thus, MAAN is more likely to achieve a better performance in weakly-supervised temporal action localization. 3 EXPERIMENTS This section discusses the experiments on the weakly-supervised temporal action localization problem, which is our main focus. We have also extended our algorithm on addressing the weakly-supervised image object detection problem and the relevant experiments are presented in Appendix F. 3.1 EXPERIMENTAL SETTINGS Datasets. We evaluate MAAN on two popular action localization benchmark datasets, THUMOS14 (Jiang et al., 2014) and ActivityNet1.3 (Heilbron et al., 2015). THUMOS14 contains 20 action classes for the temporal action localization task, which consists of 200 untrimmed videos (3,027 action instances) in the validation set and 212 untrimmed videos (3,358 action instances) in the test set. Following standard practice, we train the models on the validation set without using the temporal annotations and evaluate them on the test set. ActivityNet1.3 is a large-scale video benchmark for action detection which covers a wide range of complex human activities. It provides samples from 200 activity classes with an average of 137 untrimmed videos per class and 1.41 activity instances per video, for a total of 849 video hours. This dataset contains 10,024 training videos, 4,926 validation videos and 5,044 test videos. 
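For reference, the proposal-generation step described in Section 2.4, namely thresholding the one-dimensional CAS of a class and extracting connected components, can be sketched as follows. The helper below is illustrative only: the segment-scoring rule and the snippet duration (16 frames at an assumed 25 fps) are our choices, and the subsequent class-wise non-maximum suppression step is omitted.

```python
import numpy as np

def cas_to_proposals(cas, threshold=0.2, snippet_duration=16 / 25.0):
    """cas: (T,) class activation sequence s^c_t for one class.
    Returns (start_time, end_time, score) for each connected component whose CAS
    exceeds `threshold` times the maximum CAS value."""
    keep = cas > threshold * cas.max()
    proposals, t, T = [], 0, len(cas)
    while t < T:
        if keep[t]:
            start = t
            while t < T and keep[t]:
                t += 1
            score = float(cas[start:t].mean())   # illustrative segment score
            proposals.append((start * snippet_duration, t * snippet_duration, score))
        else:
            t += 1
    return proposals
```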
In the experiments, we train the models on the training videos and test on the validation videos. Evaluation Metrics. We follow the standard evaluation metric by reporting mean average precision (mAP) values at several different levels of intersection over union (IoU) thresholds. We use the benchmarking code provided by ActivityNet1 to evaluate the models. Implementation Details. We use two-stream I3D networks (Carreira & Zisserman, 2017) pre-trained on the Kinetics dataset (Kay et al., 2017) to extract the snippet-level feature vectors for each video. All the videos are divided into sets of non-overlapping video snippets. Each snippet contains 16 consecutive frames or optical flow maps. We input each 16 stacked RGB frames or flow maps into the I3D RGB or flow models to extract the corresponding 1024 dimensional feature vectors. Due to the various lengths of the videos, in the training, we uniformly divide each video into T non-overlapped segments, and randomly sample one snippet from each segment. Therefore, we sample T snippets for each video as the input of the model for training. We set T to 20 in our MAAN model. The attention module in Figure 3 consists of an FC layer of 1024× 256, a LeakyReLU layer, an FC layer of 256× 1, and a sigmoid non-linear activation, to generate the latent discriminative probability pt. We pass the aggregated video-level representation through an FC layer of 1024× C followed by a sigmoid activation to obtain class scores. We use the ADAM optimizer (Kingma & Ba, 2014) with an initial learning rate of 5× 10−4 to optimize network parameters. At the test time, we first reject 1https://github.com/activitynet/ActivityNet/tree/master/Evaluation classes whose video-level probabilities are below 0.1. We then forward all the snippets of the video to generate the CAS for the remaining classes. We generate the temporal proposals by cutting the CAS with a threshold th. The combination ratio of two-stream modalities is set to 0.5 and 0.5. Our algorithm is implemented in PyTorch 2. We run all the experiments on a single NVIDIA Tesla M40 GPU with a 24 GB memory. 3.2 THUMOS14 DATASET We first compare our MAAN model on the THUMOS14 dataset with several baseline models that use different feature aggregators in Figure 3 to gain some basic understanding of the behavior of our proposed MAA. The descriptions of the four baseline models are listed below. (1) STPN. It employs the weighed sum aggregation x̄ = ∑T t=1 λtxt to generate the video-level representation. (2) Dropout. It explicitly performs dropout sampling with dropout probability p = 0.5 in STPN to obtain the video-level representation, x̄ = ∑T t=1 rtλtxt, rt ∼ Bernoulli(0.5). (3) Normalization. Denoted as “Norm” in the experiments, it utilizes the weighted average aggregation x̄ = ∑T t=1 λtxt/ ∑T t=1 λt for the video-level representation. (4) SoftMax Normalization. Denoted as “SoftMaxNorm” in the experiments, it applies the softmax function as the normalized weights to get the weighted average aggregated video-level feature, x̄ = ∑T t=1 e λtxt/ ∑T t=1 e λt . We test all the models with the cutting threshold th as 0.2 of the max value of the CAS. We compare the detection average precision (%) at IoU = [0.1 : 0.1 : 0.9] and the video-level classification mean average precision (%) (denoted as Cls mAP) on the test set in Table 1. From Table 1, we can observe that although all the methods achieve a similar video-level classification mAP, their localization performances vary a lot. 
It shows that achieving a good video-level classification performance cannot guarantee obtaining a good snippet-level localization performance because the former only requires the correct prediction of the existence of an action, while the latter requires the correct prediction of both its existence and its duration and location. Moreover, Table 1 demonstrates that MAAN consistently outperforms all the baseline models at different levels of IoUs in the weakly-supervised temporal localization task. Both the “Norm” and “SoftmaxNorm” are the normalized weighted average aggregation. However, the “SoftmaxNorm” performs the worst, because the softmax function over-amplifies the weight of the most salient snippet. As a result, it tends to identify very few discriminative snippets and obtains sparse and non-integral localization. The “Norm” also performs worse than our MAAN. It is the normalized weighted average over the snippet-level representation, while MAAN can be considered as the normalized weighted average (expectation) over the subsetlevel representation. Therefore, MAAN encourages the identification of dense and integral action segments as compared to “Norm” which encourages the identification of only several discriminative snippets. MAAN works better than “Dropout” because “Dropout” randomly drops out the snippets with different attention weights by uniform probabilities. At each iteration, the scale of the aggregated feature varies a lot, however, MAAN samples with the learnable latent discriminative probability and conducts the expectation of keeping the scale of the aggregated feature stable. Compared to STPN, MAAN also achieves superior results. MAAN implicitly factorizes the attention weight into ctpt, where pt learns the latent discriminative probability of the current snippet, and ct captures the contextual information and regularizes the network to learn a more informative aggregation. The properties of MAA disallow the predicted class activation sequences to concentrate on the most salient regions. The quantitative results show the effectiveness of the MAA feature aggregator. 2https://github.com/pytorch/pytorch Figure 5 visualizes the one-dimensional CASs of the proposed MAAN and all the baseline models. The temporal CAS generated by MAAN can cover large and dense regions to obtain more accurate action segments. In the example in Figure 5, MAAN can discover almost all the actions that are annotated in the ground-truth; however, the STPN have missed several action segments, and also tends to only output the more salient regions in each action segment. Other methods are much sparser compared to MAAN. The first row of Figure 5 shows several action segments in red and in green, corresponding to action segments that are relatively difficult and easy to be localized, respectively. We can see that all the easily-localized segments contain the whole person who is performing the “HammerThrow” action, while the difficultly-localized segments contain only a part of the person or the action. Our MAAN can successfully localize the easy segments as well as the difficult segments; however, all the other methods fail on the difficult ones. It shows that MAAN can identify several dense and integral action regions other than only the most discriminative region which is identified by the other methods. We also compare our model with the state-of-the-art action localization approaches on the THUMOS14 dataset. The numerical results are summarized in Table 2. 
We include both fully and weakly-supervised learning, as in (Nguyen et al., 2018). As shown in Table 2, our implemented STPN performs slightly better than the results reported in the original paper (Nguyen et al., 2018). From Table 2, our proposed MAAN outperforms the STPN and most of the existing weakly-supervised action localization approaches. Furthermore, our model still presents competitive results compared with several recent fully-supervised approaches even when trained with only video-level labels. 3.3 ACTIVITYNET1.3 DATASET We train the MAAN model on the ActivityNet1.3 training set and compare our performance with the recent state-of-the-art approaches on the validation set in Table 3. The action segment in ActivityNet is usually much longer than that of THUMOS14 and occupies a larger percentage of a video. We use a set of thresholds, which are [0.2, 0.15, 0.1, 0.05] of the max value of the CAS, to generate the proposals from the one-dimensional CAS. As shown in Table 3, with the set of thresholds, our implemented STPN performs slightly better than the results reported in the original paper (Nguyen et al., 2018). With the same threshold and experimental setting, our proposed MAAN model outperforms the STPN approach on the large-scale ActivityNet1.3. Similar to THUMOS14, our model also achieves good results that are close to some of the fully-supervised approaches. 4 CONCLUSION We have proposed the marginalized average attentional network (MAAN) for weakly-supervised temporal action localization. MAAN employs a novel marginalized average aggregation (MAA) operation to encourage the network to identify the dense and integral action segments and is trained in an end-to-end fashion. Theoretically, we have proved that MAA reduces the gap between the most discriminant regions in the video to the others, and thus MAAN generates better class activation sequences to infer the action locations. We have also proposed a fast algorithm to reduce the computation complexity of MAA. Our proposed MAAN achieves superior performance on both the THUMOS14 and the ActivityNet1.3 datasets on weakly-supervised temporal action localization tasks compared to current state-of-the-art methods. 5 ACKNOWLEDGEMENT We thank our anonymous reviewers for their helpful feedback and suggestions. Prof. Ivor W. Tsang was supported by ARC FT130100746, ARC LP150100671, and DP180100106. A PROOF OF PROPOSITION 1 A.1 PROOF OF EQUATION (3) Proof. E [∑T i=1 zixi∑T i=1 zi ] = ∑T i=1 E[zi/ ∑T i=1 zi]xi. (15) In addition, E[zi/ ∑T i=1 zi] = pi × E [ 1/(1 + ∑T k=1,k 6=i zk) ] + (1− pi)× 0 = pici. (16) Thus, we achieve E [∑T i=1 zixi∑T i=1 zi ] = ∑T i=1 cipixi = ∑T i=1 λixi. (17) A.2 PROOF OF pi ≥ pj ⇔ ci ≥ cj ⇔ λi ≥ λj Proof. Denote ST = ∑T k=1,k 6=i,k 6=j zk, then we have ci − cj = E [ 1/(1 + ∑ k 6=i zk) ] − E [ 1/(1 + ∑ k 6=j zk) ] (18) = pjE [1/(2 + ST )] + (1− pj)E [1/(1 + ST )]− piE [1/(2 + ST )]− (1− pi)E [1/(1 + ST )] = (pi − pj) (E [1/(1 + ST )]− E [1/(2 + ST )]) . (19) Since E [1/(1 + ST )]− E [1/(2 + ST )] > 0, we achieve that pi ≥ pj ⇔ ci ≥ cj . Since λi = cipi and λj = cjpj , and ci, cj , pi, pj ≥ 0, it follows that pi ≥ pj ⇔ λi ≥ λj . B PROOF OF PROPOSITION 2 Proof. ∑T i=1 cipi = ∑T i=1 E[zi/ ∑T i=1 zi] = E [ ( ∑T i=1 zi)/( ∑T i=1 zi) ] = 1 When p1 = p2 = · · · = pT , we have λ1 = λ2 = · · · = λT . Then inequality (4) trivially holds true. Without loss of generality, assume p1 ≥ p2 ≥ · · · ≥ pT and there exists a strict inequality. 
B PROOF OF PROPOSITION 2
Proof. First, note that
$$\sum_{i=1}^{T} c_i p_i = \sum_{i=1}^{T} \mathbb{E}\!\left[\frac{z_i}{\sum_{k=1}^{T} z_k}\right] = \mathbb{E}\!\left[\frac{\sum_{i=1}^{T} z_i}{\sum_{i=1}^{T} z_i}\right] = 1.$$
When $p_1 = p_2 = \cdots = p_T$, we have $\lambda_1 = \lambda_2 = \cdots = \lambda_T$, and inequality (4) trivially holds. Without loss of generality, assume $p_1 \ge p_2 \ge \cdots \ge p_T$ with at least one strict inequality. Then there exists $k \in \{1, \ldots, T-1\}$ such that $c_i \ge 1/(\sum_{t=1}^{T} p_t)$ for $1 \le i \le k$ and $c_j \le 1/(\sum_{t=1}^{T} p_t)$ for $k < j \le T$. Otherwise, we would have either $c_i \ge 1/(\sum_{t=1}^{T} p_t)$ for all $1 \le i \le T$ or $c_i \le 1/(\sum_{t=1}^{T} p_t)$ for all $1 \le i \le T$, with a strict inequality for some $i$; this would imply $\sum_{i=1}^{T} c_i p_i > 1$ or $\sum_{i=1}^{T} c_i p_i < 1$, contradicting $\sum_{i=1}^{T} c_i p_i = 1$. Thus, the set $I = \{ i \mid c_i \ge 1/(\sum_{t=1}^{T} p_t) \}$ is non-empty. Without loss of generality, for $1 \le i \le k$ and $i \le j \le T$, we have $c_i \ge 1/(\sum_{t=1}^{T} p_t)$ and $p_i \ge p_j$, and therefore $c_i \ge c_j$. Using $\sum_{t=1}^{T} \lambda_t = \sum_{t=1}^{T} c_t p_t = 1$, it follows that
$$\frac{p_i}{\sum_{t=1}^{T} p_t} - \frac{p_j}{\sum_{t=1}^{T} p_t} - \left(\frac{\lambda_i}{\sum_{t=1}^{T} \lambda_t} - \frac{\lambda_j}{\sum_{t=1}^{T} \lambda_t}\right) \tag{20}$$
$$= \frac{p_i}{\sum_{t=1}^{T} p_t} - \frac{p_j}{\sum_{t=1}^{T} p_t} - (c_i p_i - c_j p_j) \tag{21}$$
$$= \left(\frac{1}{\sum_{t=1}^{T} p_t} - c_i\right) p_i - \left(\frac{1}{\sum_{t=1}^{T} p_t} - c_j\right) p_j \tag{22}$$
$$\le \left(\frac{1}{\sum_{t=1}^{T} p_t} - c_i\right) p_i - \left(\frac{1}{\sum_{t=1}^{T} p_t} - c_i\right) p_j \tag{23}$$
$$= \left(\frac{1}{\sum_{t=1}^{T} p_t} - c_i\right)(p_i - p_j) \le 0. \tag{24}$$
Since $p_i \ge p_j$ and $\lambda_i \ge \lambda_j$, both normalized differences are non-negative, so inequality (4) follows.
C PROOF OF PROPOSITION 3
C.1 COMPUTATION OF $h_t$
$$h_t = \mathbb{E}\!\left[\frac{Y_t}{Z_t}\right] = \sum_{z_1, z_2, \ldots, z_t} P(z_1, z_2, \cdots, z_t)\, \frac{\sum_{j=1}^{t} z_j x_j}{\sum_{j=1}^{t} z_j} \tag{25}$$
$$= \sum_{i=0}^{t} \left( \sum_{z_1, z_2, \cdots, z_t} \mathbb{1}\!\left(\sum_{j=1}^{t} z_j = i\right) P(z_1, z_2, \ldots, z_t)\, \frac{\sum_{j=1}^{t} z_j x_j}{\sum_{j=1}^{t} z_j} \right) \tag{26}$$
$$= \sum_{i=0}^{t} \sum_{z_1, z_2, \ldots, z_t} \mathbb{1}\!\left(\sum_{j=1}^{t} z_j = i\right) P(z_1, z_2, \cdots, z_t)\, \frac{\sum_{j=1}^{t} z_j x_j}{i} \tag{27}$$
$$= \sum_{i=0}^{t} m_i^{t}, \tag{28}$$
where $\mathbb{1}(\cdot)$ denotes the indicator function. We obtain Eq. (26) by partitioning the summation into $t+1$ groups; terms belonging to group $i$ satisfy $\sum_{j=1}^{t} z_j = i$. Letting $m_i^{t} = \sum_{z_1, z_2, \cdots, z_t} \mathbb{1}\!\left(\sum_{j=1}^{t} z_j = i\right) P(z_1, z_2, \cdots, z_t)\, \frac{\sum_{j=1}^{t} z_j x_j}{i}$, we obtain Eq. (28).
C.2 PROOF OF THE RECURRENT FORMULA FOR $m_i^{t+1}$
We now give the proof of the recurrent formula in Eq. (29):
$$m_i^{t+1} = p_{t+1}\left( b_{i-1}\, m_{i-1}^{t} + (1 - b_{i-1})\, q_{i-1}^{t}\, x_{t+1} \right) + (1 - p_{t+1})\, m_i^{t}. \tag{29}$$
Proof.
$$m_i^{t+1} = \sum_{z_1, \cdots, z_{t+1}} \mathbb{1}\!\left(\sum_{j=1}^{t+1} z_j = i\right) P(z_1, \cdots, z_{t+1})\, \frac{\sum_{j=1}^{t+1} z_j x_j}{i} \tag{30}$$
$$= \sum_{z_1, \cdots, z_{t+1}} \mathbb{1}\!\left(\sum_{j=1}^{t} z_j + z_{t+1} = i\right) P(z_1, \cdots, z_t)\, P(z_{t+1})\, \frac{\sum_{j=1}^{t} z_j x_j + z_{t+1} x_{t+1}}{i} \tag{31}$$
$$= \sum_{z_1, \cdots, z_t} \left[ \mathbb{1}\!\left(\sum_{j=1}^{t} z_j + 1 = i\right) P(z_1, \cdots, z_t)\, p_{t+1}\, \frac{\sum_{j=1}^{t} z_j x_j + x_{t+1}}{i} \right] + \sum_{z_1, \cdots, z_t} \mathbb{1}\!\left(\sum_{j=1}^{t} z_j = i\right) P(z_1, \cdots, z_t)\, (1 - p_{t+1})\, \frac{\sum_{j=1}^{t} z_j x_j}{i} \tag{32}$$
$$= \sum_{z_1, \cdots, z_t} \mathbb{1}\!\left(\sum_{j=1}^{t} z_j + 1 = i\right) P(z_1, \cdots, z_t)\, p_{t+1}\, \frac{\sum_{j=1}^{t} z_j x_j + x_{t+1}}{i} + (1 - p_{t+1}) \sum_{z_1, \cdots, z_t} \mathbb{1}\!\left(\sum_{j=1}^{t} z_j = i\right) P(z_1, \cdots, z_t)\, \frac{\sum_{j=1}^{t} z_j x_j}{i} \tag{33}$$
$$= p_{t+1} \sum_{z_1, \cdots, z_t} \mathbb{1}\!\left(\sum_{j=1}^{t} z_j = i-1\right) P(z_1, \cdots, z_t)\, \frac{i-1}{i}\, \frac{\sum_{j=1}^{t} z_j x_j + x_{t+1}}{i-1} + (1 - p_{t+1})\, m_i^{t} \tag{34}$$
$$= p_{t+1} \sum_{z_1, \cdots, z_t} \mathbb{1}\!\left(\sum_{j=1}^{t} z_j = i-1\right) P(z_1, \cdots, z_t) \left[ \frac{i-1}{i}\, \frac{\sum_{j=1}^{t} z_j x_j}{i-1} + \frac{x_{t+1}}{i} \right] + (1 - p_{t+1})\, m_i^{t} \tag{35}$$
$$= p_{t+1} \sum_{z_1, \cdots, z_t} \mathbb{1}\!\left(\sum_{j=1}^{t} z_j = i-1\right) P(z_1, \cdots, z_t) \left[ b_{i-1}\, \frac{\sum_{j=1}^{t} z_j x_j}{i-1} + (1 - b_{i-1})\, x_{t+1} \right] + (1 - p_{t+1})\, m_i^{t}. \tag{36}$$
Then, we have
$$m_i^{t+1} = p_{t+1} b_{i-1} \sum_{z_1, \cdots, z_t} \mathbb{1}\!\left(\sum_{j=1}^{t} z_j = i-1\right) P(z_1, \cdots, z_t)\, \frac{\sum_{j=1}^{t} z_j x_j}{i-1} + p_{t+1} (1 - b_{i-1}) \sum_{z_1, \cdots, z_t} \mathbb{1}\!\left(\sum_{j=1}^{t} z_j = i-1\right) P(z_1, \cdots, z_t)\, x_{t+1} + (1 - p_{t+1})\, m_i^{t}. \tag{37}$$
Since $q_{i-1}^{t} = P\!\left(\sum_{j=1}^{t} z_j = i-1\right) = \sum_{z_1, \cdots, z_t} \mathbb{1}\!\left(\sum_{j=1}^{t} z_j = i-1\right) P(z_1, \cdots, z_t)$, we obtain
$$m_i^{t+1} = p_{t+1}\left[ b_{i-1}\, m_{i-1}^{t} + (1 - b_{i-1})\, q_{i-1}^{t}\, x_{t+1} \right] + (1 - p_{t+1})\, m_i^{t}. \tag{38}$$
C.3 PROOF OF THE RECURRENT FORMULA FOR $q_i^{t+1}$
We present the proof of Eq. (39):
$$q_i^{t+1} = p_{t+1}\, q_{i-1}^{t} + (1 - p_{t+1})\, q_i^{t}. \tag{39}$$
Proof.
$$q_i^{t+1} = \sum_{z_1, \cdots, z_{t+1}} \mathbb{1}\!\left(\sum_{j=1}^{t+1} z_j = i\right) P(z_1, \cdots, z_{t+1}) \tag{40}$$
$$= \sum_{z_1, \cdots, z_{t+1}} \mathbb{1}\!\left(\sum_{j=1}^{t} z_j + z_{t+1} = i\right) P(z_1, \cdots, z_t)\, P(z_{t+1}) \tag{41}$$
$$= \sum_{z_1, \cdots, z_t} \mathbb{1}\!\left(\sum_{j=1}^{t} z_j + 1 = i\right) P(z_1, \cdots, z_t)\, p_{t+1} \tag{42}$$
$$\quad + \sum_{z_1, \cdots, z_t} \mathbb{1}\!\left(\sum_{j=1}^{t} z_j = i\right) P(z_1, \cdots, z_t)\, (1 - p_{t+1}) \tag{43}$$
$$= p_{t+1} \sum_{z_1, \cdots, z_t} \mathbb{1}\!\left(\sum_{j=1}^{t} z_j = i-1\right) P(z_1, \cdots, z_t) + (1 - p_{t+1})\, q_i^{t} \tag{44}$$
$$= p_{t+1}\, q_{i-1}^{t} + (1 - p_{t+1})\, q_i^{t}. \tag{45}$$
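The recurrences just proved translate directly into the O(T²) procedure of Algorithm 1 (Appendix E). Below is a small, self-contained NumPy sketch of that procedure, cross-checked against explicit marginalization over all 2^T subsets for a short sequence; variable names are ours, and the snippet is an illustration under the stated conventions rather than the authors' released implementation. Note the explicit update of $q_0^t = (1 - p_t)\, q_0^{t-1}$ at each step, which is needed so that the next iteration can form $q_1^{t+1}$.

```python
import numpy as np
from itertools import product

def maa_recurrent(x, p):
    """Marginalized average aggregation via the O(T^2) recurrences of Proposition 3.

    x: (T, d) array of snippet features; p: (T,) sampling probabilities.
    Returns h_T = E[ sum_t z_t x_t / sum_t z_t ] with z_t ~ Bernoulli(p_t)
    (the empty subset contributes 0, as in the proofs above).
    """
    T, d = x.shape
    b = np.array([i / (i + 1.0) for i in range(T + 1)])   # b_i = i / (i + 1)
    q = np.zeros(T + 1)                                    # q[i] = P(Z_t = i)
    m = np.zeros((T + 1, d))                               # m[i] = m_i^t
    q[0] = 1.0
    for t in range(1, T + 1):
        pt, xt = p[t - 1], x[t - 1]
        q_new = np.zeros(T + 1)
        m_new = np.zeros((T + 1, d))
        q_new[0] = (1 - pt) * q[0]                         # i = 0 case of the q-recurrence (Eq. (39))
        for i in range(1, t + 1):
            q_new[i] = pt * q[i - 1] + (1 - pt) * q[i]
            m_new[i] = pt * (b[i - 1] * m[i - 1] + (1 - b[i - 1]) * q[i - 1] * xt) \
                       + (1 - pt) * m[i]
        q, m = q_new, m_new
    return m.sum(axis=0)                                   # h_T = sum_i m_i^T

def maa_bruteforce(x, p):
    """Reference: marginalize over all 2^T subsets explicitly."""
    T, d = x.shape
    out = np.zeros(d)
    for z in product([0, 1], repeat=T):
        prob = np.prod([p[t] if zt else 1 - p[t] for t, zt in enumerate(z)])
        if sum(z) > 0:
            out += prob * (np.array(z)[:, None] * x).sum(axis=0) / sum(z)
    return out

rng = np.random.default_rng(0)
x = rng.normal(size=(8, 3))
p = rng.uniform(0.05, 0.95, size=8)
print(np.allclose(maa_recurrent(x, p), maa_bruteforce(x, p)))   # True
```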
D RELATED WORK
Video Action Analysis. Researchers have developed quite a few deep network models for video action analysis. Two-stream networks (Simonyan & Zisserman, 2014) and 3D convolutional neural networks (C3D) (Tran et al., 2015) are popular solutions for learning video representations, and these techniques, including their variations, are extensively used for video action analysis. Recently, a combination of two-stream networks and 3D convolutions, referred to as I3D (Carreira & Zisserman, 2017), was proposed as a generic video representation learning method, and it has served as an effective backbone network in various video analysis tasks such as recognition (Wang et al., 2016), localization (Shou et al., 2016), and weakly-supervised learning (Wang et al., 2017).
Weakly-Supervised Temporal Action Localization. There are only a few approaches based on weakly-supervised learning that rely solely on video-level class labels to localize actions in the temporal domain. Wang et al. (Wang et al., 2017) proposed an UntrimmedNet framework, in which two softmax functions are applied across class labels and proposals to perform action classification and to detect important temporal segments, respectively. However, using the softmax function across proposals may not be effective for identifying multiple instances. Singh et al. (Singh & Lee, 2017) designed a Hide-and-Seek model that randomly hides some regions of a video during training and forces the network to seek other relevant regions. However, the random hiding operation, as a form of data augmentation, cannot control whether it is the action region or the background region that is hidden during training, especially when the hiding probabilities for all the regions are the same. Nguyen et al. (Nguyen et al., 2018) proposed a sparse temporal pooling network (STPN) to identify a sparse set of key segments associated with the actions through attention-based temporal pooling of video segments. However, the sparsity constraint may force the network to focus on very few segments and lead to incomplete detection. To prevent the model from focusing only on the most salient regions, we propose the MAAN model, which explicitly takes the expectation of the average aggregated features over all the sampled subsets of snippets from the video.
Feature Aggregators. Learning discriminative localization representations with only video-level class labels requires a feature aggregation operation that turns multiple snippet-level representations into a video-level representation for classification. Feature aggregation mechanisms are widely adopted in the deep learning literature across a variety of scenarios, for example, neural machine translation (Bahdanau et al., 2015), visual question answering (Hermann et al., 2015), and so on. However, most of these cases belong to fully-supervised learning, where the goal is to learn a model that attends to the most relevant features given supervision information directly tied to the task. Many variants of feature aggregators have been proposed, ranging from nonparametric max pooling and average pooling to parametric hard attention (Gkioxari et al., 2015), soft attention (Vaswani et al., 2017; Sharma et al., 2015), second-order pooling (Girdhar & Ramanan, 2017; Kong & Fowlkes, 2017), structured attention (Kim et al., 2017; Mensch & Blondel, 2018), graph aggregators (Zhang et al., 2018a; Hamilton et al., 2017), and so on.
Different from the fully-supervised setting, where the feature aggregator is designed for the corresponding task, we develop a feature aggregator that is trained only with class labels and is then used to predict dense action locations for test data. Different from the heuristic approaches (Wei et al., 2017; Zhang et al., 2018b), which can be considered a kind of hard-coded attention that erases some regions with a hand-crafted threshold, we introduce the end-to-end differentiable marginalized average aggregation, which incorporates learnable latent discriminative probabilities into the learning process.
E MARGINALIZED AVERAGE AGGREGATION
Algorithm 1 Marginalized Average Aggregation
Input: feature representations $\{x_1, x_2, \cdots, x_T\}$, sampling probabilities $\{p_1, p_2, \cdots, p_T\}$.
Output: aggregated representation $\bar{x}$.
Initialize $m_0^0 = 0$, $q_0^0 = 1$, and $b_i = \frac{i}{i+1}$;
for $t = 1$ to $T$ do
  Set $m_0^t = 0$, $q_0^t = (1 - p_t)\, q_0^{t-1}$, $q_{-1}^t = 0$, and $q_{t+1}^t = 0$;
  for $i = 1$ to $t$ do
    $q_i^t = p_t\, q_{i-1}^{t-1} + (1 - p_t)\, q_i^{t-1}$
    $m_i^t = p_t \left( b_{i-1}\, m_{i-1}^{t-1} + (1 - b_{i-1})\, q_{i-1}^{t-1}\, x_t \right) + (1 - p_t)\, m_i^{t-1}$
  end for
end for
Return $\bar{x} = \sum_{i=0}^{T} m_i^{T}$
F EXPERIMENTS ON WEAKLY-SUPERVISED IMAGE OBJECT LOCALIZATION
F.1 MODELS AND IMPLEMENTATION DETAILS
We also evaluate the proposed model on the weakly-supervised object localization task. For weakly-supervised object localization, we are given a set of images in which each image is labeled only with its category label. The goal is to learn a model that predicts both the category label and the bounding box for the objects in a new test image. Based on the model in (Zhou et al., 2016a) (denoted as the CAM model), we replace the global average pooling feature aggregator with other kinds of feature aggregators, such as weighted sum pooling and the proposed MAA, extending the original 1D temporal version used in temporal action localization into a 2D spatial version. We denote the model with weighted sum pooling as the weighted-CAM model. For the weighted-CAM model and the proposed MAAN model, we use an attention module to generate the attention weight $\lambda$ in STPN or the latent discriminative probability $p$ in MAAN. The attention module consists of a 2D convolutional layer of kernel size 1×1 and stride 1 with 256 units, a LeakyReLU layer, a 2D convolutional layer of kernel size 1×1 and stride 1 with 1 unit, and a sigmoid non-linear activation.
F.2 DATASET AND EVALUATION METRIC
We evaluate the weakly-supervised localization accuracy of the proposed model on the CUB-200-2011 dataset (Wah et al., 2011). The CUB-200-2011 dataset has 11,788 images of 200 categories, with 5,994 images for training and 5,794 for testing. We adopt the localization metric suggested by (Russakovsky et al., 2015) for comparison. This metric reports, as the localization error, the percentage of images that are either misclassified or localized with bounding boxes having less than 50% IoU with the ground truth.
F.3 COMPARISONS
We compare our MAA aggregator (MAAN) with weighted sum pooling (weighted-CAM) and global average pooling (CAM (Zhou et al., 2016b)). For MAAN and weighted-CAM, we pool the convolutional features for aggregation into two different sizes, 4×4 and 7×7. We fix all other factors (e.g., network structure, hyper-parameters, optimizer) and vary only the feature aggregators to evaluate the models.
F.3.1 QUANTITATIVE RESULTS
The localization errors of the different methods are presented in Table 4, where GoogLeNet-GAP is the CAM model. Our method outperforms GoogLeNet-GAP by 5.06% in Top-1 localization error.
Meanwhile, MAAN achieves a consistently lower localization error than weighted-CAM under both pooling settings. This demonstrates that the proposed MAAN can improve localization performance in the weakly-supervised setting. Moreover, both MAAN and weighted-CAM obtain a smaller localization error with the 7×7 setting than with the 4×4 setting.
F.3.2 VISUALIZATION
Figure 6 visualizes the heat maps and localization bounding boxes obtained by all the compared methods. The object localization heat maps generated by the proposed MAAN cover larger object regions and yield more accurate bounding boxes.
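For reference, the 2D attention module described in Appendix F.1 (1×1 convolution with 256 units, LeakyReLU, 1×1 convolution with 1 unit, sigmoid) can be written in a few lines of PyTorch. The module and variable names below are our own, and the input channel count is an assumption that should be matched to the backbone feature map; this is a sketch, not the authors' released code.

```python
import torch
import torch.nn as nn

class SpatialAttention2d(nn.Module):
    """2D attention module from Appendix F.1:
    1x1 conv (256 units) -> LeakyReLU -> 1x1 conv (1 unit) -> sigmoid.

    Produces one latent discriminative probability (or attention weight) per spatial location.
    """
    def __init__(self, in_channels=1024):        # in_channels is an assumption
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 256, kernel_size=1, stride=1),
            nn.LeakyReLU(),
            nn.Conv2d(256, 1, kernel_size=1, stride=1),
            nn.Sigmoid(),
        )

    def forward(self, feature_map):               # (N, C, H, W)
        return self.net(feature_map)               # (N, 1, H, W), values in (0, 1)

# Example: probabilities for a 7x7 feature map, as in the 7x7 pooling setting.
probs = SpatialAttention2d(1024)(torch.randn(2, 1024, 7, 7))
print(probs.shape)   # torch.Size([2, 1, 7, 7])
```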
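Similarly, the localization-error metric from Appendix F.2 (an image counts as an error if it is misclassified or if its predicted box has less than 50% IoU with the ground truth) is straightforward to compute. The helper below is our own illustrative sketch, with boxes assumed to be given as (x1, y1, x2, y2) tuples.

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter) if inter > 0 else 0.0

def localization_error(preds, targets, iou_threshold=0.5):
    """Fraction of images that are misclassified or localized with IoU below the threshold.

    preds / targets: lists of (class_id, box) pairs, one entry per image.
    """
    errors = sum(
        1 for (pc, pb), (tc, tb) in zip(preds, targets)
        if pc != tc or iou(pb, tb) < iou_threshold
    )
    return errors / len(targets)

# Example: one accurate detection and one box with no overlap -> 50% error.
preds   = [(3, (10, 10, 50, 50)), (7, (0, 0, 10, 10))]
targets = [(3, (12, 12, 48, 52)), (7, (30, 30, 60, 60))]
print(localization_error(preds, targets))   # 0.5
```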
1. What is the focus of the paper regarding temporal action localization? 2. What are the strengths of the proposed approach, particularly in terms of attention networks? 3. What are the weaknesses of the paper, especially in experimentation? 4. Do you have any concerns regarding the effectiveness of the proposed method in other tasks?
Review
Review
This paper considers the problem of weakly-supervised temporal action localization. It proposes a marginalized average attentional network (MAAN) to suppress the effect of overestimating salient regions. Theoretically, this paper proves that the learned latent discriminative probabilities reduce the difference of responses between the most salient regions and the others. In addition, it develops a fast algorithm to reduce the complexity of constructing MAA to O(T^2). Experiments are conducted on THUMOS14 and ActivityNet 1.3. I like the theoretical part of this paper but have concerns about the experiments. More specifically, my doubts are:
- The I3D network models are not trained from scratch. The parameters are borrowed from (Carreira and Zisserman 2017), which in fact makes the attention averaging very easy. I don't know whether the success is because the proposed MAAN is working or because the feature representation is very powerful.
- If possible, I wish to see the success of the proposed method on other tasks, such as image caption generation and machine translation. If the paper can show success in any such task, I would like to adjust my rating to above acceptance.
ICLR
It shows that achieving a good video-level classification performance cannot guarantee obtaining a good snippet-level localization performance because the former only requires the correct prediction of the existence of an action, while the latter requires the correct prediction of both its existence and its duration and location. Moreover, Table 1 demonstrates that MAAN consistently outperforms all the baseline models at different levels of IoUs in the weakly-supervised temporal localization task. Both the “Norm” and “SoftmaxNorm” are the normalized weighted average aggregation. However, the “SoftmaxNorm” performs the worst, because the softmax function over-amplifies the weight of the most salient snippet. As a result, it tends to identify very few discriminative snippets and obtains sparse and non-integral localization. The “Norm” also performs worse than our MAAN. It is the normalized weighted average over the snippet-level representation, while MAAN can be considered as the normalized weighted average (expectation) over the subsetlevel representation. Therefore, MAAN encourages the identification of dense and integral action segments as compared to “Norm” which encourages the identification of only several discriminative snippets. MAAN works better than “Dropout” because “Dropout” randomly drops out the snippets with different attention weights by uniform probabilities. At each iteration, the scale of the aggregated feature varies a lot, however, MAAN samples with the learnable latent discriminative probability and conducts the expectation of keeping the scale of the aggregated feature stable. Compared to STPN, MAAN also achieves superior results. MAAN implicitly factorizes the attention weight into ctpt, where pt learns the latent discriminative probability of the current snippet, and ct captures the contextual information and regularizes the network to learn a more informative aggregation. The properties of MAA disallow the predicted class activation sequences to concentrate on the most salient regions. The quantitative results show the effectiveness of the MAA feature aggregator. 2https://github.com/pytorch/pytorch Figure 5 visualizes the one-dimensional CASs of the proposed MAAN and all the baseline models. The temporal CAS generated by MAAN can cover large and dense regions to obtain more accurate action segments. In the example in Figure 5, MAAN can discover almost all the actions that are annotated in the ground-truth; however, the STPN have missed several action segments, and also tends to only output the more salient regions in each action segment. Other methods are much sparser compared to MAAN. The first row of Figure 5 shows several action segments in red and in green, corresponding to action segments that are relatively difficult and easy to be localized, respectively. We can see that all the easily-localized segments contain the whole person who is performing the “HammerThrow” action, while the difficultly-localized segments contain only a part of the person or the action. Our MAAN can successfully localize the easy segments as well as the difficult segments; however, all the other methods fail on the difficult ones. It shows that MAAN can identify several dense and integral action regions other than only the most discriminative region which is identified by the other methods. We also compare our model with the state-of-the-art action localization approaches on the THUMOS14 dataset. The numerical results are summarized in Table 2. 
We include both fully and weakly-supervised learning, as in (Nguyen et al., 2018). As shown in Table 2, our implemented STPN performs slightly better than the results reported in the original paper (Nguyen et al., 2018). From Table 2, our proposed MAAN outperforms the STPN and most of the existing weakly-supervised action localization approaches. Furthermore, our model still presents competitive results compared with several recent fully-supervised approaches even when trained with only video-level labels. 3.3 ACTIVITYNET1.3 DATASET We train the MAAN model on the ActivityNet1.3 training set and compare our performance with the recent state-of-the-art approaches on the validation set in Table 3. The action segment in ActivityNet is usually much longer than that of THUMOS14 and occupies a larger percentage of a video. We use a set of thresholds, which are [0.2, 0.15, 0.1, 0.05] of the max value of the CAS, to generate the proposals from the one-dimensional CAS. As shown in Table 3, with the set of thresholds, our implemented STPN performs slightly better than the results reported in the original paper (Nguyen et al., 2018). With the same threshold and experimental setting, our proposed MAAN model outperforms the STPN approach on the large-scale ActivityNet1.3. Similar to THUMOS14, our model also achieves good results that are close to some of the fully-supervised approaches. 4 CONCLUSION We have proposed the marginalized average attentional network (MAAN) for weakly-supervised temporal action localization. MAAN employs a novel marginalized average aggregation (MAA) operation to encourage the network to identify the dense and integral action segments and is trained in an end-to-end fashion. Theoretically, we have proved that MAA reduces the gap between the most discriminant regions in the video to the others, and thus MAAN generates better class activation sequences to infer the action locations. We have also proposed a fast algorithm to reduce the computation complexity of MAA. Our proposed MAAN achieves superior performance on both the THUMOS14 and the ActivityNet1.3 datasets on weakly-supervised temporal action localization tasks compared to current state-of-the-art methods. 5 ACKNOWLEDGEMENT We thank our anonymous reviewers for their helpful feedback and suggestions. Prof. Ivor W. Tsang was supported by ARC FT130100746, ARC LP150100671, and DP180100106. A PROOF OF PROPOSITION 1 A.1 PROOF OF EQUATION (3) Proof. E [∑T i=1 zixi∑T i=1 zi ] = ∑T i=1 E[zi/ ∑T i=1 zi]xi. (15) In addition, E[zi/ ∑T i=1 zi] = pi × E [ 1/(1 + ∑T k=1,k 6=i zk) ] + (1− pi)× 0 = pici. (16) Thus, we achieve E [∑T i=1 zixi∑T i=1 zi ] = ∑T i=1 cipixi = ∑T i=1 λixi. (17) A.2 PROOF OF pi ≥ pj ⇔ ci ≥ cj ⇔ λi ≥ λj Proof. Denote ST = ∑T k=1,k 6=i,k 6=j zk, then we have ci − cj = E [ 1/(1 + ∑ k 6=i zk) ] − E [ 1/(1 + ∑ k 6=j zk) ] (18) = pjE [1/(2 + ST )] + (1− pj)E [1/(1 + ST )]− piE [1/(2 + ST )]− (1− pi)E [1/(1 + ST )] = (pi − pj) (E [1/(1 + ST )]− E [1/(2 + ST )]) . (19) Since E [1/(1 + ST )]− E [1/(2 + ST )] > 0, we achieve that pi ≥ pj ⇔ ci ≥ cj . Since λi = cipi and λj = cjpj , and ci, cj , pi, pj ≥ 0, it follows that pi ≥ pj ⇔ λi ≥ λj . B PROOF OF PROPOSITION 2 Proof. ∑T i=1 cipi = ∑T i=1 E[zi/ ∑T i=1 zi] = E [ ( ∑T i=1 zi)/( ∑T i=1 zi) ] = 1 When p1 = p2 = · · · = pT , we have λ1 = λ2 = · · · = λT . Then inequality (4) trivially holds true. Without loss of generality, assume p1 ≥ p2 ≥ · · · ≥ pT and there exists a strict inequality. 
Then ∃k ∈ {1, ..., T − 1} such that ci ≥ 1/( ∑T t=1 pt) for 1 ≤ i ≤ k and cj ≤ 1/( ∑T t=1 pt) for k < j ≤ T . Otherwise, we obtain ci ≥ 1/( ∑T t=1 pt) or ci ≤ 1/( ∑T t=1 pt) for 1 ≤ i ≤ T and there exists a strict inequality. It follows that ∑T i=1 cipi > 1 or ∑T i=1 cipi < 1, which contradicts∑T i=1 cipi = 1. Thus, we obtain the set I 6= ∅. Without loss of generality, for 1 ≤ i ≤ k and i ≤ j ≤ T , we have ci ≥ 1/( ∑T t=1 pt) and pi ≥ pj , then we obtain that ci ≥ cj . It follows that pi/( ∑T t=1 pt)− pj/( ∑T t=1 pt)− ( λi/( ∑T t=1 λt)− λj/( ∑T t=1 λt) ) (20) = pi/( ∑T t=1 pt)− pj/( ∑T t=1 pt)− (cipi − cjpj) (21) = ( 1/( ∑T t=1 pt)− ci ) pi − ( 1/( ∑T t=1 pt)− cj ) pj (22) ≤ ( 1/( ∑T t=1 pt)− ci ) pi − ( 1/( ∑T t=1 pt)− ci ) pj (23) = ( 1/( ∑T t=1 pt)− ci ) (pi − pj) ≤ 0. (24) C PROOF OF PROPOSITION 3 C.1 COMPUTATION OF ht ht = E[ Yt Zt ] = ∑ z1,z2,...,zt P (z1, z2, · · · zt) ∑t j=1 zjxj∑t j=1 zj (25) = ∑t i=0 ( ∑ z1,z2,···zt 1 (∑t j=1 zj = i ) P (z1, z2, ..., zt) ∑t j=1 zjxj∑t j=1 zj ) (26) = ∑t i=0 ∑ z1,z2,...,zt 1 (∑t j=1 zj = i ) P (z1, z2, · · · zt) ∑t j=1 zjxj i (27) = ∑t i=0 mti, (28) where 1(·) denotes the indicator function. We achieve Eq. (26) by partitioning the summation into t+ 1 groups . Terms belonging to group i have ∑t j=1 zj = i. Let mti = ∑ z1,z2,···zt 1 (∑t j=1 zj = i ) P (z1, z2, · · · zt) ∑t j=1 zjxj i , and we achieve Eq. (28). C.2 PROOF OF RECURRENT FORMULA OF mt+1i We now give the proof of the recurrent formula of Eq. (29) mt+1i = pt+1 ( bi−1m t i−1 + (1− bi−1)qti−1xt+1 ) + (1− pt+1)mti. (29) Proof. mt+1i = ∑ z1,z2,···zt,zt+1 1 (∑t+1 j=1 zj = i ) P (z1, z2, · · · zt+1) ∑t+1 j=1 zjxj i (30) = ∑ z1,z2,···zt,zt+1 1 (∑t j=1 zj + zt+1 = i ) P (z1, z2, · · · zt)P (zt+1) ∑t j=1 zjxj + zt+1xt+1 i (31) = ∑ z1,z2,···zt [ 1 (∑t j=1 zj + 1 = i ) P (z1, z2, · · · zt) pt+1 ∑t j=1 zjxj+xt+1 i ] + ∑ z1,z2,···zt 1 (∑t j=1 zj = i ) P (z1, z2, · · · zt) (1− pt+1) ∑t j=1 zjxj i (32) = ∑ z1,z2,···zt 1 (∑t j=1 zj + 1 = i ) P (z1, z2, · · · zt) pt+1 ∑t j=1 zjxj+xt+1 i +(1− pt+1) ∑ z1,z2,···zt 1 (∑t j=1 zj = i ) P (z1, z2, · · · zt) ∑t j=1 zjxj i (33) = pt+1 ∑ z1,z2,···zt 1 (∑t j=1 zj = i− 1 ) P (z1, z2, · · · zt) i−1i ∑t j=1 zjxj+xt+1 i−1 +(1− pt+1)mti (34) = pt+1 ∑ z1,z2,···zt 1 (∑t j=1 zj = i− 1 ) P (z1, z2, · · · zt) [ i−1 i ∑t j=1 zjxj i−1 + xt+1 i ] +(1− pt+1)mti (35) = pt+1 ∑ z1,z2,···zt 1 (∑t j=1 zj = i− 1 ) P (z1, z2, · · · zt) [ bi−1 ∑t j=1 zjxj i−1 + (1− bi−1)xt+1 ] +(1− pt+1)mti (36) Then, we have mt+1i = pt+1bi−1 ∑ z1,z2,···zt 1 (∑t j=1 zj = i− 1 ) P (z1, z2, · · · zt) ∑t j=1 zjxj i−1 +pt+1(1− bi−1) ∑ z1,z2,···zt 1 (∑t j=1 zj = i− 1 ) P (z1, z2, · · · zt)xt+1 + (1− pt+1)mti. (37) Since qti−1 = P (∑t j=1 zj = i− 1 ) = ∑ z1,z2,···zt 1 (∑t j=1 zj = i− 1 ) P (z1, z2, · · · zt) we can achieve mt+1i = pt+1 [ bi−1m t i−1 + (1− bi−1)qti−1xt+1 ] + (1− pt+1)mti. (38) C.3 PROOF OF RECURRENT FORMULA OF qt+1i We present the proof of Eq. (39) qt+1i = pt+1q t i−1 + (1− pt+1)qti (39) Proof. qt+1i = ∑ z1,z2,···zt,zt+1 1 (∑t+1 j=1 zj = i ) P (z1, z2, · · · zt+1) (40) = ∑ z1,z2,···zt,zt+1 1 (∑t j=1 zj + zt+1 = i ) P (z1, z2, · · · zt)P (zt+1) (41) = ∑ z1,z2,···zt 1 (∑t j=1 zj + 1 = i ) P (z1, z2, · · · zt) pt+1 (42) + ∑ z1,z2,···zt 1 (∑t j=1 zj = i ) P (z1, z2, · · · zt) (1− pt+1) (43) = pt+1 ∑ z1,z2,···zt 1 (∑t j=1 zj = i− 1 ) P (z1, z2, · · · zt) + (1− pt+1)qti (44) = pt+1q t i−1 + (1− pt+1)qti (45) D RELATED WORK Video Action Analysis. Researchers have developed quite a few deep network models for video action analysis. 
Two-stream networks (Simonyan & Zisserman, 2014) and 3D convolutional neural networks (C3D) (Tran et al., 2015) are popular solutions to learn video representations and these techniques, including their variations, are extensively used for video action analysis. Recently, a combination of two-stream networks and 3D convolutions, referred to as I3D (Carreira & Zisserman, 2017), was proposed as a generic video representation learning method, and served as an effective backbone network in various video analysis tasks such as recognition (Wang et al., 2016), localization (Shou et al., 2016), and weakly-supervised learning (Wang et al., 2017). Weakly-Supervised Temporal Action Localization. There are only a few approaches based on weakly-supervised learning that rely solely on video-level class labels to localize actions in the temporal domain. Wang et al. (Wang et al., 2017) proposed a UntrimmedNet framework, where two softmax functions are applied across class labels and proposals to perform action classification and detect important temporal segments, respectively. However, using the softmax function across proposals may not be effective for identifying multiple instances. Singh et al. (Singh & Lee, 2017) designed a Hide-and-Seek model to randomly hide some regions in a video during training and force the network to seek other relevant regions. However, the randomly hiding operation, as a data augmentation, cannot guarantee whether it is the action region or the background region that is hidden during training, especially when the dropout probabilities for all the regions are the same. Nguyen et al. (Nguyen et al., 2018) proposed a sparse temporal pooling network (STPN) to identify a sparse set of key segments associated with the actions through attention-based temporal pooling of video segments. However, the sparse constraint may force the network to focus on very few segments and lead to incomplete detection. In order to prevent the model from focusing only on the most salient regions, we are inspired to propose the MAAN model to explicitly take the expectation with respect to the average aggregated features of all the sampled subsets from the video. Feature Aggregators. Learning discriminative localization representations with only video-level class labels requires the feature aggregation operation to turn multiple snippet-level representations into a video-level representation for classification. The feature aggregation mechanism is widely adopted in the deep learning literature and a variety of scenarios, for example, neural machine translation (Bahdanau et al., 2015), visual question answering (Hermann et al., 2015), and so on. However, most of these cases belong to fully-supervised learning where the goal is to learn a model that attends the most relevant features given the supervision information corresponding to the task directly. Many variant feature aggregators have been proposed, ranging from nonparametric max pooling and average pooling, to parametric hard attention (Gkioxari et al., 2015), soft attention (Vaswani et al., 2017; Sharma et al., 2015), second-order pooling (Girdhar & Ramanan, 2017; Kong & Fowlkes, 2017), structured attention (Kim et al., 2017; Mensch & Blondel, 2018), graph aggregators (Zhang et al., 2018a; Hamilton et al., 2017), and so on. 
Different from the fullysupervised setting where the feature aggregator is designed for the corresponding tasks, we develop a feature aggregator that is trained only with class labels, and then to be used to predict the dense action locations for test data. Different from the heuristic approaches (Wei et al., 2017; Zhang et al., 2018b) which can be considered as a kind of hard-code attention by erasing some regions with a hand-crafted threshold, we introduce the end-to-end differentiable marginalized average aggregation which incorporates learnable latent discriminative probabilities into the learning process. E MARGINALIZED AVERAGE AGGREGATION Algorithm 1 Marginalized Average Aggregation Input: Feature Representations {x1,x2, · · ·xT } , Sampling Probability {p1, p2, · · · pT }. Output: Aggregated Representation x Initialize m00 = 0, q 0 0 = 1, bi = i i+1 ; for t = 1 to T do Set mt0 = 0, and q t −1 = 0 and q t t+1 = 0; for i = 1 to t do qti = ptq t−1 i−1 + (1− pt) q t−1 i mti = pt ( bi−1m t−1 i−1 + (1− bi−1)q t−1 i−1xt ) + (1− pt)mt−1i end for end for Return x = T∑ i=0 mTi F EXPERIMENTS ON WEAKLY-SUPERVISED IMAGE OBJECT LOCALIZATION F.1 MODELS AND IMPLEMENTATION DETAILS We also evaluate the proposed model on the weakly-supervised object localization task. For weaklysupervised object localization, we are given a set of images in which each image is labeled only with its category label. The goal is to learn a model to predict both the category label as well as the bounding box for the objects in a new test image. Based on the model in (Zhou et al., 2016a) (denoted as CAM model), we replace the global average pooling feature aggregator with other kinds of feature aggregator, such as the weighted sum pooling and the proposed MAA by extending the original 1D temporal version in temporal action localization into a 2D spatial version. We denote the model with weighted sum pooling as the weighted-CAM model. For the weighted-CAM model and the proposed MAAN model, we use an attention module to generate the attention weight λ in STPN or the latent discriminative probability p in MAAN. The attention module consists of a 2D convolutional layer of kernel size 1× 1, stride 1 with 256 units, a LeakyReLU layer, a 2D convolutional layer of kernel size 1× 1, stride 1 with 1 unit, and a sigmoid non-linear activation. F.2 DATASET AND EVALUATION METRIC We evaluate the weakly-supervised localization accuracy of the proposed model on the CUB-2002011 dataset (Wah et al., 2011). The CUB-200-2011 dataset has 11,788 images of 200 categories with 5,994 images for training and 5,794 for testing. We leverage the localization metric suggested by (Russakovsky et al., 2015) for comparison. This metric computes the percentage of images that is misclassified or with bounding boxes with less than 50% IoU with the groundtruth as the localization error. F.3 COMPARISONS We compare our MAA aggregator (MAAN) with the weighted sum pooling (weighted-CAM) and global average pooling (CAM (Zhou et al., 2016b)). For MAAN and weighted-CAM, we pool the convolutional feature for aggregation into two different sizes, 4× 4 and 7× 7. We fix all other factors (e.g. network structure, hyper-parameters, optimizer), except for the feature aggregators to evaluate the models. F.3.1 QUALITATIVE RESULTS The localization errors for different methods are presented in Table 4, where the GoogLeNet-GAP is the CAM model. Our method outperforms GoogLeNet-GAP by 5.06% in a Top-1 error. 
Meanwhile, MAAN achieves consistently lower localization error than weighted-CAM on the two learning schemes. It demonstrates that the proposed MAAN can improve the localization performance in the weakly-supervised setting. Moreover, both MAAN and weighted-CAM obtain smaller localization error when employing the 7× 7 learning scheme than the 4× 4 learning scheme. F.3.2 VISUALIZATION Figure 6 visualizes the heat maps and localization bounding boxes obtained by all the compared methods. The object localization heat maps generated by the proposed MAAN can cover larger object regions and obtain more accurate bounding boxes.
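For concreteness, below is a minimal PyTorch sketch (ours, not the authors' released code) of the 2D attention module described in F.1, together with the weighted sum pooling used by the weighted-CAM baseline. The input channel count of 1024 and the pooling helper are our assumptions for illustration; in MAAN, the sigmoid output would instead be interpreted as the latent discriminative probability p and passed to the 2D MAA aggregator rather than used for a weighted sum.

```python
import torch
import torch.nn as nn

class SpatialAttention(nn.Module):
    """2D attention module from F.1 (a sketch; in_channels=1024 is an assumption
    that depends on the chosen backbone feature map)."""
    def __init__(self, in_channels=1024, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, hidden, kernel_size=1, stride=1),
            nn.LeakyReLU(),
            nn.Conv2d(hidden, 1, kernel_size=1, stride=1),
            nn.Sigmoid(),
        )

    def forward(self, feat):                   # feat: (B, C, H, W)
        return self.net(feat)                  # (B, 1, H, W), values in [0, 1]

def weighted_cam_pool(feat, attn):
    """Weighted sum pooling for the weighted-CAM baseline: sum over H, W of attn * feat."""
    return (attn * feat).flatten(2).sum(dim=2)  # (B, C)

feat = torch.randn(2, 1024, 7, 7)               # e.g. a 7x7 convolutional feature map
attn = SpatialAttention()(feat)
image_level = weighted_cam_pool(feat, attn)
print(attn.shape, image_level.shape)            # (2, 1, 7, 7) and (2, 1024)
```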
1. What is the main contribution of the paper in the context of weakly-supervised video localization? 2. How does the proposed stochastic pooling method address the issue of discriminative attention in dense labeling? 3. What are the strengths and weaknesses of the proposed approach compared to prior works? 4. How does the reviewer assess the clarity, originality, and significance of the paper's content? 5. Are there any suggestions or ideas for future work related to the proposed method?
Review
Review
Summary: This paper proposes a stochastic pooling method over the temporal dimension for the weakly-supervised video localization problem. The main motivation is to resolve a problem with discriminative attention, which tends to focus on a few discriminative parts of the input, an undesirable behavior for dense labeling (i.e., localization). The proposed stochastic pooling addresses this by aggregating over all possible subsets of snippets, where each subset is constructed by sampling snippets from a learnable sampling distribution. The authors show, both theoretically and empirically, that this approach learns smoother attention.
Clarity: The paper is well written and easy to follow. The ideas and methods are clearly presented.
Originality and significance: The proposed stochastic pooling is novel and demonstrated to be empirically useful. Given that the method is generally applicable to other tasks, I think the significance of the work is also reasonable. One suggestion is to apply the idea to semantic segmentation, which shares a similar problem setting but is easier to evaluate than videos. Similar to (Zhou et al. 2016), one could plug the proposed pooling method on top of a CNN feature map instead of global average pooling, which might be doable at a more affordable computational cost since the number of hidden units to pool over is much smaller than the length of videos (N < T). One downside of the proposed method is its computational complexity (O(T^2)). This is much higher than that of other feedforward methods (O(T)), which can also be easily parallelized (O(1)). This can be a serious problem for very long sequences (increasing the length of each snippet is one alternative, but it is undesirable for localization in the end). Considering this disadvantage, the performance gain from the proposed method may not be attractive enough.
Experiment: Overall, the experiments look convincing to me.
Minor comments: Citation error: Nguyen et al. CVPR 2017 should be CVPR 2018.
ICLR
Title MARGINALIZED AVERAGE ATTENTIONAL NETWORK FOR WEAKLY-SUPERVISED LEARNING Abstract In weakly-supervised temporal action localization, previous works have failed to locate dense and integral regions for each entire action due to the overestimation of the most salient regions. To alleviate this issue, we propose a marginalized average attentional network (MAAN) to suppress the dominant response of the most salient regions in a principled manner. The MAAN employs a novel marginalized average aggregation (MAA) module and learns a set of latent discriminative probabilities in an end-to-end fashion. MAA samples multiple subsets from the video snippet features according to a set of latent discriminative probabilities and takes the expectation over all the averaged subset features. Theoretically, we prove that the MAA module with learned latent discriminative probabilities successfully reduces the difference in responses between the most salient regions and the others. Therefore, MAAN is able to generate better class activation sequences and identify dense and integral action regions in the videos. Moreover, we propose a fast algorithm to reduce the complexity of constructing MAA from O(2 ) to O(T ). Extensive experiments on two large-scale video datasets show that our MAAN achieves a superior performance on weakly-supervised temporal action localization. 1 INTRODUCTION Weakly-supervised temporal action localization has been of interest to the community recently. The setting is to train a model with solely video-level class labels, and to predict both the class and the temporal boundary of each action instance at the test time. The major challenge in the weakly-supervised localization problem is to find the right way to express and infer the underlying location information with only the video-level class labels. Traditionally, this is achieved by explicitly sampling several possible instances with different locations and durations (Bilen & Vedaldi, 2016; Kantorov et al., 2016; Zhang et al., 2017). The instance-level classifiers would then be trained through multiple instances learning (Cinbis et al., 2017; Yuan et al., 2017a) or curriculum learning (Bengio et al., 2009). However, the length of actions and videos varies too much such that the number of instance proposals for each video varies a lot and it can also be huge. As a result, traditional methods based on instance proposals become infeasible in many cases. Recent research, however, has pivoted to acquire the location information by generating the class activation sequence (CAS) directly (Nguyen et al., 2018), which produces the classification score sequence of being each action for each snippet over time. The CAS along the 1D temporal dimension for a video is inspired by the class activation map (CAM) (Zhou et al., 2016a; 2014; Pinheiro & Collobert, 2015; Oquab et al., 2015) in weakly-supervised object detection. The CAM-based models have shown that despite being trained on image-level labels, convolutional neural networks (CNNs) have the remarkable ability to localize objects. Similar to object detection, the basic idea behind CAS-based methods for action localization in the training is to sample the non-overlapping snippets from a video, then to aggregate the snippet-level features into a video-level feature, and finally to yield a video-level class prediction. 
During testing, the model generates a CAS for each class that identifies the discriminative action regions, and then applies a threshold on the CAS to localize each action instance in terms of the start time and the end time. In CAS-based methods, the feature aggregator that aggregates multiple snippet-level features into a video-level feature is the critical building block of weakly-supervised neural networks. A model’s ability to capture the location information of an action is primarily determined by the design of the aggregators. While using the global average pooling over a full image or across the video snippets has shown great promise in identifying the discriminative regions (Zhou et al., 2016a; 2014; Pinheiro & Collobert, 2015; Oquab et al., 2015), treating each pixel or snippet equally loses the opportunity to benefit from several more essential parts. Some recent works (Nguyen et al., 2018; Zhu et al., 2017) have tried to learn attentional weights for different snippets to compute a weighted sum as the aggregated feature. However, they suffer from the weights being easily dominated by only a few most salient snippets. In general, models trained with only video-level class labels tend to be easily responsive to small and sparse discriminative regions from the snippets of interest. This deviates from the objective of the localization task that is to locate dense and integral regions for each entire action. To mitigate this gap and reduce the effect of the domination by the most salient regions, several heuristic tricks have been proposed to apply to existing models. For example, (Wei et al., 2017; Zhang et al., 2018b) attempt to heuristically erase the most salient regions predicted by the model which are currently being mined, and force the network to attend other salient regions in the remaining regions by forwarding the model several times. However, the heuristic multiple-run model is not end-to-end trainable. It is the ensemble of multiple-run mined regions but not the single model’s own ability that learns the entire action regions. “Hide-and-seek”(Singh & Lee, 2017) randomly masks out some regions of the input during training, enforcing the model to localize other salient regions when the most salient regions happen to be masked out. However, all the input regions are masked out with the same probability due to the uniform prior, and it is very likely that most of the time it is the background that is being masked out. A detailed discussion about related works can be found in Appendix D. To this end, we propose the marginalized average attentional network (MAAN) to alleviate the issue raised by the domination of the most salient region in an end-to-end fashion for weakly-supervised action localization. Specifically, MAAN suppresses the action prediction response of the most salient regions by employing marginalized average aggregation (MAA) and learning the latent discriminative probability in a principled manner. Unlike the previous attentional pooling aggregator which calculates the weighted sum with attention weights, MAA first samples a subset of features according to their latent discriminative probabilities, and then calculates the average of these sampled features. Finally, MAA takes the expectation (marginalization) of the average aggregated subset features over all the possible subsets to achieve the final aggregation. 
As a result, MAA not only alleviates the domination by the most salient regions, but also maintains the scale of the aggregated feature within a reasonable range. We theoretically prove that, with the MAA, the learned latent discriminative probability indeed reduces the difference of response between the most salient regions and the others. Therefore, MAAN can identify more dense and integral regions for each action. Moreover, since enumerating all the possible subsets is exponentially expensive, we further propose a fast iterative algorithm to reduce the complexity of the expectation calculation procedure and provide a theoretical analysis. Furthermore, MAAN is easy to train in an end-to-end fashion since all the components of the network are differentiable. Extensive experiments on two large-scale video datasets show that MAAN consistently outperforms the baseline models and achieves superior performance on weakly-supervised temporal action localization. In summary, our main contributions include: (1) a novel end-to-end trainable marginalized average attentional network (MAAN) with a marginalized average aggregation (MAA) module in the weaklysupervised setting; (2) theoretical analysis of the properties of MAA and an explanation of the reasons MAAN alleviates the issue raised by the domination of the most salient regions; (3) a fast iterative algorithm that can effectively reduce the computational complexity of MAA; and (4) a superior performance on two benchmark video datasets, THUMOS14 and ActivityNet1.3, on the weakly-supervised temporal action localization. 2 MARGINALIZED AVERAGE ATTENTIONAL NETWORK In this section, we describe our proposed MAAN for weakly-supervised temporal action localization. We first derive the formulation of the feature aggregation module in MAAN as a MAA procedure in Sec. 2.1. Then, we study the properties of MAA in Sec. 2.2, and present our fast iterative computation algorithm for MAA construction in Sec. 2.3. Finally, we describe our network architecture that incorporates MAA, and introduce the corresponding inference process on weakly-supervised temporal action localization in Sec. 2.4. 2.1 MARGINALIZED AVERAGE AGGREGATION Let {x1,x2, · · ·xT } denote the set of snippet-level features to be aggregated, where xt ∈ Rm is the m dimensional feature representation extracted from a video snippet centered at time t, and T is the total number of sampled video snippets. The conventional attentional weighted sum pooling aggregates the input snippet-level features into a video-level representation x. Denote the set of attentional weights corresponding to the snippet-level features as {λ1, λ2, · · ·λT }, where λt is a scalar attentional weight for xt. Then the aggregated video-level representation is given by x = T∑ t=1 λtxt, (1) as illustrated in Figure 1 (a). Different from the conventional aggregation mechanism, the proposed MAA module aggregates the features by firstly generating a set of binary indicators to determine whether a snippet should be sampled or not. The model then computes the average aggregation of these sampled snippet-level representations. Lastly, the model computes the expectation (marginalization) of the aggregated average feature for all the possible subsets, and obtains the proposed marginalized average aggregated feature. Formally, in the proposed MAA module, we first define a set of probabilities {p1, p2, · · · pT }, where each pt ∈ [0, 1] is a scalar corresponding to xt, similar to the notation λt mentioned previously. 
We then sample a set of random variables $\{z_1, z_2, \cdots, z_T\}$, where $z_t \sim \mathrm{Bernoulli}(p_t)$, i.e., $z_t \in \{0, 1\}$ with probability $P(z_t = 1) = p_t$. The sampled set represents the subset selection of snippet-level features, in which $z_t = 1$ indicates that $\mathbf{x}_t$ is selected, and otherwise not. Therefore, the average aggregation of the sampled subset of snippet-level representations is given by $\mathbf{s} = \sum_{i=1}^{T} z_i \mathbf{x}_i / \sum_{i=1}^{T} z_i$, and our proposed aggregated feature, defined as the expectation over all possible subset-level average aggregated representations, is given by
$$\bar{\mathbf{x}} = \mathbb{E}[\mathbf{s}] = \mathbb{E}\left[\frac{\sum_{i=1}^{T} z_i \mathbf{x}_i}{\sum_{i=1}^{T} z_i}\right], \quad (2)$$
which is illustrated in Figure 1 (b).

2.2 PARTIAL ORDER PRESERVATION AND DOMINANT RESPONSE SUPPRESSION

Direct learning and prediction with the attention weights λ in Eq. (1) in weakly-supervised action localization leads to an over-response in the most salient regions. The MAA in Eq. (2) has two properties that alleviate the domination effect of the most salient regions. First, the partial order preservation property: the latent discriminative probabilities preserve the partial order with respect to their attention weights. Second, the dominant response suppression property: the differences in the latent discriminative probabilities between the most salient items and the others are smaller than the differences between their attention weights. The partial order preservation property guarantees that the model does not mix up action and non-action snippets by assigning a high latent discriminative probability to a snippet with a low response. The dominant response suppression property reduces the dominant effect of the most salient regions and encourages the identification of dense and more integral action regions. Formally, we present the two properties in Proposition 1 and Proposition 2, respectively. Detailed proofs can be found in Appendix A and Appendix B, respectively.

Proposition 1. Let $z_i \sim \mathrm{Bernoulli}(p_i)$ for $i \in \{1, \ldots, T\}$. Then for $T \geq 2$, Eq. (3) holds true, and $p_i \geq p_j \Leftrightarrow c_i \geq c_j \Leftrightarrow \lambda_i \geq \lambda_j$:
$$\mathbb{E}\left[\frac{\sum_{i=1}^{T} z_i \mathbf{x}_i}{\sum_{i=1}^{T} z_i}\right] = \sum_{i=1}^{T} c_i p_i \mathbf{x}_i = \sum_{i=1}^{T} \lambda_i \mathbf{x}_i, \quad (3)$$
where $c_i = \mathbb{E}\left[1 / \left(1 + \sum_{k=1, k \neq i}^{T} z_k\right)\right]$ and $\lambda_i = c_i p_i$ for $i \in \{1, \ldots, T\}$.

Proposition 1 shows that the latent discriminative probabilities $\{p_i\}$ preserve the partial order of the attention weights $\{\lambda_i\}$. This means that a large attention weight corresponds to a large discriminative probability, which guarantees that the latent discriminative probabilities preserve the ranking of the action prediction response. Eq. (3) can be seen as a factorization of the attention weight $\lambda_i$ into the product of two components, $p_i$ and $c_i$, for $i \in \{1, \ldots, T\}$. $p_i$ is the latent discriminative probability related to the feature of snippet $i$ itself. The factor $c_i$ captures the contextual information of snippet $i$ from the other snippets. This factorization can be viewed as introducing structural information into the aggregation. The factor $c_i$ can be considered a structural regularizer for learning the latent discriminative probabilities $p_i$ for $i \in \{1, \ldots, T\}$, as well as for learning a more informative aggregation.

Proposition 2. Let $z_i \sim \mathrm{Bernoulli}(p_i)$ for $i \in \{1, \ldots, T\}$. Denote $c_i = \mathbb{E}\left[1 / \left(1 + \sum_{k=1, k \neq i}^{T} z_k\right)\right]$ and $\lambda_i = c_i p_i$ for $i \in \{1, \ldots, T\}$. Denote $\mathcal{I} = \left\{i \,\middle|\, c_i \geq 1 / \left(\sum_{t=1}^{T} p_t\right)\right\}$ as an index set. Then $\mathcal{I} \neq \emptyset$, and for all $i \in \mathcal{I}$ and all $j \in \{1, \ldots, T\}$, inequality (4) holds true:
$$\left|\frac{p_i}{\sum_{t=1}^{T} p_t} - \frac{p_j}{\sum_{t=1}^{T} p_t}\right| \leq \left|\frac{\lambda_i}{\sum_{t=1}^{T} \lambda_t} - \frac{\lambda_j}{\sum_{t=1}^{T} \lambda_t}\right|. \quad (4)$$
The index set $\mathcal{I}$ can be viewed as the set of the most salient features.
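As an illustration (ours, not part of the paper), the following Monte Carlo sketch checks the factorization $\lambda_i = c_i p_i$ and the order preservation of Proposition 1, together with the gap-reduction inequality of Proposition 2, on a small toy example; the sample size and numerical tolerances are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
T, n_samples = 6, 200_000
p = np.sort(rng.uniform(0.05, 0.95, size=T))[::-1]   # p_1 >= ... >= p_T

# Monte Carlo estimate of c_i = E[ 1 / (1 + sum_{k != i} z_k) ], z_k ~ Bernoulli(p_k)
z = (rng.uniform(size=(n_samples, T)) < p).astype(float)
c = np.empty(T)
for i in range(T):
    others = z[:, np.arange(T) != i].sum(axis=1)
    c[i] = np.mean(1.0 / (1.0 + others))

lam = c * p                                           # attention weights, lambda_i = c_i * p_i (Eq. 3)
print(np.allclose(lam.sum(), 1.0, atol=1e-2))         # sum_i c_i p_i = 1
print(np.all(np.diff(c) <= 1e-3), np.all(np.diff(lam) <= 1e-3))   # order preserved (Prop. 1)

# Proposition 2: normalized gaps between the most salient item (index 0) and the
# others are smaller under p than under lambda.
p_n, lam_n = p / p.sum(), lam / lam.sum()
print(np.all(np.abs(p_n[0] - p_n) <= np.abs(lam_n[0] - lam_n) + 1e-3))
```

On such toy inputs the estimated $c_i$ shrink the relative spread of the largest weights, which is the behavior the two propositions formalize.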
Proposition 2 shows that the difference between the normalized latent discriminative probabilities of the most salient regions and the others is smaller than the difference between their attention weights. It means that predicting with the latent discriminative probability for each snippet reduces the gap between the most salient features and the others compared to conventional methods based on attention weights. Thus, MAAN suppresses the dominant responses of the most salient features and encourages the model to identify dense and more integral action regions. Directly learning the attention weights λ leads to an over-response to the most salient region in weakly-supervised temporal localization. Namely, the attention weights for only a few snippets are too large and dominate the others, while the attention weights for most of the other snippets that also belong to the true action are underestimated. Proposition 2 shows that the latent discriminative probabilities are able to reduce the gap between the most salient features and the others compared to the attention weights. Thus, by employing the latent discriminative probabilities for prediction instead of the attention weights, our method can alleviate the dominant effect of the most salient region in weakly-supervised temporal localization.

2.3 RECURRENT FAST COMPUTATION

Given a video containing $T$ snippet-level representations, there are $2^T$ possible configurations for the subset selection. Directly summing over all $2^T$ configurations to calculate $\bar{\mathbf{x}}$ has a complexity of $O(2^T)$. In order to reduce the exponential complexity, we propose an iterative method to calculate $\bar{\mathbf{x}}$ with $O(T^2)$ complexity. Let us denote the aggregated feature of $\{\mathbf{x}_1, \mathbf{x}_2, \cdots, \mathbf{x}_t\}$ with length $t$ as $\mathbf{h}_t$, and denote $Y_t = \sum_{i=1}^{t} z_i \mathbf{x}_i$ and $Z_t = \sum_{i=1}^{t} z_i$ for simplicity. Then we have
$$\mathbf{h}_t = \mathbb{E}\left[\frac{\sum_{i=1}^{t} z_i \mathbf{x}_i}{\sum_{i=1}^{t} z_i}\right] = \mathbb{E}\left[\frac{Y_t}{Z_t}\right], \quad t \in \{1, 2, \cdots, T\}, \quad (5)$$
and the aggregated feature of $\{\mathbf{x}_1, \mathbf{x}_2, \cdots, \mathbf{x}_T\}$ can be obtained as $\bar{\mathbf{x}} = \mathbf{h}_T$. In Eq. (5), $Z_t$ is the sum of all the $z_i$, which indicates the number of elements selected in the subset. Although there are $2^t$ distinct configurations for $\{z_1, z_2, \cdots, z_t\}$, there are only $t+1$ distinct values for $Z_t$, i.e., $0, 1, \cdots, t$. Therefore, we can divide all the $2^t$ distinct configurations into $t+1$ groups, where configurations sharing the same $Z_t$ fall into the same group. Then the expectation $\mathbf{h}_t$ can be calculated as the sum of $t+1$ parts. That is, $\mathbf{h}_t = \mathbb{E}\left[\mathbb{E}\left[\frac{Y_t}{Z_t} \,\middle|\, Z_t\right]\right] = \sum_{i=0}^{t} \mathbf{m}_i^t$, where $\mathbf{m}_i^t$, the $i$-th part of $\mathbf{h}_t$ for the group $Z_t = i$, is given in Eq. (6):
$$\mathbf{m}_i^t = P(Z_t = i)\, \mathbb{E}\left[\frac{Y_t}{Z_t} \,\middle|\, Z_t = i\right]. \quad (6)$$
In order to calculate $\mathbf{h}_{t+1} = \sum_{i=0}^{t+1} \mathbf{m}_i^{t+1}$ given $\mathbf{m}_i^t$, $i \in \{0, \cdots, t\}$, we can calculate $\mathbf{m}_i^{t+1}$, $i \in \{0, 1, \cdots, t+1\}$ recurrently. The key idea is that $\mathbf{m}_i^{t+1}$ arises from two cases: if $z_{t+1} = 0$, then $\mathbf{m}_i^{t+1}$ is the same as $\mathbf{m}_i^t$; if $z_{t+1} = 1$, then $\mathbf{m}_i^{t+1}$ is a weighted average of $\mathbf{m}_{i-1}^t$ and $\mathbf{x}_{t+1}$. The latter case is also related to the probability $P(Z_t = i-1)$. By denoting $q_{i-1}^t = P(Z_t = i-1)$ for simplicity, we can write $\mathbf{m}_i^{t+1}$ as a function of several elements:
$$\mathbf{m}_i^{t+1} = f(\mathbf{m}_{i-1}^t, \mathbf{m}_i^t, \mathbf{x}_{t+1}, p_{t+1}, q_{i-1}^t). \quad (7)$$
Similarly, the computation of $q_i^{t+1} = P(Z_{t+1} = i)$ arises from two cases: the probability of selecting $i-1$ items from the first $t$ items and selecting the $(t+1)$-th item, i.e., $q_{i-1}^t p_{t+1}$; and the probability of selecting all $i$ items from the first $t$ items and not selecting the $(t+1)$-th item, i.e., $q_i^t (1 - p_{t+1})$.
We derive the function of m t+1 i and q t+1 i in Proposition 3. Detailed proofs can be found in Appendix C. Proposition 3. Let zt ∼ Bernoulli(pt) , Zt = t∑ i=1 zi and Yt = t∑ i=1 zixi for t ∈ {1, ..., T}. Define mti , i ∈ {0, · · · , t} as Eq. (6) and qti = P (Zt = i), then m t+1 i i ∈ {0, 1, · · · , t + 1} can be obtained recurrently by Eq. (8) and Eq. (9). mt+1i = pt+1 ( bi−1m t i−1 + (1− bi−1)qti−1xt+1 ) + (1− pt+1)mti, (8) qt+1i = pt+1q t i−1 + (1− pt+1) qti , (9) where bi = ii+1 , q t −1 = 0, q t t+1 = 0, q 0 0 = 1, m t 0 = 0, and m t t+1 = 0. Proposition 3 provides a recurrent formula to calculate mti. With this recurrent formula, we calculate the aggregation hT by iteratively calculating mti from i = 1 to t and t = 1 to T . Therefore, we can obtain the aggregated feature of {x1,x2, · · ·xT } as x = hT = ∑T i=0 m T i . The iterative computation procedure is summarized in Algorithm 1 in Appendix E. The time complexity is O(T 2). With the fast iterative algorithm in Algorithm 1, the MAA becomes practical for end-to-end training. A demonstration of the computation graph for qt+1i in Eq. (9) and m t+1 i in Eq. (8) is presented in the left and right-hand sides of Figure 2, respectively. From Figure 2, we can see clearly that, to compute m32 (the big black node on the right), it needs m 2 1, m 2 2, x3, p3, and q 2 1 . The MAA can be easily implemented as a subnetwork for end-to-end training and can be used to replace the operation of other feature aggregators. 2.4 NETWORK ARCHITECTURE AND TEMPORAL ACTION LOCALIZATION Network Architecture: We now describe the network architecture that employs the MAA module described above for weakly-supervised temporal action localization. We start from a previous stateof-the-art base architecture, the sparse temporal pooling network (STPN) (Nguyen et al., 2018). As shown in Figure 3, it first divides the input video into several non-overlapped snippets and extracts the I3D (Carreira & Zisserman, 2017) feature for each snippet. Each snippet-level feature is then fed to an attention module to generate an attention weight between 0 and 1. STPN then uses a feature aggregator to calculate a weighted sum of the snippet-level features with these class-agnostic attention weights to create a video-level representation, as shown on the left in Figure 4. The video-level representation is then passed through an FC layer followed by a sigmoid layer to obtain class scores. Our MAAN uses the attention module to generate the latent discriminative probability pt and replaces the feature aggregator from the weighted sum aggregation by the proposed marginalized average aggregation, which is demonstrated on the right in Figure 4. Training with video-level class labels: Formally, the model first performs aggregation of the snippet-level features (i.e. x1,x2, · · ·xT ) to obtain the video-level representation x̄ ( x̄ = E[ ∑T i=1 zixi/ ∑T i=1 zi]). Then, it applies a logistic regression layer (FC layer + sigmoid) to output video-level classification prediction probability. Specifically, the prediction probability for class c ∈ {1, 2, · · ·C} is parameterized as σcj = σ(w>c xj), where xj is the aggregated feature for video j ∈ {1, ..., N}. 
Suppose each video xj is i.i.d and each action class is independent from the other, the negative log-likelihood function (cross-entropy loss) is given as follows: L(W) = − N∑ j=1 C∑ c=1 ( ycj log σ c j + (1− ycj) log(1− σcj) ) , (10) where ycj ∈ {0, 1} is the ground-truth video-level label for class c happening in video j and W = [w1, ...,wC ]. Temporal Action Localization: Let sc = w>c x be the video-level action prediction score, and σ(sc) = σ(w>c x) be the video-level action prediction probability. In STPN, as x̄ = ∑T t=1 λtxt, the sc can be rewritten as: sc = w>c x = ∑T t=1 λtw > c xt, (11) In STPN, the prediction score of snippet t for action class c in a video is defined as: sct = λtσ(w > c xt), (12) where σ(·) denotes the sigmoid function. In MAAN, as x̄ = E[ ∑T i=1 zixi/ ∑T i=1 zi], according to Proposition 1, the sc can be rewritten as: sc = w>c x = w > c E[ ∑T i=1 zixi/ ∑T i=1 zi] = ∑T t=1 ctptw > c xt. (13) The latent discriminative probability pt corresponds to the class-agnostic attention weight for snippet t. According to Proposition 1 and Proposition 2, ct does not relate to snippet t, but captures the context of other snippets. wc corresponds to the class-specific weights for action class c for all the snippets, and w>c xt indicates the relevance of snippet t to class c. To generate temporal proposals, we compute the prediction score of snippet t belonging to action class c in a video as: sct = ptσ(w > c xt). (14) We denote the sc = (sc1, s c 2, ..., s c T )> as the class activation sequence (CAS) for class c. Similar to STPN, the threshold is applied to the CAS for each class to extract the one-dimensional connected components to generate its temporal proposals. We then perform non-maximum suppression among temporal proposals of each class independently to remove highly overlapped detections. Compared to STPN (Eq. (12)), MAAN (Eq. (14)) employs the latent discriminative probability pt instead of directly using the attention weight λt (equivalent to ctpt) for prediction. Proposition 2 suggests that MAAN can suppress the dominant response sct compared to STPN. Thus, MAAN is more likely to achieve a better performance in weakly-supervised temporal action localization. 3 EXPERIMENTS This section discusses the experiments on the weakly-supervised temporal action localization problem, which is our main focus. We have also extended our algorithm on addressing the weakly-supervised image object detection problem and the relevant experiments are presented in Appendix F. 3.1 EXPERIMENTAL SETTINGS Datasets. We evaluate MAAN on two popular action localization benchmark datasets, THUMOS14 (Jiang et al., 2014) and ActivityNet1.3 (Heilbron et al., 2015). THUMOS14 contains 20 action classes for the temporal action localization task, which consists of 200 untrimmed videos (3,027 action instances) in the validation set and 212 untrimmed videos (3,358 action instances) in the test set. Following standard practice, we train the models on the validation set without using the temporal annotations and evaluate them on the test set. ActivityNet1.3 is a large-scale video benchmark for action detection which covers a wide range of complex human activities. It provides samples from 200 activity classes with an average of 137 untrimmed videos per class and 1.41 activity instances per video, for a total of 849 video hours. This dataset contains 10,024 training videos, 4,926 validation videos and 5,044 test videos. 
In the experiments, we train the models on the training videos and test on the validation videos. Evaluation Metrics. We follow the standard evaluation metric by reporting mean average precision (mAP) values at several different levels of intersection over union (IoU) thresholds. We use the benchmarking code provided by ActivityNet1 to evaluate the models. Implementation Details. We use two-stream I3D networks (Carreira & Zisserman, 2017) pre-trained on the Kinetics dataset (Kay et al., 2017) to extract the snippet-level feature vectors for each video. All the videos are divided into sets of non-overlapping video snippets. Each snippet contains 16 consecutive frames or optical flow maps. We input each 16 stacked RGB frames or flow maps into the I3D RGB or flow models to extract the corresponding 1024 dimensional feature vectors. Due to the various lengths of the videos, in the training, we uniformly divide each video into T non-overlapped segments, and randomly sample one snippet from each segment. Therefore, we sample T snippets for each video as the input of the model for training. We set T to 20 in our MAAN model. The attention module in Figure 3 consists of an FC layer of 1024× 256, a LeakyReLU layer, an FC layer of 256× 1, and a sigmoid non-linear activation, to generate the latent discriminative probability pt. We pass the aggregated video-level representation through an FC layer of 1024× C followed by a sigmoid activation to obtain class scores. We use the ADAM optimizer (Kingma & Ba, 2014) with an initial learning rate of 5× 10−4 to optimize network parameters. At the test time, we first reject 1https://github.com/activitynet/ActivityNet/tree/master/Evaluation classes whose video-level probabilities are below 0.1. We then forward all the snippets of the video to generate the CAS for the remaining classes. We generate the temporal proposals by cutting the CAS with a threshold th. The combination ratio of two-stream modalities is set to 0.5 and 0.5. Our algorithm is implemented in PyTorch 2. We run all the experiments on a single NVIDIA Tesla M40 GPU with a 24 GB memory. 3.2 THUMOS14 DATASET We first compare our MAAN model on the THUMOS14 dataset with several baseline models that use different feature aggregators in Figure 3 to gain some basic understanding of the behavior of our proposed MAA. The descriptions of the four baseline models are listed below. (1) STPN. It employs the weighed sum aggregation x̄ = ∑T t=1 λtxt to generate the video-level representation. (2) Dropout. It explicitly performs dropout sampling with dropout probability p = 0.5 in STPN to obtain the video-level representation, x̄ = ∑T t=1 rtλtxt, rt ∼ Bernoulli(0.5). (3) Normalization. Denoted as “Norm” in the experiments, it utilizes the weighted average aggregation x̄ = ∑T t=1 λtxt/ ∑T t=1 λt for the video-level representation. (4) SoftMax Normalization. Denoted as “SoftMaxNorm” in the experiments, it applies the softmax function as the normalized weights to get the weighted average aggregated video-level feature, x̄ = ∑T t=1 e λtxt/ ∑T t=1 e λt . We test all the models with the cutting threshold th as 0.2 of the max value of the CAS. We compare the detection average precision (%) at IoU = [0.1 : 0.1 : 0.9] and the video-level classification mean average precision (%) (denoted as Cls mAP) on the test set in Table 1. From Table 1, we can observe that although all the methods achieve a similar video-level classification mAP, their localization performances vary a lot. 
It shows that achieving a good video-level classification performance cannot guarantee obtaining a good snippet-level localization performance because the former only requires the correct prediction of the existence of an action, while the latter requires the correct prediction of both its existence and its duration and location. Moreover, Table 1 demonstrates that MAAN consistently outperforms all the baseline models at different levels of IoUs in the weakly-supervised temporal localization task. Both the “Norm” and “SoftmaxNorm” are the normalized weighted average aggregation. However, the “SoftmaxNorm” performs the worst, because the softmax function over-amplifies the weight of the most salient snippet. As a result, it tends to identify very few discriminative snippets and obtains sparse and non-integral localization. The “Norm” also performs worse than our MAAN. It is the normalized weighted average over the snippet-level representation, while MAAN can be considered as the normalized weighted average (expectation) over the subsetlevel representation. Therefore, MAAN encourages the identification of dense and integral action segments as compared to “Norm” which encourages the identification of only several discriminative snippets. MAAN works better than “Dropout” because “Dropout” randomly drops out the snippets with different attention weights by uniform probabilities. At each iteration, the scale of the aggregated feature varies a lot, however, MAAN samples with the learnable latent discriminative probability and conducts the expectation of keeping the scale of the aggregated feature stable. Compared to STPN, MAAN also achieves superior results. MAAN implicitly factorizes the attention weight into ctpt, where pt learns the latent discriminative probability of the current snippet, and ct captures the contextual information and regularizes the network to learn a more informative aggregation. The properties of MAA disallow the predicted class activation sequences to concentrate on the most salient regions. The quantitative results show the effectiveness of the MAA feature aggregator. 2https://github.com/pytorch/pytorch Figure 5 visualizes the one-dimensional CASs of the proposed MAAN and all the baseline models. The temporal CAS generated by MAAN can cover large and dense regions to obtain more accurate action segments. In the example in Figure 5, MAAN can discover almost all the actions that are annotated in the ground-truth; however, the STPN have missed several action segments, and also tends to only output the more salient regions in each action segment. Other methods are much sparser compared to MAAN. The first row of Figure 5 shows several action segments in red and in green, corresponding to action segments that are relatively difficult and easy to be localized, respectively. We can see that all the easily-localized segments contain the whole person who is performing the “HammerThrow” action, while the difficultly-localized segments contain only a part of the person or the action. Our MAAN can successfully localize the easy segments as well as the difficult segments; however, all the other methods fail on the difficult ones. It shows that MAAN can identify several dense and integral action regions other than only the most discriminative region which is identified by the other methods. We also compare our model with the state-of-the-art action localization approaches on the THUMOS14 dataset. The numerical results are summarized in Table 2. 
We include both fully and weakly-supervised learning, as in (Nguyen et al., 2018). As shown in Table 2, our implemented STPN performs slightly better than the results reported in the original paper (Nguyen et al., 2018). From Table 2, our proposed MAAN outperforms the STPN and most of the existing weakly-supervised action localization approaches. Furthermore, our model still presents competitive results compared with several recent fully-supervised approaches even when trained with only video-level labels. 3.3 ACTIVITYNET1.3 DATASET We train the MAAN model on the ActivityNet1.3 training set and compare our performance with the recent state-of-the-art approaches on the validation set in Table 3. The action segment in ActivityNet is usually much longer than that of THUMOS14 and occupies a larger percentage of a video. We use a set of thresholds, which are [0.2, 0.15, 0.1, 0.05] of the max value of the CAS, to generate the proposals from the one-dimensional CAS. As shown in Table 3, with the set of thresholds, our implemented STPN performs slightly better than the results reported in the original paper (Nguyen et al., 2018). With the same threshold and experimental setting, our proposed MAAN model outperforms the STPN approach on the large-scale ActivityNet1.3. Similar to THUMOS14, our model also achieves good results that are close to some of the fully-supervised approaches. 4 CONCLUSION We have proposed the marginalized average attentional network (MAAN) for weakly-supervised temporal action localization. MAAN employs a novel marginalized average aggregation (MAA) operation to encourage the network to identify the dense and integral action segments and is trained in an end-to-end fashion. Theoretically, we have proved that MAA reduces the gap between the most discriminant regions in the video to the others, and thus MAAN generates better class activation sequences to infer the action locations. We have also proposed a fast algorithm to reduce the computation complexity of MAA. Our proposed MAAN achieves superior performance on both the THUMOS14 and the ActivityNet1.3 datasets on weakly-supervised temporal action localization tasks compared to current state-of-the-art methods. 5 ACKNOWLEDGEMENT We thank our anonymous reviewers for their helpful feedback and suggestions. Prof. Ivor W. Tsang was supported by ARC FT130100746, ARC LP150100671, and DP180100106. A PROOF OF PROPOSITION 1 A.1 PROOF OF EQUATION (3) Proof. E [∑T i=1 zixi∑T i=1 zi ] = ∑T i=1 E[zi/ ∑T i=1 zi]xi. (15) In addition, E[zi/ ∑T i=1 zi] = pi × E [ 1/(1 + ∑T k=1,k 6=i zk) ] + (1− pi)× 0 = pici. (16) Thus, we achieve E [∑T i=1 zixi∑T i=1 zi ] = ∑T i=1 cipixi = ∑T i=1 λixi. (17) A.2 PROOF OF pi ≥ pj ⇔ ci ≥ cj ⇔ λi ≥ λj Proof. Denote ST = ∑T k=1,k 6=i,k 6=j zk, then we have ci − cj = E [ 1/(1 + ∑ k 6=i zk) ] − E [ 1/(1 + ∑ k 6=j zk) ] (18) = pjE [1/(2 + ST )] + (1− pj)E [1/(1 + ST )]− piE [1/(2 + ST )]− (1− pi)E [1/(1 + ST )] = (pi − pj) (E [1/(1 + ST )]− E [1/(2 + ST )]) . (19) Since E [1/(1 + ST )]− E [1/(2 + ST )] > 0, we achieve that pi ≥ pj ⇔ ci ≥ cj . Since λi = cipi and λj = cjpj , and ci, cj , pi, pj ≥ 0, it follows that pi ≥ pj ⇔ λi ≥ λj . B PROOF OF PROPOSITION 2 Proof. ∑T i=1 cipi = ∑T i=1 E[zi/ ∑T i=1 zi] = E [ ( ∑T i=1 zi)/( ∑T i=1 zi) ] = 1 When p1 = p2 = · · · = pT , we have λ1 = λ2 = · · · = λT . Then inequality (4) trivially holds true. Without loss of generality, assume p1 ≥ p2 ≥ · · · ≥ pT and there exists a strict inequality. 
Then ∃k ∈ {1, ..., T − 1} such that ci ≥ 1/( ∑T t=1 pt) for 1 ≤ i ≤ k and cj ≤ 1/( ∑T t=1 pt) for k < j ≤ T . Otherwise, we obtain ci ≥ 1/( ∑T t=1 pt) or ci ≤ 1/( ∑T t=1 pt) for 1 ≤ i ≤ T and there exists a strict inequality. It follows that ∑T i=1 cipi > 1 or ∑T i=1 cipi < 1, which contradicts∑T i=1 cipi = 1. Thus, we obtain the set I 6= ∅. Without loss of generality, for 1 ≤ i ≤ k and i ≤ j ≤ T , we have ci ≥ 1/( ∑T t=1 pt) and pi ≥ pj , then we obtain that ci ≥ cj . It follows that pi/( ∑T t=1 pt)− pj/( ∑T t=1 pt)− ( λi/( ∑T t=1 λt)− λj/( ∑T t=1 λt) ) (20) = pi/( ∑T t=1 pt)− pj/( ∑T t=1 pt)− (cipi − cjpj) (21) = ( 1/( ∑T t=1 pt)− ci ) pi − ( 1/( ∑T t=1 pt)− cj ) pj (22) ≤ ( 1/( ∑T t=1 pt)− ci ) pi − ( 1/( ∑T t=1 pt)− ci ) pj (23) = ( 1/( ∑T t=1 pt)− ci ) (pi − pj) ≤ 0. (24) C PROOF OF PROPOSITION 3 C.1 COMPUTATION OF ht ht = E[ Yt Zt ] = ∑ z1,z2,...,zt P (z1, z2, · · · zt) ∑t j=1 zjxj∑t j=1 zj (25) = ∑t i=0 ( ∑ z1,z2,···zt 1 (∑t j=1 zj = i ) P (z1, z2, ..., zt) ∑t j=1 zjxj∑t j=1 zj ) (26) = ∑t i=0 ∑ z1,z2,...,zt 1 (∑t j=1 zj = i ) P (z1, z2, · · · zt) ∑t j=1 zjxj i (27) = ∑t i=0 mti, (28) where 1(·) denotes the indicator function. We achieve Eq. (26) by partitioning the summation into t+ 1 groups . Terms belonging to group i have ∑t j=1 zj = i. Let mti = ∑ z1,z2,···zt 1 (∑t j=1 zj = i ) P (z1, z2, · · · zt) ∑t j=1 zjxj i , and we achieve Eq. (28). C.2 PROOF OF RECURRENT FORMULA OF mt+1i We now give the proof of the recurrent formula of Eq. (29) mt+1i = pt+1 ( bi−1m t i−1 + (1− bi−1)qti−1xt+1 ) + (1− pt+1)mti. (29) Proof. mt+1i = ∑ z1,z2,···zt,zt+1 1 (∑t+1 j=1 zj = i ) P (z1, z2, · · · zt+1) ∑t+1 j=1 zjxj i (30) = ∑ z1,z2,···zt,zt+1 1 (∑t j=1 zj + zt+1 = i ) P (z1, z2, · · · zt)P (zt+1) ∑t j=1 zjxj + zt+1xt+1 i (31) = ∑ z1,z2,···zt [ 1 (∑t j=1 zj + 1 = i ) P (z1, z2, · · · zt) pt+1 ∑t j=1 zjxj+xt+1 i ] + ∑ z1,z2,···zt 1 (∑t j=1 zj = i ) P (z1, z2, · · · zt) (1− pt+1) ∑t j=1 zjxj i (32) = ∑ z1,z2,···zt 1 (∑t j=1 zj + 1 = i ) P (z1, z2, · · · zt) pt+1 ∑t j=1 zjxj+xt+1 i +(1− pt+1) ∑ z1,z2,···zt 1 (∑t j=1 zj = i ) P (z1, z2, · · · zt) ∑t j=1 zjxj i (33) = pt+1 ∑ z1,z2,···zt 1 (∑t j=1 zj = i− 1 ) P (z1, z2, · · · zt) i−1i ∑t j=1 zjxj+xt+1 i−1 +(1− pt+1)mti (34) = pt+1 ∑ z1,z2,···zt 1 (∑t j=1 zj = i− 1 ) P (z1, z2, · · · zt) [ i−1 i ∑t j=1 zjxj i−1 + xt+1 i ] +(1− pt+1)mti (35) = pt+1 ∑ z1,z2,···zt 1 (∑t j=1 zj = i− 1 ) P (z1, z2, · · · zt) [ bi−1 ∑t j=1 zjxj i−1 + (1− bi−1)xt+1 ] +(1− pt+1)mti (36) Then, we have mt+1i = pt+1bi−1 ∑ z1,z2,···zt 1 (∑t j=1 zj = i− 1 ) P (z1, z2, · · · zt) ∑t j=1 zjxj i−1 +pt+1(1− bi−1) ∑ z1,z2,···zt 1 (∑t j=1 zj = i− 1 ) P (z1, z2, · · · zt)xt+1 + (1− pt+1)mti. (37) Since qti−1 = P (∑t j=1 zj = i− 1 ) = ∑ z1,z2,···zt 1 (∑t j=1 zj = i− 1 ) P (z1, z2, · · · zt) we can achieve mt+1i = pt+1 [ bi−1m t i−1 + (1− bi−1)qti−1xt+1 ] + (1− pt+1)mti. (38) C.3 PROOF OF RECURRENT FORMULA OF qt+1i We present the proof of Eq. (39) qt+1i = pt+1q t i−1 + (1− pt+1)qti (39) Proof. qt+1i = ∑ z1,z2,···zt,zt+1 1 (∑t+1 j=1 zj = i ) P (z1, z2, · · · zt+1) (40) = ∑ z1,z2,···zt,zt+1 1 (∑t j=1 zj + zt+1 = i ) P (z1, z2, · · · zt)P (zt+1) (41) = ∑ z1,z2,···zt 1 (∑t j=1 zj + 1 = i ) P (z1, z2, · · · zt) pt+1 (42) + ∑ z1,z2,···zt 1 (∑t j=1 zj = i ) P (z1, z2, · · · zt) (1− pt+1) (43) = pt+1 ∑ z1,z2,···zt 1 (∑t j=1 zj = i− 1 ) P (z1, z2, · · · zt) + (1− pt+1)qti (44) = pt+1q t i−1 + (1− pt+1)qti (45) D RELATED WORK Video Action Analysis. Researchers have developed quite a few deep network models for video action analysis. 
Two-stream networks (Simonyan & Zisserman, 2014) and 3D convolutional neural networks (C3D) (Tran et al., 2015) are popular solutions to learn video representations and these techniques, including their variations, are extensively used for video action analysis. Recently, a combination of two-stream networks and 3D convolutions, referred to as I3D (Carreira & Zisserman, 2017), was proposed as a generic video representation learning method, and served as an effective backbone network in various video analysis tasks such as recognition (Wang et al., 2016), localization (Shou et al., 2016), and weakly-supervised learning (Wang et al., 2017). Weakly-Supervised Temporal Action Localization. There are only a few approaches based on weakly-supervised learning that rely solely on video-level class labels to localize actions in the temporal domain. Wang et al. (Wang et al., 2017) proposed a UntrimmedNet framework, where two softmax functions are applied across class labels and proposals to perform action classification and detect important temporal segments, respectively. However, using the softmax function across proposals may not be effective for identifying multiple instances. Singh et al. (Singh & Lee, 2017) designed a Hide-and-Seek model to randomly hide some regions in a video during training and force the network to seek other relevant regions. However, the randomly hiding operation, as a data augmentation, cannot guarantee whether it is the action region or the background region that is hidden during training, especially when the dropout probabilities for all the regions are the same. Nguyen et al. (Nguyen et al., 2018) proposed a sparse temporal pooling network (STPN) to identify a sparse set of key segments associated with the actions through attention-based temporal pooling of video segments. However, the sparse constraint may force the network to focus on very few segments and lead to incomplete detection. In order to prevent the model from focusing only on the most salient regions, we are inspired to propose the MAAN model to explicitly take the expectation with respect to the average aggregated features of all the sampled subsets from the video. Feature Aggregators. Learning discriminative localization representations with only video-level class labels requires the feature aggregation operation to turn multiple snippet-level representations into a video-level representation for classification. The feature aggregation mechanism is widely adopted in the deep learning literature and a variety of scenarios, for example, neural machine translation (Bahdanau et al., 2015), visual question answering (Hermann et al., 2015), and so on. However, most of these cases belong to fully-supervised learning where the goal is to learn a model that attends the most relevant features given the supervision information corresponding to the task directly. Many variant feature aggregators have been proposed, ranging from nonparametric max pooling and average pooling, to parametric hard attention (Gkioxari et al., 2015), soft attention (Vaswani et al., 2017; Sharma et al., 2015), second-order pooling (Girdhar & Ramanan, 2017; Kong & Fowlkes, 2017), structured attention (Kim et al., 2017; Mensch & Blondel, 2018), graph aggregators (Zhang et al., 2018a; Hamilton et al., 2017), and so on. 
Different from the fully-supervised setting, where the feature aggregator is designed for the corresponding task, we develop a feature aggregator that is trained only with class labels and is then used to predict dense action locations for test data. Different from the heuristic approaches (Wei et al., 2017; Zhang et al., 2018b), which can be considered a kind of hard-coded attention that erases some regions with a hand-crafted threshold, we introduce the end-to-end differentiable marginalized average aggregation, which incorporates learnable latent discriminative probabilities into the learning process.

E MARGINALIZED AVERAGE AGGREGATION

Algorithm 1 Marginalized Average Aggregation
Input: feature representations $\{\mathbf{x}_1, \mathbf{x}_2, \cdots, \mathbf{x}_T\}$, sampling probabilities $\{p_1, p_2, \cdots, p_T\}$.
Output: aggregated representation $\bar{\mathbf{x}}$.
Initialize $\mathbf{m}_0^0 = \mathbf{0}$, $q_0^0 = 1$, $b_i = \frac{i}{i+1}$;
for $t = 1$ to $T$ do
  Set $\mathbf{m}_0^t = \mathbf{0}$, $q_{-1}^t = 0$, and $q_{t+1}^t = 0$;
  for $i = 1$ to $t$ do
    $q_i^t = p_t q_{i-1}^{t-1} + (1 - p_t)\, q_i^{t-1}$
    $\mathbf{m}_i^t = p_t \left( b_{i-1} \mathbf{m}_{i-1}^{t-1} + (1 - b_{i-1})\, q_{i-1}^{t-1} \mathbf{x}_t \right) + (1 - p_t)\, \mathbf{m}_i^{t-1}$
  end for
end for
Return $\bar{\mathbf{x}} = \sum_{i=0}^{T} \mathbf{m}_i^T$

F EXPERIMENTS ON WEAKLY-SUPERVISED IMAGE OBJECT LOCALIZATION

F.1 MODELS AND IMPLEMENTATION DETAILS

We also evaluate the proposed model on the weakly-supervised object localization task. For weakly-supervised object localization, we are given a set of images in which each image is labeled only with its category label. The goal is to learn a model that predicts both the category label and the bounding box of the objects in a new test image. Based on the model in (Zhou et al., 2016a) (denoted as the CAM model), we replace the global average pooling feature aggregator with other kinds of feature aggregators, such as weighted sum pooling and the proposed MAA, by extending the original 1D temporal version used in temporal action localization into a 2D spatial version. We denote the model with weighted sum pooling as the weighted-CAM model. For the weighted-CAM model and the proposed MAAN model, we use an attention module to generate the attention weight λ in STPN or the latent discriminative probability p in MAAN. The attention module consists of a 2D convolutional layer of kernel size 1×1, stride 1 with 256 units, a LeakyReLU layer, a 2D convolutional layer of kernel size 1×1, stride 1 with 1 unit, and a sigmoid non-linear activation.

F.2 DATASET AND EVALUATION METRIC

We evaluate the weakly-supervised localization accuracy of the proposed model on the CUB-200-2011 dataset (Wah et al., 2011). The CUB-200-2011 dataset has 11,788 images of 200 categories, with 5,994 images for training and 5,794 for testing. We use the localization metric suggested by (Russakovsky et al., 2015) for comparison. This metric reports, as the localization error, the percentage of images that are either misclassified or whose predicted bounding boxes have less than 50% IoU with the ground truth.

F.3 COMPARISONS

We compare our MAA aggregator (MAAN) with weighted sum pooling (weighted-CAM) and global average pooling (CAM (Zhou et al., 2016b)). For MAAN and weighted-CAM, we pool the convolutional features for aggregation into two different sizes, 4×4 and 7×7. We fix all other factors (e.g., network structure, hyper-parameters, optimizer) except the feature aggregators to evaluate the models.

F.3.1 QUANTITATIVE RESULTS

The localization errors for the different methods are presented in Table 4, where GoogLeNet-GAP is the CAM model. Our method outperforms GoogLeNet-GAP by 5.06% in Top-1 error.
Meanwhile, MAAN achieves consistently lower localization error than weighted-CAM on the two learning schemes. It demonstrates that the proposed MAAN can improve the localization performance in the weakly-supervised setting. Moreover, both MAAN and weighted-CAM obtain smaller localization error when employing the 7× 7 learning scheme than the 4× 4 learning scheme. F.3.2 VISUALIZATION Figure 6 visualizes the heat maps and localization bounding boxes obtained by all the compared methods. The object localization heat maps generated by the proposed MAAN can cover larger object regions and obtain more accurate bounding boxes.
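To make Algorithm 1 above concrete, here is a minimal NumPy sketch (ours, not the authors' released code) of the O(T^2) recurrence, checked against brute-force enumeration of all 2^T subsets on a toy input. Empty subsets contribute zero, matching the convention $\mathbf{m}_0^t = \mathbf{0}$; all variable names and the toy sizes are our own choices.

```python
import numpy as np
from itertools import product

def maa(x, p):
    """Marginalized Average Aggregation (Algorithm 1), O(T^2) recurrence.
    x: (T, d) snippet features, p: (T,) latent discriminative probabilities."""
    T, d = x.shape
    q = np.zeros(T + 1)          # q[i] = P(Z_t = i) after processing t snippets
    q[0] = 1.0                   # before any snippet, Z_0 = 0 with probability 1
    m = np.zeros((T + 1, d))     # m[i] = P(Z_t = i) * E[Y_t / Z_t | Z_t = i]
    b = np.arange(T + 1) / (np.arange(T + 1) + 1.0)   # b[i] = i / (i + 1)
    for t in range(T):
        q_prev, m_prev = q.copy(), m.copy()
        for i in range(1, t + 2):
            q[i] = p[t] * q_prev[i - 1] + (1 - p[t]) * q_prev[i]
            m[i] = (p[t] * (b[i - 1] * m_prev[i - 1]
                            + (1 - b[i - 1]) * q_prev[i - 1] * x[t])
                    + (1 - p[t]) * m_prev[i])
        q[0] = (1 - p[t]) * q_prev[0]
    return m.sum(axis=0)         # x_bar = sum_i m[i]

def maa_bruteforce(x, p):
    """O(2^T) enumeration of E[ sum_i z_i x_i / sum_i z_i ] for checking."""
    T, d = x.shape
    out = np.zeros(d)
    for z in product([0, 1], repeat=T):
        z = np.array(z)
        if z.sum() == 0:
            continue             # empty subsets contribute 0 (m_0^t = 0)
        prob = np.prod(np.where(z == 1, p, 1 - p))
        out += prob * (z[:, None] * x).sum(0) / z.sum()
    return out

rng = np.random.default_rng(0)
x = rng.normal(size=(6, 4))
p = rng.uniform(0.1, 0.9, size=6)
print(np.allclose(maa(x, p), maa_bruteforce(x, p)))  # True
```

The same recurrence extends to the 2D spatial version used in Appendix F by flattening the feature map positions into a single index.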
1. What is the main contribution of the paper regarding weakly-supervised action localization? 2. What are the strengths and weaknesses of the proposed solution to address the identified problem? 3. How does the reviewer assess the clarity and intuitiveness of the paper's explanations and figures? 4. What are the concerns regarding the experimental evidence and comparisons with other works? 5. Are there any gaps or missing discussions in the review of current literature? 6. How convincing are the qualitative results, and what additional analyses or discussions would improve their interpretation?
Review
Review
In this paper the authors focus on the problem of weakly-supervised action localization. The authors state that a problem with weakly-supervised attention-based methods is that they tend to focus only on the most salient regions, and they propose a solution that reduces the difference between the responses for the most salient regions and the other regions. They do this with marginalized average aggregation: a subset of features is sampled according to their latent discriminative probabilities and averaged, and the expectation over all possible subsets is taken to produce the final aggregation. The problem is interesting, especially since current attention methods suffer from attending only to the most salient regions and therefore miss many action segments in action localization. The authors build upon an existing weakly-supervised action localization framework, having identified a weakness of it, and propose a solution. The work also pays attention to the algorithm's speed, which is practically useful. The experiments also compare against several other potential feature aggregators. However, there are several weaknesses in the current version of the paper:
- In parts the paper feels overly complicated, particularly in the method (Section 2). It would be good to see more intuitive explanations of the concepts introduced here. For instance, the authors state that c_i captures the contextual information from other video snippets; it would be good to see a figure with an example video and the behaviour of p_i and c_i as opposed to lambda_i. I found it difficult to map p_i and c_i to the z and lambda used elsewhere.
- The experimental evidence does not show where the improvement comes from. The authors manage to achieve a 4-5% improvement over STPN through their re-implementation of the algorithm, yet obtain only a ~2% improvement with their marginalized average attention on THUMOS. I would like to know the cause of the increase over the original STPN results: is it a case of not being able to replicate the results of STPN, or do the different parameter choices, such as the use of leaky ReLU, 20 snippets instead of 400, and only rejecting classes whose video-level probabilities are below 0.01 instead of 0.1, cause this large an increase in results? There is also little evidence that the actual proposal (contextual information) is the reason for the reported improvement.
- There seem to be several gaps in the review of the current literature. Firstly, the authors refer to Wei et al. 2017 and Zhang et al. 2018b as works which erase the most salient regions in order to explore regions other than the most salient. The authors state that the problem with these methods is that they are not end-to-end trainable; however, Li et al. 2018 'Tell Me Where to Look: Guided Attention Inference Network' proposes a method which erases regions and is trainable end-to-end. Secondly, the authors do not mention the recent work W-TALC, which performs weakly-supervised action localization and outperforms STPN. It would be good to have a baseline against this method.
- The qualitative results in this paper are confusing and not convincing. It is true that MAAN's activation sequence shows peaks which correspond to the ground truth and are not present in other methods.
However, the MAAN activation sequence also shows several extra peaks that are not present in the other methods and not present in the ground truth; it therefore appears keener to predict the presence of the action, producing more true positives but also more false positives. It would be good to see some discussion of these failure cases and/or more qualitative results. The current figure could easily be compressed by showing a single instance of the ground truth instead of one next to each method. I like the idea of the paper; however, I am currently unconvinced by the results that this is the correct method to solve the problem.
ICLR
Title Rethinking the limiting dynamics of SGD: modified loss, phase space oscillations, and anomalous diffusion Abstract In this work we explore the limiting dynamics of deep neural networks trained with stochastic gradient descent (SGD). As observed previously, long after performance has converged, networks continue to move through parameter space by a process of anomalous diffusion in which distance travelled grows as a power law in the number of gradient updates with a nontrivial exponent. We reveal an intricate interaction between the hyperparameters of optimization, the structure in the gradient noise, and the Hessian matrix at the end of training that explains this anomalous diffusion. To build this understanding, we first derive a continuoustime model for SGD with finite learning rates and batch sizes as an underdamped Langevin equation. We study this equation in the setting of linear regression, where we can derive exact, analytic expressions for the phase space dynamics of the parameters and their instantaneous velocities from initialization to stationarity. Using the Fokker-Planck equation, we show that the key ingredient driving these dynamics is not the original training loss, but rather the combination of a modified loss, which implicitly regularizes the velocity, and probability currents, which cause oscillations in phase space. We identify qualitative and quantitative predictions of this theory in the dynamics of a ResNet-18 model trained on ImageNet. Through the lens of statistical physics, we uncover a mechanistic origin for the anomalous limiting dynamics of deep neural networks trained with SGD. 1 INTRODUCTION Deep neural networks have demonstrated remarkable generalization across a variety of datasets and tasks. Essential to their success has been a collection of good practices on how to train these models with stochastic gradient descent (SGD). Yet, despite their importance, these practices are mainly based on heuristic arguments and trial and error search. Without a general theory connecting the hyperparameters of optimization, the architecture of the network, and the geometry of the dataset, theory-driven design of deep learning systems is impossible. Existing theoretical works studying this interaction have leveraged the random structure of neural networks at initialization [1, 2, 3] and in their infinite width limits in order to study their dynamics [4, 5, 6, 7, 8]. Here we take a different approach and study the training dynamics of pre-trained networks that are ready to be used for inference. By leveraging the mathematical structures found at the end of training, we uncover an intricate interaction between the hyperparameters of optimization, the structure in the gradient noise, and the Hessian matrix that corroborates previously identified empirical behavior such as anomalous limiting dynamics. Not only is understanding the limiting dynamics of SGD a critical stepping stone to building a complete theory for the learning dynamics of neural networks, but recently there have been a series of works demonstrating that the performance of pre-trained networks can be improved through averaging and ensembling [9, 10, 11]. Combining empirical exploration and theoretical tools from statistical physics, we identify and uncover a mechanistic explanation for the limiting dynamics of neural networks trained with SGD. 2 DIFFUSIVE BEHAVIOR IN THE LIMITING DYNAMICS OF SGD A network that has converged in performance will continue to move through parameter space [12, 13, 14, 15]. 
To demonstrate this behavior, we resume training of pre-trained convolutional networks while tracking the network trajectory through parameter space. Let θ∗ ∈ Rm be the parameter vector for a pre-trained network and θk ∈ Rm be the parameter vector after k steps of resumed training. We track two metrics of the training trajectory, namely the local parameter displacement δk between consecutive steps, and the global displacement ∆k after k steps from the pre-trained initialization: δk = θk − θk−1, ∆k = θk − θ∗. (1) As shown in Fig. 1, neither of these differences converge to zero across a variety of architectures, indicating that despite performance convergence, the networks continue to move through parameter space, both locally and globally. The squared norm of the local displacement ‖δk‖22 remains near a constant value, indicating the network is essentially moving at a constant instantaneous speed. This observation is quite similar to the “equilibrium" phenomenon or “constant angular update" observed in Li et al. [17] and Wan et al. [13] respectively. However, these works only studied the displacement for parameters immediately preceding a normalization layer. The constant instantaneous speed behavior we observe is for all parameters in the model and is even present in models without normalization layers. While the squared norm of the local displacement is essentially constant, the squared norm of the global displacement ‖∆k‖22 is monotonically growing for all networks, implying even once trained, the network continues to diverge from where it has been. Indeed Fig. 1 indicates a power law relationship between global displacement and number of steps, given by ‖∆k‖22 ∝ kc. As we’ll see in section 8, this relationship is indicative of anomalous diffusion where c corresponds to the anomalous diffusion exponent. Standard Brownian motion corresponds to c = 1. Similar observation were made by Baity-Jesi et al. [14] who noticed distinct phases of the training trajectory evident in the dynamics of the global displacement and Chen et al. [15] who found that the exponent of diffusion changes through the course of training. A parallel observation is given by Hoffer et al. [18] for the beginning of training, where they measure the global dis- placement from the initialization of an untrained network and observe a rate ∝ log(k), a form of ultra-slow diffusion. These empirical observations raise the natural questions, where is the network moving to and why? To answer these questions we will build a diffusion based theory of SGD, study these dynamics in the setting of linear regression, and use lessons learned in this fundamental setting to understand the limiting dynamics of neural networks. 3 RELATED WORK There is a long line of literature studying both theoretically and empirically the learning dynamics of deep neural networks trained with SGD. Our analysis and experiments build upon this literature. Continuous models for SGD. Many works consider how to improve the classic gradient flow model for SGD to more realistically reflect momentum [19], discretization due to finite learning rates [20, 21], and stochasticity due to random batches [22, 23]. One line of work has studied the dynamics of networks in their infinite width limits through dynamical mean field theory [24, 25, 26, 27], while a different approach has used stochastic differential equations (SDEs) to model SGD directly, the approach we take in this work. However, recently, the validity of this approach has been questioned. 
The main argument, as nicely explained in Yaida [28], is that most SDE approximations simultaneously assume that ∆t → 0+, while maintaining that the learning rate η = ∆t is finite. The works Simsekli et al. [29] and Li et al. [30] have questioned the correctness of the using the central limit theorem (CLT) to model the gradient noise as Gaussian, arguing respectively that the heavy-tailed structure in the gradient noise and the weak dependence between batches leads the CLT to break down. In our work, we maintain the CLT assumption holds, which we discuss fur- ther in appendix A, but importantly we avoid the pitfalls of many previous SDE approximations by simultaneously modeling the effect of finite learning rates and stochasticity. Limiting dynamics. A series of works have applied SDE models of SGD to study the limiting dynamics of neural networks. In the seminal work by Mandt et al. [31], the limiting dynamics were modeled with a multivariate Ornstein-Uhlenbeck process by combining a first-order SDE model for SGD with assumptions on the geometry of the loss and covariance matrix for the gradient noise. This analysis was extended by Jastrzębski et al. [12] through additional assumptions on the covariance matrix to gain tractable insights and applied by Ali et al. [32] to the simpler setting of linear regression, which has a quadratic loss. A different approach was taken by Chaudhari and Soatto [33], which did not formulate the dynamics as an OU process, nor assume directly a structure on the loss or gradient noise. Rather, this analysis studied the same first-order SDE via the Fokker-Planck equation to propose the existence of a modified loss and probability currents driving the limiting dynamics, but did not provide explicit expressions. Our analysis deepens and combines ideas from all these works, where our key insight is to lift the dynamics into phase space. By studying the dynamics of the parameters and their velocities, and by applying the analysis first in the setting of linear regression where assumptions are provably true, we are able to identify analytic expressions and explicit insights which lead to concrete predictions and testable hypothesis. Stationary dynamics. A different line of work avoids modeling the limiting dynamics of SGD with an SDE and instead chooses to leverage the property of stationarity. These works [28, 34, 35, 36] assume that eventually the probability distribution governing the model parameters reaches stationarity such that the discrete SGD process is simply sampling from this distribution. Yaida [28] used this approach to derive fluctuation-dissipation relations that link measurable quantities of the parameters and hyperparameters of SGD. Liu et al. [35] used this approach to derive properties for the stationary distribution of SGD with a quadratic loss. Similar to our analysis, this work identifies that the stationary distribution for the parameters reflects a modified loss function dependent on the relationship between the covariance matrix of the gradient noise and the Hessian matrix for the original loss. Empirical exploration. Another set of works analyzing the limiting dynamics of SGD has taken a purely empirical approach. Building on the intuition that flat minima generalize better than sharp minima, Keskar et al. [37] demonstrated empirically that the hyperparameters of optimization influence the eigenvalue spectrum of the Hessian matrix at the end of training. 
Many subsequent works have studied the Hessian eigenspectrum during and at the end of training. Jastrzębski et al. [38], Cohen et al. [39] studied the dynamics of the top eigenvalues during training. Sagun et al. [40], Papyan [41], Ghorbani et al. [42] demonstrated the spectrum has a bulk of values near zero plus a small number of larger outliers. Gur-Ari et al. [43] demonstrated that the learning dynamics are constrained to the subspace spanned by the top eigenvectors, but found no special properties of the dynamics within this subspace. In our work we also determine that the top eigensubspace of the Hessian plays a crucial role in the limiting dynamics and by projecting the dynamics into this subspace in phase space, we see that the motion is not random, but consists of incoherent oscillations leading to anomalous diffusion. 4 MODELING SGD AS AN UNDERDAMPED LANGEVIN EQUATION Following the route of previous works [31, 12, 33] studying the limiting dynamics of neural networks, we first seek to model SGD as a continuous stochastic process. We consider a network parameterized by θ ∈ Rm, a training dataset {x1, . . . , xN} of size N , and a training loss L(θ) = 1N ∑N i=1 `(θ, xi) with corresponding gradient g(θ) = ∂L ∂θ . The state of the network at the kth step of training is defined by the position vector θk and velocity vector vk of the same dimension. The gradient descent update with learning rate η, momentum β, and weight decayλ is given by vk+1 = βvk − g(θk)− λθk, θk+1 = θk + ηvk+1, (2) where we initialize the network such that v0 = 0 and θ0 is the parameter initialization. In order to understand the dynamics of the network through position and velocity space, which we will refer to as phase space, we express these discrete recursive equations as the discretization of some unknown ordinary differential equation (ODE), sometimes referred to as a modified equation as in [44, 20]. While this ODE models the gradient descent process even at finite learning rates, it fails to account for the stochasticity introduced by choosing a random batch B of size S drawn uniformly from the set of N training points. This sampling yields the stochastic gradient gB(θ) = 1S ∑ i∈B∇`(θ, xi). To model this effect, we make the following assumption: Assumption 1 (CLT). We assume the batch gradient is a noisy version of the true gradient such that gB(θ)− g(θ) is a Gaussian random variable with mean 0 and covariance 1SΣ(θ). Incorporating this model of stochastic gradients into the previous finite difference equation and applying the stochastic counterparts to Euler discretizations, results in the standard drift-diffusion stochastic differential equation (SDE), referred to as an underdamped Langevin equation, d [ θ v ] = [ v − 2η(1+β) (g(θ) + λθ + (1− β)v) ] dt+ [ 0 0 0 2√ ηS(1+β) √ Σ(θ) ] dWt, (3) where Wt is a standard Wiener process. This is the continuous model we will study in this work: Assumption 2 (SDE). We assume the underdamped Langevin equation (3) accurately models the trajectory of the network driven by SGD through phase space such that θ(ηk) ≈ θk and v(ηk) ≈ vk. See appendix A for further discussion on the nuances of modeling SGD with an SDE. 5 LINEAR REGRESSION WITH SGD IS AN ORNSTEIN-UHLENBECK PROCESS Equipped with a model for SGD, we seek to understand its dynamics in the fundamental setting of linear regression, one of the few cases where we have a complete model for the interaction of the dataset, architecture, and optimizer. 
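Before setting up this case formally, a minimal sketch of the discrete recursion in equation (2) on a synthetic least squares problem may be helpful; the data, dimensions, and hyperparameters below are illustrative assumptions rather than the settings used in our experiments.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic least squares problem (illustrative only).
N, d = 512, 10
X = rng.normal(size=(N, d))
Y = X @ rng.normal(size=d) + 0.1 * rng.normal(size=N)
H = X.T @ X / N                      # Hessian of the least squares loss
b = X.T @ Y / N

def grad(theta):
    """Full-batch gradient g(theta) = H theta - b."""
    return H @ theta - b

# Hyperparameters of optimization (illustrative).
eta, beta, lam = 0.1, 0.9, 1e-4

# SGD with momentum and weight decay, equation (2).
theta = rng.normal(size=d)           # parameter initialization
v = np.zeros(d)                      # velocity initialization
for k in range(1000):
    v = beta * v - grad(theta) - lam * theta
    theta = theta + eta * v

# With a full-batch gradient the iterates approach the ridge solution mu.
mu = np.linalg.solve(H + lam * np.eye(d), b)
print(np.linalg.norm(theta - mu))
```

With a full-batch gradient this recursion simply converges to the ridge solution; the interesting limiting behavior studied below only arises once minibatch noise is included.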
Let X ∈ RN×d be the input data, Y ∈ RN be the output labels, and θ ∈ Rd be our vector of regression coefficients. The least squares loss is the convex quadratic lossL(θ) = 12N ‖Y −Xθ‖2 with gradient g(θ) = Hθ−b, whereH = X ᵀX N and b = XᵀY N . Plugging this expression for the gradient into the underdamped Langevin equation (3), and rearranging terms, results in the multivariate Ornstein-Uhlenbeck (OU) process, d [ θt vt ] = − [ 0 −I 2 η(1+β) (H + λI) 2(1−β) η(1+β)I ] ︸ ︷︷ ︸ A ([ θt vt ] − [ µ 0 ]) dt+ √ 2κ−1 √√√√√ [ 0 0 0 2(1−β)η(1+β)Σ(θ) ] ︸ ︷︷ ︸ D dWt, (4) where A and D are the drift and diffusion matrices respectively, κ = S(1− β2) is an inverse temperature constant, and µ = (H + λI)−1b is the ridge regression solution. The solution to an OU process is a Gaussian process. By solving for the temporal dynamics of the first and second moments of the process, we can obtain an analytic expression for the trajectory at any time t. In particular, we can decompose the trajectory as the sum of a deterministic and stochastic component defined by the first and second moments respectively. Deterministic component. Using the form of A we can decompose the expectation as a sum of harmonic oscillators in the eigenbasis {q1, . . . , qm} of the Hessian, E [[ θt vt ]] = [ µ 0 ] + m∑ i=1 ( ai(t) [ qi 0 ] + bi(t) [ 0 qi ]) . (5) Here the coefficients ai(t) and bi(t) depend on the optimization hyperparameters η, β, λ, S and the respective eigenvalue of the Hessian ρi as further explained in appendix F. We verify this expression nearly perfectly matches empirics on complex datasets under various hyperparameter settings as shown in Fig. 2. Stochastic component. The cross-covariance of the process between two points in time t ≤ s, is Cov ([ θt vt ] , [ θs vs ]) =κ−1 ( B−e−AtBe−Aᵀt ) eA ᵀ(t−s), (6) where B solves the Lyapunov equation AB +BAᵀ = 2D. In order to gain analytic expressions for B in terms of the optimization hyperparameters, eigendecomposition of the Hessian, and covariance of the gradient noise, we must introduce the following assumption: Assumption 3 (Simultaneously Diagonalizable). We assume the covariance of the gradient noise is spatially independent Σ(θ) = Σ and commutes with the Hessian HΣ = ΣH , therefore sharing a common eigenbasis. 6 UNDERSTANDING STATIONARITY VIA THE FOKKER-PLANCK EQUATION The OU process is unique in that it is one of the few SDEs which we can solve exactly. As shown in section 5, we were able to derive exact expressions for the dynamics of linear regression trained with SGD from initialization to stationarity by simply solving for the first and second moments. While the expression for the first moment provides an understanding of the intricate oscillatory relationship in the deterministic component of the process, the second moment, driving the stochastic component, is much more opaque. An alternative route to solving the OU process that potentially provides more insight is the Fokker-Planck equation. The Fokker-Planck (FP) equation is a PDE describing the time evolution for the probability distribution of a particle governed by Langevin dynamics. For an arbitrary potential Φ and diffusion matrix D, the Fokker-Planck equation (under an Itô integration prescription) is ∂tp = ∇ · ( ∇Φp+∇ · ( κ−1Dp ))︸ ︷︷ ︸ −J , (7) where p represents the time-dependent probability distribution, and J is a vector field commonly referred to as the probability current. 
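Before turning to the stationary solution, the deterministic component in equation (5) is easy to check numerically. The sketch below compares the analytic mean of the OU process (a matrix exponential of the drift, evaluated at t = ηk) with an average over independent minibatch SGD runs on a small synthetic regression problem; the data and hyperparameters are illustrative, and scipy is assumed to be available for the matrix exponential.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)

# Small synthetic regression problem (illustrative).
N, d, S = 2048, 5, 32
X = rng.normal(size=(N, d))
Y = X @ rng.normal(size=d) + 0.1 * rng.normal(size=N)
H, b = X.T @ X / N, X.T @ Y / N

eta, beta, lam = 0.05, 0.9, 0.0
mu = np.linalg.solve(H + lam * np.eye(d), b)

# Drift matrix A of the OU process in phase space, equation (4).
c = 2.0 / (eta * (1.0 + beta))
A = np.block([[np.zeros((d, d)), -np.eye(d)],
              [c * (H + lam * np.eye(d)), c * (1.0 - beta) * np.eye(d)]])

theta0, v0 = rng.normal(size=d), np.zeros(d)
z = np.concatenate([theta0 - mu, v0])       # deviation from the fixed point

steps = 400
Ek = expm(-A * eta)                         # one-step propagator exp(-A * eta)
analytic = np.zeros((steps, d))
for k in range(steps):
    analytic[k] = z[:d] + mu                # E[theta_k] from the first moment
    z = Ek @ z

# Empirical mean over many independent minibatch SGD runs, equation (2).
runs = 200
empirical = np.zeros((steps, d))
for _ in range(runs):
    theta, v = theta0.copy(), v0.copy()
    for k in range(steps):
        empirical[k] += theta
        idx = rng.integers(0, N, size=S)
        g = X[idx].T @ (X[idx] @ theta - Y[idx]) / S
        v = beta * v - g - lam * theta
        theta = theta + eta * v
empirical /= runs

# The two should agree closely (the SDE is an approximation at finite eta).
print(np.max(np.abs(empirical - analytic)))
```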
The FP equation is especially useful for explicitly solving for the stationary solution, assuming one exists, of the Langevin dynamics. The stationary solution pss by definition obeys ∂tpss = 0 or equivalently ∇ · Jss = 0. From this second definition we see that there are two distinct settings of stationarity: detailed balance when Jss = 0 everywhere, or broken detailed balance when ∇ · Jss = 0 and Jss 6= 0. For a general OU process, the potential is a convex quadratic function Φ(x) = xᵀAx defined by the drift matrix A. When the diffusion matrix is isotropic (D ∝ I) and spatially independent (∇ · D = 0) the resulting stationary solution is a Gibbs distribution pss(x) ∝ e−κΦ(x) determined by the original loss Φ(x) and is in detailed balance. Lesser known properties of the OU process arise when the diffusion matrix is anisotropic or spatially dependent [45, 46]. In this setting the solution is still a Gaussian process, but the stationary solution, if it exists, is no longer defined by the Gibbs distribution of the original loss Φ(x), but actually a modified loss Ψ(x). Furthermore, the stationary solution may be in broken detailed balance leading to a non-zero probability current Jss(x). Depending on the relationship between the drift matrix A and the diffusion matrix D the resulting dynamics of the OU process can have very nontrivial behavior. In the setting of linear regression, anisotropy in the data distribution will lead to anisotropy in the gradient noise and thus an anisotropic diffusion matrix. This implies that for most datasets we should expect that the SGD trajectory is not driven by the original least squares loss, but by a modified loss and converges to a stationary solution with broken detailed balance, as predicted by Chaudhari and Soatto [33]. Using the explicit expressions for the drift A and diffusion D matrices we can compute analytically the modified loss and stationary probability current, Ψ(θ, v) = ([ θ v ] − [ µ 0 ])ᵀ( U 2 )([ θ v ] − [ µ 0 ]) , Jss(θ, v) = −QU ([ θ v ] − [ µ 0 ]) pss, (8) where Q is a skew-symmetric matrix and U is a positive definite matrix defined as, Q = [ 0 −Σ(θ) Σ(θ) 0 ] , U = [ 2 η(1+β)Σ(θ) −1 (H + λI) 0 0 Σ(θ)−1 ] . (9) These new fundamental matrices, Q and U , relate to the original drift A and diffusion D matrices through the unique decomposition A = (D + Q)U , introduced by Ao [47] and Kwon et al. [48]. Using this decomposition we can easily show that B = U−1 solves the Lyapunov equation and indeed the stationary solution pss is the Gibbs distribution defined by the modified loss Ψ(θ, v) in equation (8). Further, the stationary cross-covariance solved in section 5 reflects the oscillatory dynamics introduced by the stationary probability currents Jss(θ, v) in equation (8). Taken together, we gain the intuition that the limiting dynamics of SGD in linear regression are driven by a modified loss subject to oscillatory probability currents. 7 EVIDENCE OF A MODIFIED LOSS AND OSCILLATIONS IN DEEP LEARNING Does the theory derived in the linear regression setting (sections 5, 6) help explain the empirical phenomena observed in the non-linear setting of deep neural networks (section 2)? In order for the theory built in the previous sections to apply to the limiting dynamics of neural networks, we must introduce simplifying assumptions on the loss landscape and gradient noise at the end of training: Assumption 4 (Quadratic Loss). 
We assume that at the end of training the loss for a neural network can be approximated by the quadratic loss L(θ) = (θ − µ)ᵀ ( H 2 ) (θ − µ), where H 0 is the training loss Hessian and µ is some unknown mean vector, corresponding to a local minimum. Assumption 5 (Covariance Structure). We assume the covariance of the gradient noise is proportional to the Hessian of the quadratic loss Σ(θ) = σ2H where σ ∈ R+ is some unknown scalar. Under these simplifications, then the expressions derived in the linear regression setting would apply to the limiting dynamics of deep neural networks and depend only on quantities that we can easily estimate empirically. Of course, these simplifications are quite strong, but without arguing their theoretical validity, we can empirically test their qualitative implications: (1) a modified isotropic loss driving the limiting dynamics through parameter space, (2) implicit regularization of the velocity trajectory, and (3) oscillatory phase space dynamics determined by the Hessian eigen-structure. Modified loss. As discussed in section 6, due to the anisotropy of the diffusion matrix, the loss landscape driving the dynamics at the end of training is not the original training loss L(θ), but a modified loss Ψ(θ, v) in phase space. As shown in equation (8), the modified loss decouples into a term Ψθ that only depends on the parameters θ and a term Ψv that only depends on the velocities v. Under assumption 5, the parameter dependent component is proportional to the convex quadratic, Ψθ ∝ (θ − µ)ᵀ ( H−1(H + λI) η(1 + β) ) (θ − µ) . (10) This quadratic function has the same mean µ as the training loss, but a different curvature. Using this expression, notice that when λ ≈ 0, the modified loss is isotropic in the column space of H , regardless of what the nonzero eigenspectrum of H is. This striking prediction suggests that no matter how anisotropic the original training loss – as reflected by poor conditioning of the Hessian eigenspectrum – the training trajectory of the network will behave isotropically, since it is driven not by the original anisotropic loss, but a modified isotropic loss. We test this prediction by studying the limiting dynamics of a pre-trained ResNet-18 model with batch normalization that we continue to train on ImageNet according to the last setting of its hyperparameters [49]. Let θ∗ represent the initial pre-trained parameters of the network, depicted with the white dot in figures 3 and 4. We estimate1 the top thirty eigenvectors q1, . . . , q30 of the Hessian matrix H∗ evaluated at θ∗ and project the limiting trajectory for the parameters onto the plane spanned by the top q1 and bottom q30 eigenvectors to maximize the illustrated anisotropy with our estimates. We sample the train and test loss in this subspace for a region around the projected trajectory. Additionally, using the hyperparameters of the optimization, the eigenvalues ρ1 and ρ30, and the estimate for the mean µ = θ∗−H−1∗ g∗ (g∗ is the gradient evaluated at θ∗), we also sample from the modified loss equation (10) in the same region. Figure 3 shows the projected parameter trajectory on the sampled train, test and modified losses. Contour lines of both the train and test loss exhibit anisotropic structure, with sharper curvature along eigenvector q1 compared to eigenvector q30, as expected. However, as predicted, the trajectory appears to cover both directions equally. 
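A schematic of this projection-and-sampling procedure is sketched below; the eigenvectors, eigenvalues, mean estimate, and recorded trajectory are treated as precomputed inputs, all names and numbers are placeholders, and the training loss is replaced by its local quadratic approximation.

```python
import numpy as np

def loss_surfaces_on_plane(traj, mu, q1, q30, rho1, rho30, eta, beta, lam, grid=50):
    """Project a parameter trajectory onto the (q1, q30) plane and sample the
    local quadratic training loss and the modified loss of equation (10) on a
    grid around it. traj has shape (num_steps, num_params)."""
    a = (traj - mu) @ q1                      # coordinates along the top eigenvector
    c = (traj - mu) @ q30                     # coordinates along the bottom eigenvector

    pad = 0.1 * max(np.ptp(a), np.ptp(c))
    xs = np.linspace(a.min() - pad, a.max() + pad, grid)
    ys = np.linspace(c.min() - pad, c.max() + pad, grid)
    Xg, Yg = np.meshgrid(xs, ys)

    # Quadratic approximation of the training loss restricted to the plane.
    train = 0.5 * (rho1 * Xg**2 + rho30 * Yg**2)
    # Modified loss of equation (10) restricted to the plane; for lam ~ 0 the
    # two curvatures coincide and the contours become circles (isotropy).
    modified = ((rho1 + lam) / rho1 * Xg**2
                + (rho30 + lam) / rho30 * Yg**2) / (eta * (1.0 + beta))
    # Xg, Yg, train, modified can be passed directly to a contour plot.
    return (a, c), (Xg, Yg, train, modified)

# Illustrative usage with random placeholders standing in for real estimates.
rng = np.random.default_rng(2)
m = 100
q1, q30 = np.eye(m)[0], np.eye(m)[1]
traj = 0.01 * rng.normal(size=(500, m))
mu = np.zeros(m)
coords, surfaces = loss_surfaces_on_plane(traj, mu, q1, q30,
                                          rho1=50.0, rho30=0.5,
                                          eta=0.1, beta=0.9, lam=1e-4)
```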
This striking isotropy of the trajectory within a highly anisotropic slice of the loss landscape indicates qualitatively that the trajectory evolves in a modified isotropic loss landscape. Implicit velocity regularization. A second qualitative prediction of the theory is that the velocity is regulated by the inverse Hessian of the training loss. Of course there are no explicit terms in either the train or test losses that depend on the velocity. Yet, the modified loss contains a component, Ψv ∝ vᵀH−1v, that only depends on the velocities This additional term can be understood as a form of implicit regularization on the velocity trajectory. Indeed, when we project the velocity trajectory onto the plane spanned by the q1 and q30 eigenvectors, as shown in Fig. 4, we see that the trajectory closely resembles the curvature of the inverse Hessian H−1. The modified loss is effectively penalizing SGD for moving in eigenvectors of the Hessian with small eigenvalues. A similar qualitative effect was recently proposed by Barrett and Dherin [21] as a consequence of the discretization error due to finite learning rates. Phase space oscillations. A final implication of the theory is that at stationarity the network is in broken detailed balance leading to non-zero probability currents flowing through phase space: Jss(θ, v) = [ v − 2η(1+β) (H + λI) (θ − µ) ] pss. (11) These probability currents encourage oscillatory dynamics in the phase space planes characterized by the eigenvectors of the Hessian, at rates proportional to their eigenvalues. We consider the same projected trajectory of the ResNet-18 model visualized in figures 3 and 4, but plot the trajectory in phase space for the two eigenvectors q1 and q30 separately. Shown in Fig. 5, we see that both trajectories look like noisy clockwise rotations. Qualitatively, the trajectories for the different eigenvectors appear to be rotating at different rates. The integral curves of the stationary probability current are one-dimensional paths confined to level sets of the modified loss. These paths might cross themselves, in which case they are limit cycles, or they could cover the entire surface of the level sets, in which case they are space-filling curves. This distinction depends on the relative frequencies of the oscillations, as determined by the pairwise 1To estimate the eigenvectors of H∗ we use subspace iteration, and limit ourselves to 30 eigenvectors to constrain computation time. See appendix H for details. ratios of the eigenvalues of the Hessian. For real-world datasets, with a large spectrum of incommensurate frequencies, we expect to be in the latter setting, thus contradicting the suggestion that SGD in deep networks converges to limit cycles, as claimed in Chaudhari and Soatto [33]. 8 UNDERSTANDING THE DIFFUSIVE BEHAVIOUR OF THE LIMITING DYNAMICS Taken together the empirical results shown in section 7 indicate that many of the same qualitative behaviors of SGD identified theoretically for linear regression are evident in the limiting dynamics of neural networks. Can this theory quantitatively explain the results we identified in section 2? Constant instantaneous speed. As noted in section 2, we observed that at the end of training, across various architectures, the squared norm of the local displacement ‖δt‖22 remains essentially constant. Assuming the limiting dynamics are described by the stationary solution the expectation of the local displacement is Ess [ ‖δt‖2 ] = η2 S(1− β2)σ 2tr (H) , (12) as derived in appendix G. 
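Equation (12) also suggests a simple estimation procedure, sketched below and described in the following paragraph: a single measured average speed at one hyperparameter setting yields an estimate of the unknown scale σ2tr(H), which then predicts the instantaneous speed at any other setting (the measured value in the sketch is a placeholder, not a reported measurement).

```python
import numpy as np

def predicted_speed(sigma2_trH, eta, S, beta):
    """Expected squared norm of the local displacement, equation (12)."""
    return eta**2 / (S * (1.0 - beta**2)) * sigma2_trH

# Reference setting and a measured mean ||delta_k||^2 (placeholder value).
eta0, S0, beta0 = 0.1, 256, 0.9
measured_speed0 = 3.2e-4

# Invert equation (12) once to estimate sigma^2 tr(H).
sigma2_trH = measured_speed0 * S0 * (1.0 - beta0**2) / eta0**2

# Predict the speed for other hyperparameter settings in a sweep.
for eta, S, beta in [(0.05, 256, 0.9), (0.1, 128, 0.9), (0.1, 256, 0.99)]:
    print(eta, S, beta, predicted_speed(sigma2_trH, eta, S, beta))
```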
We cannot test this prediction directly as we do not know σ2 and computing tr(H) is computationally prohibitive. However, we can estimate σ2tr(H) by resuming training for a model, measuring the average ‖δt‖2, and then inverting equation (12). Using this single estimate, we find that for a sweep of models with varying hyperparameters, equation (12) accurately predicts their instantaneous speed. Indeed, Fig. 6 shows an exact match between the empirics and theory, which strongly suggests that despite changing hyperparameters at the end of training, the model remains in the same quadratic basin. Exponent of anomalous diffusion. The expected value for the global displacement under the stationary solution can also be analytically expressed in terms of the optimization hyperparameters and the eigendecomposition of the Hessian as, Ess [ ‖∆t‖2 ] = η2 S(1− β2)σ 2 ( tr (H) t+ 2t t∑ k=1 ( 1− k t ) m∑ l=1 ρlCl(k) ) , (13) where Cl(k) is a trigonometric function describing the velocity of a harmonic oscillator with damping ratio ζl = (1 − β)/ √ 2η(1 + β) (pl + λ), see appendix G for details. As shown empirically in section 2, the squared norm ‖∆t‖2 monotonically increases as a power law in the number of steps, suggesting its expectation is proportional to tc for some unknown, constant c. The exponent c determines the regime of diffusion for the process. When c = 1, the process corresponds to standard Brownian diffusion. For c > 1 or c < 1 the process corresponds to anomalous super-diffusion or sub-diffusion respectively. Unfortunately, it is not immediately clear how to extract the explicit exponent c from equation (13). However, by exploring the functional form of Cl(k) and its relationship to the hyperparameters of optimization through the damping ratio ζl, we can determine overall trends in the diffusion exponent c. Akin to how the exponent c determines the regime of diffusion, the damping ratio ζl determines the regime for the harmonic oscillator describing the stationary velocity-velocity correlation in the lth eigenvector of the Hessian. When ζl = 1, the oscillator is critically damped implying the velocity correlations converge to zero as quickly as possible. In the extreme setting of Cl(k) = 0 for all l, k, then equation (13) simplifies to standard Brownian diffusion, Ess [ ‖∆t‖2 ] ∝ t. When ζl > 1, the oscillator is overdamped implying the velocity correlations dampen slowly and remain positive even over long temporal lags. Such long lasting temporal correlations in velocity lead to faster global displacement. Indeed, in the extreme setting of Cl(k) = 1 for all l, k, then equation (13) simplifies to a form of anomalous super-diffusion, Ess [ ‖∆t‖2 ] ∝ t2. When ζl < 1, the oscillator is underdamped implying the velocity correlations will oscillate quickly between positive and negative values. Indeed, the only way equation (13) could describe anomalous sub-diffusion is if Cl(k) took on negative values for certain l, k. Using the same sweep of models described previously, we can empirically confirm that the optimization hyperparameters each influence the diffusion exponent c. As shown in Fig. 6, the learning rate, batch size, and momentum can each independently drive the exponent c into different regimes of anomalous diffusion. Notice how the influence of the learning rate and momentum on the diffusion exponent c closely resembles their respective influences on the damping ratio ζl. 
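This resemblance is simple to explore directly: the sketch below tabulates the damping ratio ζl = (1 − β)/√(2η(1 + β)(ρl + λ)) across a sweep of learning rates and momenta for an assumed Hessian spectrum (the spectrum and hyperparameter grid are illustrative assumptions).

```python
import numpy as np

def damping_ratio(rho, eta, beta, lam=0.0):
    """Damping ratio of the oscillator in the eigendirection with eigenvalue rho."""
    return (1.0 - beta) / np.sqrt(2.0 * eta * (1.0 + beta) * (rho + lam))

# An assumed Hessian spectrum: a few large outliers plus a bulk near zero.
spectrum = np.concatenate([np.array([100.0, 30.0, 10.0]),
                           np.abs(np.random.default_rng(3).normal(0.0, 0.1, 50))])

for eta in [0.01, 0.1, 1.0]:
    for beta in [0.0, 0.9, 0.99]:
        zeta = damping_ratio(spectrum, eta, beta)
        frac_under = np.mean(zeta < 1.0)       # fraction of underdamped directions
        print(f"eta={eta:5.2f} beta={beta:4.2f} "
              f"underdamped fraction={frac_under:.2f}")
```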
Interestingly, a larger learning rate leads to underdamped oscillations, and the resultant temporal velocities’ anti-correlations reduce the exponent of anomalous diffusion. Thus contrary to intuition, a larger learning rate actually leads to slower global transport in parameter space. The batch size on the other hand, has no influence on the damping ratio, but leads to an interesting, non-monotonic influence on the diffusion exponent. Overall, the hyperparameters of optimization and eigenspectrum of the Hessian all conspire to govern the degree of anomalous diffusion at the end of training. 9 DISCUSSION Through combined empirics and theory based on statistical physics, we uncovered an intricate interplay between the optimization hyperparameters, structure in the gradient noise, and the Hessian matrix at the end of training. Significance. The significance of our work lies in (1) the identification/verification of multiple empirical phenomena (constant instantaneous speed, anomalous diffusion in global displacement, isotropic parameter exploration despite anisotopic loss, velocity regularization, and slower global parameter exploration with faster learning rates) present in the limiting dynamics of deep neural networks, (2) the emphasis on studying the dynamics in velocity space in addition to parameter space, and (3) concrete quantitative as well as qualitative predictions of an SDE based theory that we empirically verified in deep networks trained on large scale datasets (indeed some of the above nontrivial phenomena were predictions of this theory). Of course, these contributions directly build upon a series of related works studying the immensely complex process of deep learning. To this end, we further clarify the originality of our contributions with respect to some relevant works. Originality. The empirical phenomena we present provide novel insight with respect to the works of Wan et al. [13], Hoffer et al. [18], and Chen et al. [15]. We observe that all parameters in the network (not just those with scale symmetry) move at a constant instantaneous speed at the end of training and diffuse anomalously at rates determined by the hyperparameters of optimization. In contrast to the work by Liu et al. [35], we modeled the entire SGD process as an OU process which allows us to provide insight into the transient dynamics and identify oscillations in parameter and velocity space. We build on the theoretical framework used by Chaudhari and Soatto [33] and provide explicit expressions for the limiting dynamics in the simplified linear regression setting and conclude that the oscillations present in the limiting dynamics are more likely to be space-filling curves (and not limit cycles) in deep learning due to many incommensurate oscillations. Overall, by identifying key phenomena, explaining them in a simpler setting, deriving predictions of new phenomena, and providing evidence for these predictions at scale, we are furthering the scientific study of deep learning. We hope our newly derived understanding of the limiting dynamics of SGD, and its dependence on various important hyperparameters like batch size, learning rate, and momentum, can serve as a basis for future work that can turn these insights into algorithmic gains. A MODELING SGD WITH AN SDE As explained in section 4, in order to understand the dynamics of stochastic gradient descent we build a continuous Langevin equation in phase space modeling the effect of discrete updates and stochastic batches simultaneously. 
A.1 MODELING DISCRETIZATION To model the discretization effect we assume that the system of update equations (2) is actually a discretization of some unknown ordinary differential equation. To uncover this ODE, we combine the two update equations in (2), by incorporating a previous time step θk−1, and rearrange into the form of a finite difference discretization, as shown in equation (??). Like all discretizations, the Euler discretizations introduce error terms proportional to the step size, which in this case is the learning rate η. Taylor expanding θk+1 and θk−1 around θk, its easy to show that both Euler discretizations introduce a second-order error term proportional to η2 θ̈. θk+1 − θk η = θ̇ + η 2 θ̈ +O(η2), θk − θk−1 η = θ̇ − η 2 θ̈ +O(η2). Notice how the momentum coefficient β ∈ [0, 1] regulates the amount of backward Euler incorporated into the discretization. When β = 0, we remove all backward Euler discretization leaving just the forward Euler discretization. When β = 1, we have equal amounts of backward Euler as forward Euler resulting in a central second-order discretization2 as noticed in [19]. A.2 MODELING STOCHASTICITY In order to model the effect of stochastic batches, we first model a batch gradient with the following assumption: Assumption 1 (CLT). We assume the batch gradient is a noisy version of the true gradient such that gB(θ)− g(θ) is a Gaussian random variable with mean 0 and covariance 1SΣ(θ). The two conditions needed for the CLT to hold are not exactly met in the setting of SGD. Independent and identically distributed. Generally we perform SGD by making a complete pass through the entire dataset before using a sample again which introduces a weak dependence between samples. While the covariance matrix without replacement more accurately models the dependence between samples within a batch, it fails to account for the dependence between batches. Finite variance. A different line of work has questioned the Gaussian assumption entirely because of the need for finite variance random variables. This work instead suggests using the generalized central limit theorem implying the noise would be a heavy-tailed α-stable random variable [29]. Thus, the previous assumption is implicitly assuming the i.i.d. and finite variance conditions apply for large enough datasets and small enough batches. Under the CLT assumption, we must also replace the Euler discretizations with Euler–Maruyama discretizations. For a general stochastic process, dXt = µdt+ σdWt, the Euler–Maruyama method extends the Euler method for ODEs to SDEs, resulting in the update equation Xk+1 = Xk + ∆tµ+√ ∆tσξ, where ξ ∼ N (0, 1). Notice, the key difference is that if the temporal step size is ∆t = η, then the noise is scaled by the square root √ η. In fact, the main argument against modeling SGD with an SDE, as nicely explained in Yaida [28], is that most SDE approximations simultaneously assume that ∆t → 0+, while maintaining that the square root of the learning rate √η is finite. However, by modeling the discretization and stochastic effect simultaneously we can avoid this argument, bringing us to our second assumption: Assumption 2 (SDE). We assume the underdamped Langevin equation (3) accurately models the trajectory of the network driven by SGD through phase space such that θ(ηk) ≈ θk and v(ηk) ≈ vk. This approach of modeling discretization and stochasticity simultaneously is called stochastic modified equations, as further explained in Li et al. [22]. 
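To make this combined modeling concrete, the sketch below integrates the underdamped Langevin equation (3) with an Euler–Maruyama step of size ∆t = η alongside the discrete minibatch recursion (2) on a small synthetic regression problem. All settings are illustrative, the gradient-noise covariance is estimated empirically at the ridge solution, and under assumptions 1 and 2 the long-run statistics of the two trajectories should be comparable.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic regression problem (illustrative).
N, d, S = 4096, 4, 16
X = rng.normal(size=(N, d))
Y = X @ rng.normal(size=d) + 0.1 * rng.normal(size=N)
H, b = X.T @ X / N, X.T @ Y / N

eta, beta, lam = 0.05, 0.9, 0.0
mu = np.linalg.solve(H + lam * np.eye(d), b)
c = 2.0 / (eta * (1.0 + beta))

# Gradient-noise covariance estimated at the ridge solution.
r = X @ mu - Y
Sigma = (X * (r**2)[:, None]).T @ X / N
L = np.linalg.cholesky(Sigma + 1e-12 * np.eye(d))

def run_discrete(steps):
    """Minibatch SGD with momentum, equation (2)."""
    theta, v = mu.copy(), np.zeros(d)
    traj = np.zeros((steps, d))
    for k in range(steps):
        idx = rng.integers(0, N, size=S)
        g = X[idx].T @ (X[idx] @ theta - Y[idx]) / S
        v = beta * v - g - lam * theta
        theta = theta + eta * v
        traj[k] = theta
    return traj

def run_langevin(steps):
    """Euler-Maruyama integration of equation (3) with dt = eta."""
    theta, v = mu.copy(), np.zeros(d)
    traj = np.zeros((steps, d))
    # Per-step velocity noise: sqrt(dt) times the diffusion amplitude from the
    # D and kappa of equation (4), applied through the Cholesky factor of Sigma.
    noise_scale = 2.0 / ((1.0 + beta) * np.sqrt(S))
    for k in range(steps):
        g = H @ theta - b
        v = v + eta * (-c * (g + lam * theta + (1.0 - beta) * v)) \
              + noise_scale * (L @ rng.normal(size=d))
        theta = theta + eta * v
        traj[k] = theta
    return traj

burn, steps = 2000, 20000
var_discrete = run_discrete(burn + steps)[burn:].var(axis=0).sum()
var_langevin = run_langevin(burn + steps)[burn:].var(axis=0).sum()
print(var_discrete, var_langevin)   # should be of comparable magnitude
```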
2The difference between a forward Euler and backward Euler discretization is a second-order central discretization, ( θk+1−θk η ) − ( θk−θk−1 η ) = η ( θk+1−2θk+θk−1 η2 ) = ηθ̈ +O(η2). B STRUCTURE IN THE COVARIANCE OF THE GRADIENT NOISE As we’ve mentioned before, SGD introduces highly structured noise into an optimization process, often assumed to be an essential ingredient for its ability to avoid local minima. Assumption 5 (Covariance Structure). We assume the covariance of the gradient noise is proportional to the Hessian of the quadratic loss Σ(θ) = σ2H where σ ∈ R+ is some unknown scalar. In the setting of linear regression, this is a very natural assumption. If we assume the classic generative model for linear regression data yi = x ᵀ i θ̄+σ where, θ̄ ∈ Rd is the true model and ∼ N (0, 1), then provably Σ(θ) ≈ σ2H . Proof. We can estimate the covariance as Σ(θ) ≈ 1N ∑N i=1 gig ᵀ i − ggᵀ. Near stationarity ggᵀ 1 N ∑N i=1 gig ᵀ i , and thus, Σ(θ) ≈ 1 N N∑ i=1 gig ᵀ i . Under the generative model yi = x ᵀ i θ̄ + σ where ∼ N (0, 1) and σ ∈ R+, then the gradient gi is gi = (x ᵀ i (θ − θ̄)− σ )xi, and the matrix gig ᵀ i is gig ᵀ i = (x ᵀ i (θ − θ̄)− σ )2(xixᵀi ). Assuming θ ≈ θ̄ at stationarity, then (xᵀi (θ − θ̄)− σ )2 ≈ σ2. Thus, Σ(θ) ≈ σ 2 N N∑ i=1 xix ᵀ i = σ2 N XᵀX = σ2H Also notice that weight decay is independent of the data or batch and thus simply shifts the gradient distribution, but leaves the covariance of the gradient noise unchanged. While the above analysis is in the linear regression setting, for deep neural networks it is reasonable to make the same assumption. See the appendix of Jastrzębski et al. [12] for a discussion on this assumption in the non-linear setting. Recent work by Ali et al. [32] also studies the dynamics of SGD (without momentum) in the setting of linear regression. This work, while studying the classic first-order stochastic differential equation, made a point to not introduce an assumption on the diffusion matrix. In particular, they make the point that even in the setting of linear regression, a constant covariance matrix will fail to capture the actual dynamics. To illustrate this point they consider the univariate responseless least squares problem, minimize θ∈R 1 2n n∑ i=1 (xiθ) 2. As they explain, the SGD update for this problem would be θk+1 = θk − η S (∑ i∈B xi ) θk = k∏ i=1 (1− η( 1S ∑ i∈B xi))θ0, from which they conclude for a small enough learning rate η, then with probability one θk → 0. They contrast this with the Ornstein-Uhlenbeck process given by a constant covariance matrix where while the mean for θk converges to zero its variance converges to a positive constant. So is this discrepancy evidence that an Ornstein-Uhlenbeck process with a constant covariance matrix fails to capture the updates of SGD? In many ways this problem is not a simple example, rather a pathological edge case. Consider the generative model that would give rise to this problem, y = 0x+ 0ξ = 0. In otherwords, the true model θ̄ = 0 and the standard deviation for the noise σ = 0. This would imply by the assumption used in our paper that there would be zero diffusion and the resulting SDE would simplify to a deterministic ODE that exponentially converges to zero. C A QUADRATIC LOSS AT THE END OF TRAINING Assumption 4 (Quadratic Loss). 
We assume that at the end of training the loss for a neural network can be approximated by the quadratic loss L(θ) = (θ − µ)ᵀ ( H 2 ) (θ − µ), where H 0 is the training loss Hessian and µ is some unknown mean vector, corresponding to a local minimum. This assumption has been amply used in previous works such as Mandt et al. [31], Jastrzębski et al. [12], and Poggio et al. [50]. Particularly, Mandt et al. [31] discuss how this assumption makes sense for smooth loss functions for which the stationary solution to the stochastic process reaches a deep local minimum from which it is difficult to escape. It is a well-studied fact, both empirically and theoretically, that the Hessian is low-rank near local minima as noted by Sagun et al. [51], and Kunin et al. [20]. This degeneracy results in flat directions of equal loss. Kunin et al. [20] discuss how differentiable symmetries, architectural features that keep the loss constant under certain weight transformations, give rise to these flat directions. Importantly, the Hessian and the covariance matrix share the same null space, and thus we can always restrict ourselves to the image space of the Hessian, where the drift and diffusion matrix will be full rank. Further discussion on the relationship between the Hessian and the covariance matrix can be found in Thomas et al. [52]. It is also a well known empirical fact that even at the end of training the Hessian can have negative eigenvalues [41]. This empirical observation is at odds with our assumption that the Hessian is positive semi-definite H 0. Further analysis is needed to alleviate this inconsistency. D SOLVING AN ORNSTEIN-UHLENBECK PROCESS WITH ANISOTROPIC NOISE We will study the multivariate Ornstein-Uhlenbeck process described by the stochastic differential equation dXt = A(µ−Xt)dt+ √ 2κ−1DdWt X0 = x0, (14) whereA ∈ Sm++ is a positive definite drift matrix, µ ∈ Rm is a mean vector, κ ∈ R+ is some positive constant, and D ∈ Sm++ is a positive definite diffusion matrix. This OU process is unique in that it is one of the few SDEs we can solve explicitly. We can derive an expression for XT as, XT = e −ATx0 + ( I − e−AT ) µ+ ∫ T 0 eA(t−T ) √ 2κ−1DdWt. (15) Proof. Consider the function f(t, x) = eAtx where eA is a matrix exponential. Then by Itô’s Lemma3 we can evaluate the derivative of f(t,Xt) as df(t,Xt) = ( AeAtXt + e AtA(µ−Xt) ) dt+ eAt √ 2κ−1DdWt = AeAtµdt+ eAt √ 2κ−1DdWt Integrating this expression from t = 0 to t = T gives f(T,XT )− f(0, X0) = ∫ T 0 AeAtµdt+ ∫ T 0 eAt √ 2κ−1DdWt eATXT − x0 = ( eAT − I ) µ+ ∫ T 0 eAt √ 2κ−1DdWt which rearranged gives the expression for XT . From this expression it is clear that XT is a Gaussian process. The mean of the process is E [XT ] = e −ATx0 + ( I − e−AT ) µ, (16) and the covariance and cross-covariance of the process are Var(XT ) = κ −1 ∫ T 0 eA(t−T )2DeA ᵀ(t−T )dt, (17) Cov(XT , XS) = κ −1 ∫ min(T,S) 0 eA(t−T )2DeA ᵀ(t−S)dt. (18) These last two expressions are derived by Itô Isometry4. D.1 THE LYAPUNOV EQUATION We can explicitly solve the integral expressions for the covariance and cross-covariance exactly by solving for the unique matrix B ∈ Sm++ that solves the Lyapunov equation, AB +BAᵀ = 2D. 
(19) If B solves the Lyapunov equation, notice d dt ( eA(t−T )BeA ᵀ(t−S) ) = eA(t−T )ABeA ᵀ(t−S) + eA(t−T )BAᵀeA ᵀ(t−S) = eA(t−T )2DeA ᵀ(t−S) Using this derivative, the integral expressions for the covariance and cross-covariance simplify as, Var(XT ) = κ −1 ( B − e−ATBe−AᵀT ) , (20) Cov(XT , XS) = κ −1 ( B − e−ATBe−AᵀT ) eA ᵀ(T−S), (21) where we implicitly assume T ≤ S. 3Itô’s Lemma states that for any Itô drift-diffusion process dXt = µtdt + σtdWt and twice differentiable scalar function f(t, x), then df(t,Xt) = ( ft + µtfx + σ2t 2 fxx ) dt+ σtfxdWt. 4Itô Isometry states for any standard Itô process Xt, then E [(∫ t 0 XtdWt )2] = E [∫ t 0 X2t dt ] . D.2 DECOMPOSING THE DRIFT MATRIX While the Lyapunov equation simplifies the expressions for the covariance and cross-covariance, it does not explain how to actually solve for the unknown matrix B. Following a method proposed by Kwon et al. [48], we will show how to solve for B explicitly in terms of the drift A and diffusion D. The drift matrix A can be uniquely decomposed as, A = (D +Q)U (22) whereD is our symmetric diffusion matrix,Q is a skew-symmetric matrix (i.e. Q = −Qᵀ), and U is a positive definite matrix. Using this decomposition, then B = U−1, solves the Lyapunov equation. Proof. Plug B = U−1 into the left-hand side of equation (19), AU−1 + U−1Aᵀ = (D +Q)UU−1 + U−1U(D −Q) = (D +Q) + (D −Q) = 2D Here we used the symmetry of A,D,U and the skew-symmetry of Q. All that is left is to do is solve for the unknown matricesQ and U . First notice the following identity, AD −DA = QA+AQ (23) Proof. Multiplying A = (D +Q)U on the right by (D −Q) gives, A(D −Q) = (D +Q)U(D −Q) = (D +Q)Aᵀ, which rearranged and using A = Aᵀ gives the desired equation. Let V ΛV ᵀ be the eigendecomposition of A and define the matrices D̃ = V ᵀDV and Q̃ = V ᵀQV . These matrices observe the following relationship, Q̃ij = λi − λj ρi + λj D̃ij . (24) Proof. Replace A in the previous equality with its eigendecompsoition, V ΛV ᵀD −DV ΛV ᵀ = QV ΛV ᵀ + V ΛV ᵀQ. Multiply this equation on the right by V and on the left by V ᵀ, ΛD̃ − D̃Λ = Q̃Λ + ΛQ̃. Looking at this equality element-wise and using the fact that Λ is diagonal gives the scalar equality for any i, j, (λi − λj)D̃ij = (λi + λj)Q̃ij , which rearranged gives the desired expression. Thus, Q and U are given by, Q = V Q̃V ᵀ, U = (D +Q)−1A. (25) This decomposition always holds uniquely when A,D 0, as λi−λjλi+λj exists and (D +Q) is invertible. See [48] for a discussion on the singularities of this decomposition. D.3 STATIONARY SOLUTION Using the Lyapunov equation and the drift decomposition, then XT ∼ pT , where pT = N ( e−ATx0 + ( I − e−AT ) µ, κ−1 ( U−1 − e−ATU−1e−AᵀT )) . (26) In the limit as T →∞, then e−AT → 0 and pT → pss where pss = N ( µ, κ−1U−1 ) . (27) Similarly, the cross-covariance converges to the stationary cross-covariance, Covss(XT , XS) = κ −1BeA ᵀ(T−S). (28) E A VARIATIONAL FORMULATION OF THE OU PROCESS WITH ANISOTROPIC NOISE In this section we will describe an alternative, variational, route towards solving the dynamics of the OU process studied in appendix D. Let Φ : Rn → R be an arbitrary, non-negative potential and consider the stochastic differential equation describing the Langevin dynamics of a particle in this potential field, dXt = −∇Φ(Xt)dt+ √ 2κ−1D(Xt)dWt, X0 = x0, (29) where D(Xt) is an arbitrary, spatially-dependent, diffusion matrix, κ is a temperature constant, and x0 ∈ Rm is the particle’s initial position. 
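(As an aside, the decomposition of appendix D.2 above is straightforward to verify numerically. The sketch below builds random symmetric positive definite A and D, constructs Q and U in the eigenbasis of A from the identity (λi − λj)D̃ij = (λi + λj)Q̃ij, and checks that B = U−1 solves the Lyapunov equation; all matrices are random and purely illustrative.)

```python
import numpy as np

rng = np.random.default_rng(5)
m = 6

def random_spd(m):
    """Random symmetric positive definite matrix."""
    M = rng.normal(size=(m, m))
    return M @ M.T + m * np.eye(m)

A, D = random_spd(m), random_spd(m)

# Decompose A = (D + Q) U following appendix D.2.
lam, V = np.linalg.eigh(A)                  # eigendecomposition A = V diag(lam) V^T
D_tilde = V.T @ D @ V
Q_tilde = (lam[:, None] - lam[None, :]) / (lam[:, None] + lam[None, :]) * D_tilde
Q = V @ Q_tilde @ V.T                       # skew-symmetric part
U = np.linalg.solve(D + Q, A)               # U = (D + Q)^{-1} A

B = np.linalg.inv(U)
print(np.allclose(Q, -Q.T))                 # Q is skew-symmetric
print(np.allclose(A, (D + Q) @ U))          # the decomposition reproduces A
print(np.allclose(A @ B + B @ A.T, 2 * D))  # B = U^{-1} solves the Lyapunov equation
```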
The Fokker-Planck equation describes the time evolution for the probability distribution p of the particle’s position such that p(x, t) = P(Xt = x). The FP equation is the partial differential equation5, ∂tp = ∇ · ( ∇Φ(Xt)p+ κ−1∇ · (D(Xt)p) ) , p(x, 0) = δ(x0), (30) where ∇· denotes the divergence and δ(x0) is a dirac delta distribution centered at the initialization x0. To assist in the exploration of the FP equation we define the vector field, J(x, t) = −∇Φ(Xt)p−∇ · (D(Xt)p) , (31) which is commonly referred to as the probability current. Notice, that this gives an alternative expression for the FP equation, ∂tp = −∇·J , demonstrating that J(x, t) defines the flow of probability mass through space and time. This interpretation is especially useful for solving for the stationary solution pss, which is the unique distribution that satisfies, ∂tpss = −∇ · Jss = 0, (32) where Jss is the probability current for pss. The stationary condition can be obtained in two distinct ways: 1. Detailed balance. This is when Jss(x) = 0 for all x ∈ Ω. This is analogous to reversibility for discrete Markov chains, which implies that the probability mass flowing from a state i to any state j is the same as the probability mass flowing from state j to state i. 2. Broken detailed balance. This is when ∇ · Jss(x) = 0 but Jss(x) 6= 0 for all x ∈ Ω. This is analogous to irreversibility for discrete Markov chains, which only implies that the total probability mass flowing out of state i equals to the total probability mass flowing into state i. The distinction between these two cases is critical for understanding the limiting dynamics of the process. E.1 THE VARIATIONAL FORMULATION OF THE FOKKER-PLANCK EQUATION WITH ISOTROPIC DIFFUSION We will now consider the restricted setting of standard, isotropic diffusion (D = I). It is easy enough to check that in this setting the stationary solution is pss(x) = e−κΦ(x) Z , Z = ∫ Ω e−κΦ(x)dx, (33) where pss is called a Gibbs distribution and Z is the partition function. Under this distribution, the stationary probability current is zero (Jss(x) = 0) and thus the process is in detailed balance. Interestingly, the Gibbs distribution pss has another interpretation as the unique minimizer of the the Gibbs free energy functional, F (p) = E [Φ]− κ−1H(p), (34) where E [Φ] is the expectation of the potential Φ under the distribution p and H(p) = − ∫ Ω p(x)log(p(x))dx is the Shannon entropy of p. 5This PDE is also known as the Forward Kolmogorov equation. Proof. To prove that indeed pss is the unique minimizer of the Gibbs free energy functional, consider the following equivalent expression F (p) = ∫ Ω p(x)Φ(x)dx+ κ−1 ∫ Ω p(x)log(p(x))dx = κ−1 ∫ Ω p(x) (log(p(x))− log(pss(x))) dx− κ−1 ∫ Ω log(Z) = κ−1DKL(p ‖ pss)− κ−1log(Z) From this expressions, it is clear that the Kullback–Leibler divergence is uniquely minimized when p = pss. In other words, with isotropic diffusion the stationary solution pss can be thought of as the limiting distribution given by the Fokker-Planck equation or the unique minimizer of an energetic-entropic functional. Seminal work by Jordan et al. [53] deepened this connection between the Fokker-Planck equation and the Gibbs free energy functional. In particular, their work demonstrates that the solution p(x, t) to the Fokker-Planck equation is the Wasserstein gradient flow trajectory on the Gibbs free energy functional. Steepest descent is always defined with respect to a distance metric. 
For example, the update equation, xk+1 = xk − η∇Φ(xk), for classic gradient descent on a potential Φ(x), can be formulated as the solution to the minimization problem xk+1 = argminxηΦ(x) + 1 2d(x, xk) 2 where d(x, xk) = ‖x− xk‖ is the Euclidean distance metric. Gradient flow is the continuous-time limit of gradient descent where we take η → 0+. Similarly, Wasserstein gradient flow is the continuous-time limit of steepest descent optimization defined by the Wasserstein metric. The Wasserstein metric is a distance metric between probability measures defined as, W 22 (µ1, µ2) = inf p∈Π(µ1,µ2) ∫ Rn×Rn |x− y|2p(dx, dy), (35) where µ1 and µ2 are two probability measures on Rn with finite second moments and Π(µ1, µ2) defines the set of joint probability measures with marginals µ1 and µ2. Thus, given an initial distribution and learning rate η, we can use the Wasserstein metric to derive a sequence of distributions minimizing some functional in the sense of steepest descent. In the continuous-time limit as η → 0+ this sequence defines a continuous trajectory of probability distributions minimizing the functional. Jordan et al. [54] proved, through the following theorem, that this process applied to the Gibbs free energy functional converges to the solution to the Fokker-Planck equation with the same initialization: Theorem 1 (JKO). Given an initial condition p0 with finite second moment and an η > 0, define the iterative scheme pη with iterates defined by pk = argminpη ( E [Φ]− κ−1H(p) ) +W 22 (p, p k−1). As η → 0+, then pη → p weakly in L1 where p is the solution to the Fokker-Planck equation with the same initial condition. See [54] for further explanation and [53] for a complete derivation. E.2 EXTENDING THE VARIATIONAL FORMULATION TO THE SETTING OF ANISOTROPIC DIFFUSION While the JKO theorem provides a very powerful lens through which to view solutions to the FokkerPlanck equation, and thus distributions for particles governed by Langevin dynamics, it only applies in the very restricted setting of isotropic diffusion. In this section we will review work by Chaudhari and Soatto [33] extending the variational interpretation to the setting of anisotropic diffusion. Consider when D(Xt) is an anisotropic, spatially-dependent diffusion matrix. In this setting, the original Gibbs distribution given in equation (33) does not necessarily satisfy the stationarity condition equation (32). In fact, it is not immediately clear what the stationary solution is or if the dynamics even have one. Thus, Chaudhari and Soatto [33] make the following assumption: Stationary Assumption. Assume there exists a unique distribution pss that is the stationary solution to the Fokker-Planck equation irregardless of initial conditions. Under this assumption we can implicitly define the potential Ψ(x) = −κ−1log(pss(x)). Using this modified potential we can express the stationary solution as a Gibbs distribution, pss(x) ∝ e−κΨ(x). (36) Under this implicit definition we can define the stationary probability current as Jss(x) = j(x)pss(x) where j(x) = −∇Φ(x)− κ−1∇ ·D(x) +D(x)∇Ψ(x). (37) The vector field j(x) reflects the discrepancy between the original potential Φ and the modified potential Ψ according to the diffusion D(x). Notice that in the isotropic case, when D(x) = I , then Φ = Ψ and j(x) = 0. Chaudhari and Soatto [33] introduce another property of j(x) through assumption, Conservative Assumption. Assume that the force j(x) is conservative (i.e. ∇ · j(x) = 0). 
Using this assumption, Chaudhari and Soatto [33] extends the variational formulation provided by the JKO theorem to the anisotropic setting, Theorem 2 (CS). Given an initial condition p0 with finite second moment, then the energeticentropic functional, F (p) = Ep [Ψ(x)]− κ−1H(p) monotonically decreases throughout the trajectory given by the solution to the Fokker-Planck equation with the given initial condition. In other words, the Fokker-Plank equation (30) with anisotropic diffusion can be interpreted as minimizing the expectation of a modified loss Ψ, while being implicitly regularized towards distributions that maximize entropy. The derivation requires we assume a stationary solution pss exists and that the force j(x) implicitly defined by pss is conservative. However, rather than implicitly define Ψ(x) and j(x) through assumption, if we can explicitly construct a modified loss Ψ(x) such that the resulting j(x) satisfies certain conditions, then the stationary solution exists and the variational formulation will apply as well. We formalize this statement with the following theorem, Theorem 3 (Explicit Construction). If there exists a potential Ψ(x) such that either j(x) = 0 or ∇ · j(x) = 0 and ∇Ψ(x) ⊥ j(x), then pss is the Gibbs distribution ∝ e−κΨ(x) and the variational formulation given in Theorem 2 applies. E.3 APPLYING THE VARIATIONAL FORMULATION TO THE OU PROCESS Through explicit construction we now seek to find analytic expressions for the modified loss Ψ(x) and force j(x) hypothesised by Chaudhari and Soatto [33] in the fundamental setting of an OU process with anisotropic diffusion, as described in section D. We assume the diffusion matrix is anisotropic, but spatially independent, ∇ · D(x) = 0. For the OU process the original potential generating the drift is Φ(x) = (x− µ)ᵀA2 (x− µ). (38) Recall, that in order to extend the variational formulation we must construct some potential Ψ(x) such that∇ · j(x) = 0 and∇Ψ ⊥ j(x). It is possible to construct Ψ(x) using the unique decomposition of the drift matrix A = (D +Q)U discussed in appendix D. Define the modified potential, Ψ(x) = (x− µ)ᵀ U2 (x− µ). (39) Using this potential, the force j(x) is j(x) = −A(x− µ) +DU(x− µ) = −QU(x− µ). (40) Notice that j(x) is conservative, ∇ · j(x) = ∇ · −QU (x− µ) = 0 because Q is skew-symmetric. Additionally, j(x) is orthogonal, j(x)ᵀ∇Ψ(x) = (x− µ)ᵀ UᵀQU (x− µ) = 0, again because Q is skew-symmetric. Thus, we have determined a modified potential Ψ(x) that results in a conservative orthogonal force j(x) satisfying the conditions for Theorem 3. Indeed the stationary Gibbs distribution given by Theorem 3 agrees with equation (27) derived via the first and second moments in appendix D, e−κΨ(x) ∝ N ( µ, κ−1U−1 ) In addition to the variational formulation, this interpretation further details explicitly the stationary probability current, Jss(x) = j(x)pss, and whether or not the the stationary solution is in broken detailed balance. F EXPLICIT EXPRESSIONS FOR THE OU PROCESS GENERATED BY SGD We will now consider the specific OU process generated by SGD with linear regression. Here we repeat the setup as explained in section 5. Let X ∈ RN×d, Y ∈ RN be the input data, output labels respectively and θ ∈ Rd be our vector of regression coefficients. The least squares loss is the convex quadratic loss L(θ) = 12N ‖Y −Xθ‖2 with gradient g(θ) = Hθ − b, where H = XᵀXN and b = X ᵀY N . 
Plugging this expression for the gradient into the underdamped Langevin equation (3), and rearranging terms, results in the multivariate Ornstein-Uhlenbeck (OU) process, d [ θt vt ] = A ([ µ 0 ] − [ θt vt ]) dt+ √ 2κ−1DdWt, (41) where A and D are the drift and diffusion matrices respectively, A = [ 0 −I 2 η(1+β) (H + λI) 2(1−β) η(1+β)I ] , D = [ 0 0 0 2(1−β)η(1+β)Σ(θ) ] , (42) κ = S(1− β2) is a temperature constant, and µ = (H + λI)−1b is the ridge regression solution. F.1 SOLVING FOR THE MODIFIED LOSS AND CONSERVATIVE FORCE In order to apply the expressions derived for a general OU process in appendix D and E, we must first decompose the drift as A = (D + Q)U . Under the simplification Σ(θ) = σ2H discussed in appendix B, then the matrices Q and U , as defined below, achieve this, Q = [ 0 −σ2H σ2H 0 ] , U = [ 2 η(1+β)σ2H −1 (H + λI) 0 0 1σ2H −1 ] . (43) Using these matrices we can now derive explicit expressions for the modified loss Ψ(θ, v) and conservative force j(θ, v). First notice that the least squares loss with L2 regularization is proportional to the convex quadratic, Φ(θ) = (θ − µ)ᵀ(H + λI)(θ − µ). (44) The modified loss Ψ is composed of two terms, one that only depends on the position, Ψθ(θ) = (θ − µ)ᵀ ( H−1(H + λI) η(1 + β)σ2 ) (θ − µ) , (45) and another that only depends on the velocity, Ψv(v) = v ᵀ ( H−1 σ2 ) v. (46) The conservative force j(θ, v) is j(θ, v) = [ v − 2η(1+β) (H + λI) (θ − µ) ] , (47) and thus the stationary probability current is Jss(θ, v) = j(θ, v)pss. F.2 DECOMPOSING THE TRAJECTORY INTO THE EIGENBASIS OF THE HESSIAN As shown in appendix D, the temporal distribution for the OU process at some time T ≥ 0 is, pT ([ θ v ]) = N ( e−AT [ θ0 v0 ] + ( I − e−AT ) [µ 0 ] , κ−1 ( U−1 − e−ATU−1e−AᵀT )) . Here we will now use the eigenbasis {q1, . . . , qm} of the Hessian with eigenvalues {ρ1, . . . , ρm} to derive explicit expressions for the mean and covariance of the process through time. Deterministic component. We can rearrange the expectation as E [[ θ v ]] = [ µ 0 ] + e−AT [ θ0 − µ v0 ] . Notice that the second, time-dependent term is actually the solution to the system of ODEs ˙[θ v ] = −A [ θ v ] with initial condition [θ0 − µ v0]ᵀ. This system of ODEs can be block diagonalized by factorizing A = OSOᵀ where O is orthogonal and S is block diagonal defined as O = q1 0 . . . qm 0 . . . 0 q1 . . . 0 qm S = 0 −1 2 η(1+β) (ρ1 + λ) 2(1−β) η(1+β) . . . . . . . . . 0 −1 2 η(1+β) (ρm + λ) 2(1−β) η(1+β) In otherwords in the plane spanned by [qi 0] ᵀ and [0 qi] ᵀ the system of ODEs decouples into the 2D system ˙[ai bi ] = [ 0 1 − 2η(1+β) (ρi + λ) − 2(1−β) η(1+β) ] [ ai bi ] This system has a simple physical interpretation as a damped harmonic oscillator. If we let bi = ȧi, then we can unravel this system into the second order ODE äi + 2 1− β η(1 + β) ȧi + 2 η(1 + β) (ρi + λ)ai = 0 which is in standard form (i.e. ẍ + 2γẋ + ω2x = 0) for γ = 1−βη(1+β) and ωi = √ 2 η(1+β) (ρi + λ). 
Let ai(0) = 〈θ0 − µ, qi〉 and bi(0) = 〈v0, qi〉, then the solution in terms of γ and ωi is ai(t) = e−γt ( ai(0) cosh (√ γ2 − ω2i t ) + γai(0)+bi(0)√ γ2−ω2i sinh (√ γ2 − ω2i t )) γ > ωi e−γt(ai(0) + (γai(0) + bi(0))t) γ = ωi e−γt ( ai(0) cos (√ ω2i − γ2t ) + γai(0)+bi(0)√ ω2i−γ2 sin (√ ω2i − γ2t )) γ < ωi Differentiating these equations gives us solutions for bi(t) bi(t) = e−γt ( bi(0) cosh (√ γ2 − ω2i t ) − ω 2 i ai(0)+γbi(0)√ γ2−ω2i sinh (√ γ2 − ω2i t )) γ > ωi e−γt ( bi(0)− ( ω2i ai(0) + γbi(0) ) t ) γ = ωi e−γt ( bi(0) cos (√ ω2i − γ2t ) − ω 2 i ai(0)+γbi(0)√ ω2i−γ2 sin (√ ω2i − γ2t )) γ < ωi Combining all these results, we can now analytically decompose the expectation as the sum, E [[ θ v ]] = [ µ 0 ] + m∑ i=1 ( ai(t) [ qi 0 ] + bi(t) [ 0 qi ]) . Intuitively, this equation describes a damped rotation (spiral) around the OLS solution in the planes defined by the the eigenvectors of the Hessian at a rate proportional to the respective eigenvalue. Stochastic component. Using the previous block diagonal decomposition A = OSOᵀ we can simplify the variance as Var ([ θ v ]) = κ−1 ( U−1 − e−ATU−1e−AᵀT ) = κ−1 ( U−1 − e−OSOᵀTU−1e−OSᵀOᵀT ) = κ−1O ( OᵀU−1O − e−ST (OᵀU−1O)e−ST ᵀ ) Oᵀ Interestingly, the matrix OᵀU−1O is also block diagonal, OᵀU−1O = Oᵀ [ η(1+β)σ2 2 (H + λI) −1 H 0 0 σ2H ] O = η(1+β)σ2 2 ρ1 ρ1+λ 0 0 σ2ρ1 . . . . . . . . . η(1+β)σ2 2 ρm ρm+λ 0 0 σ2ρm Thus, similar to the mean, we can simply consider the variance in each of the planes spanned by [qi 0] ᵀ and [0 qi] ᵀ. If we define the block matrices, Di = [ ησ2 2S(1−β) ρi ρi+λ 0 0 σ 2 S(1−β2)ρi ] Si = [ 0 1 − 2η(1+β) (ρi + λ) − 2(1−β) η(1+β) ] then the projected variance matrix in this plane simplifies as Var ([ qᵀi θ qᵀi v ]) = Di − e−SiTDie−SiT ᵀ Using the solution to a damped harmonic osccilator discussed previously, we can express the matrix exponential e−SiT explicitly in terms of γ = 1−βη(1+β) and ωi = √ 2 η(1+β) (ρi + λ). If we let αi =√ |γ2 − ω2i |, then the matrix exponential is e−Sit = e−γt [ cosh (αit) + γ αi sinh (αit) 1 αi sinh (αit) −ω 2 i αi sinh (αit) cosh (αit)− γαi sinh (αit) ] γ > ωi e−γt [ 1 + γt t −ω2i t 1− γt ] γ = ωi e−γt [ cos (αit) + γ αi sin (αit) 1 αi sin (αit) −ω 2 i αi sin (αit) cos (αit)− γαi sin (αit) ] γ < ωi G ANALYZING PROPERTIES OF THE STATIONARY SOLUTION Assuming the stationary solution is given by equation (??) we can solve for the expected value of the norm of the local displacement and gain some intuition for the expected value of the norm of global displacement. G.1 INSTANTANEOUS SPEED Ess [ ‖δk‖2 ] = Ess [ ‖θk+1 − θk‖2 ] = η2Ess [ ‖vk+1‖2 ] = η2tr ( Ess [ vk+1v ᵀ k+1 ]) = η2tr (Varss (vk+1) + Ess [vk+1] Ess [vk+1] ᵀ ) = η2tr ( κ−1U−1 ) = η2 S(1− β2) tr ( σ2H ) Note that this follows directly from the definition of δk in equation (1) and the mean and variance of the stationary solution in equation ( ??), as well as the follow-up derivation in appendix F. G.2 ANOMALOUS DIFFUSION Notice, that the global movement ∆t = θt−θ0 can be broken up into the sum of the local movements ∆t = ∑t i=1 δi, where δi = θi − θi−1. Applying this decomposition, Ess [ ‖∆t‖2 ] = Ess ∣∣∣∣∣ ∣∣∣∣∣ t∑ i=1 δi ∣∣∣∣∣ ∣∣∣∣∣ 2 = t∑ i=1 Ess [ ‖δi‖2 ] + t∑ i 6=j Ess [〈δi, δj〉] As we solved for previously, Ess [ ‖δi‖2 ] = η2Ess [ ‖vi‖2 ] = η2tr (Varss(vi)) = η2 S(1− β2) tr ( σ2H ) . By a similar simplification, we can express the second term in terms of the stationary crosscovariance, Ess [〈δi, δj〉] = η2Ess [〈vi, vj〉] = η2tr (Covss(vi, vj)) . 
Thus, to simplify this expression we just need to consider the velocity-velocity covariance $\mathrm{Cov}_{ss}(v_i, v_j)$. At stationarity, the cross-covariance for the system in phase space, $z_i = [\theta_i \ v_i]^\intercal$, is
$$\mathrm{Cov}_{ss}(z_i, z_j) = \kappa^{-1} U^{-1} e^{-A^\intercal |i-j|},$$
where $\kappa = S(1-\beta^2)$ and
$$U = \begin{bmatrix} \frac{2}{\eta(1+\beta)\sigma^2} H^{-1}(H+\lambda I) & 0 \\ 0 & \frac{1}{\sigma^2} H^{-1} \end{bmatrix}, \qquad A = \begin{bmatrix} 0 & -I \\ \frac{2}{\eta(1+\beta)}(H+\lambda I) & \frac{2(1-\beta)}{\eta(1+\beta)} I \end{bmatrix}.$$
As discussed when solving for the mean of the OU trajectory, the drift matrix $A$ can be block diagonalized as $A = OSO^\intercal$, where $O$ is the orthogonal matrix whose columns interleave the eigenvectors of the Hessian,
$$O = \begin{bmatrix} q_1 & 0 & \cdots & q_m & 0 \\ 0 & q_1 & \cdots & 0 & q_m \end{bmatrix},$$
and $S$ is block diagonal with $2\times 2$ blocks
$$S_k = \begin{bmatrix} 0 & -1 \\ \frac{2}{\eta(1+\beta)}(\rho_k + \lambda) & \frac{2(1-\beta)}{\eta(1+\beta)} \end{bmatrix}, \qquad k = 1, \dots, m.$$
Notice also that $O$ diagonalizes $U^{-1}$, in the sense that $\Lambda = O^\intercal U^{-1} O$ is block diagonal with $2\times 2$ blocks
$$\Lambda_k = \begin{bmatrix} \frac{\eta(1+\beta)\sigma^2}{2}\,\frac{\rho_k}{\rho_k+\lambda} & 0 \\ 0 & \sigma^2 \rho_k \end{bmatrix}.$$
Applying these decompositions, properties of matrix exponentials, and the cyclic invariance of the trace allows us to express the trace of the cross-covariance as
$$\mathrm{tr}\left(\mathrm{Cov}_{ss}(z_i, z_j)\right) = \kappa^{-1}\mathrm{tr}\!\left(U^{-1}e^{-A^\intercal|i-j|}\right) = \kappa^{-1}\mathrm{tr}\!\left(U^{-1}Oe^{-S^\intercal|i-j|}O^\intercal\right) = \kappa^{-1}\mathrm{tr}\!\left(\Lambda e^{-S^\intercal|i-j|}\right) = \kappa^{-1}\sum_{k=1}^{m}\mathrm{tr}\!\left(\Lambda_k e^{-S_k^\intercal|i-j|}\right),$$
where $\Lambda_k$ and $S_k$ are the blocks associated with each eigenvector of $H$. As solved for previously in the variance of the OU process, we can express the matrix exponential $e^{-S_k|i-j|}$ explicitly in terms of $\gamma = \frac{1-\beta}{\eta(1+\beta)}$ and $\omega_k = \sqrt{\frac{2}{\eta(1+\beta)}(\rho_k + \lambda)}$.
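As an informal sanity check of the instantaneous-speed formula from G.1, one can simulate the discrete updates directly, injecting Gaussian gradient noise with covariance Σ/S = σ²H/S (Assumptions 1 and 5) in place of actual minibatch sampling, and compare the measured mean ‖δk‖² against η²σ²tr(H)/(S(1−β²)). The sketch below is our own; all constants are arbitrary, and the agreement is only approximate since the formula comes from the continuous-time stationary solution.

```python
import numpy as np

rng = np.random.default_rng(0)
d, N = 10, 2000
X = rng.normal(size=(N, d))
theta_true = rng.normal(size=d)
Y = X @ theta_true + 0.5 * rng.normal(size=N)
H, b = X.T @ X / N, X.T @ Y / N

eta, beta, lam, S, sigma2 = 0.05, 0.9, 1e-3, 16, 0.25
noise_chol = np.linalg.cholesky(sigma2 * H / S)          # factor of the assumed gradient-noise covariance Sigma / S

theta = np.linalg.solve(H + lam * np.eye(d), b)          # start at the ridge solution mu
v = np.zeros(d)
speeds = []
for k in range(200_000):
    g = H @ theta - b + noise_chol @ rng.normal(size=d)  # g_B(theta): full gradient plus Gaussian noise
    v = beta * v - g - lam * theta
    theta = theta + eta * v
    if k > 20_000:                                       # discard burn-in before averaging
        speeds.append(eta**2 * v @ v)                    # ||delta_k||^2 = eta^2 ||v_{k+1}||^2

print(np.mean(speeds))                                           # measured mean squared speed
print(eta**2 / (S * (1 - beta**2)) * sigma2 * np.trace(H))       # prediction of the G.1 formula
```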
1. What is the focus of the paper in terms of the research question or problem addressed? 2. What are the strengths of the proposed approach, particularly in terms of its theoretical foundation and experimental support? 3. What are the limitations of the paper, specifically regarding the scope and applicability of the theoretical results? 4. How does the reviewer assess the clarity and impact of the paper's contributions and findings? 5. Are there any specific concerns or misunderstandings regarding certain claims made in the paper that the reviewer would like to see addressed?
Summary Of The Paper Review
Summary Of The Paper This paper derives a continuous model for SGD in the linear regression setting to build understanding of some interesting phenomena of deep neural networks trained with SGD. Review Strength The theoretical results are solid and well-motivated; the experiments are thorough and consistent with the theoretical results; the contributions of this work are carefully clarified. Weakness The only pity is that all the theoretical results are derived from a linear regression model. The impact of this work would increase significantly if these interesting theoretical results could be derived in more general cases. Further concerns I notice some disruptive claims are raised in the last paragraph. Could you elaborate on them in order to raise my score? The following two concern me the most: “the network stays within a local region” is wrong; "Finally, the intuition that faster learning rates would lead to faster global displacement in parameter space is also wrong; instead induced velocity anti-correlations lead to slower global displacement"
ICLR
Title Rethinking the limiting dynamics of SGD: modified loss, phase space oscillations, and anomalous diffusion Abstract In this work we explore the limiting dynamics of deep neural networks trained with stochastic gradient descent (SGD). As observed previously, long after performance has converged, networks continue to move through parameter space by a process of anomalous diffusion in which distance travelled grows as a power law in the number of gradient updates with a nontrivial exponent. We reveal an intricate interaction between the hyperparameters of optimization, the structure in the gradient noise, and the Hessian matrix at the end of training that explains this anomalous diffusion. To build this understanding, we first derive a continuoustime model for SGD with finite learning rates and batch sizes as an underdamped Langevin equation. We study this equation in the setting of linear regression, where we can derive exact, analytic expressions for the phase space dynamics of the parameters and their instantaneous velocities from initialization to stationarity. Using the Fokker-Planck equation, we show that the key ingredient driving these dynamics is not the original training loss, but rather the combination of a modified loss, which implicitly regularizes the velocity, and probability currents, which cause oscillations in phase space. We identify qualitative and quantitative predictions of this theory in the dynamics of a ResNet-18 model trained on ImageNet. Through the lens of statistical physics, we uncover a mechanistic origin for the anomalous limiting dynamics of deep neural networks trained with SGD. 1 INTRODUCTION Deep neural networks have demonstrated remarkable generalization across a variety of datasets and tasks. Essential to their success has been a collection of good practices on how to train these models with stochastic gradient descent (SGD). Yet, despite their importance, these practices are mainly based on heuristic arguments and trial and error search. Without a general theory connecting the hyperparameters of optimization, the architecture of the network, and the geometry of the dataset, theory-driven design of deep learning systems is impossible. Existing theoretical works studying this interaction have leveraged the random structure of neural networks at initialization [1, 2, 3] and in their infinite width limits in order to study their dynamics [4, 5, 6, 7, 8]. Here we take a different approach and study the training dynamics of pre-trained networks that are ready to be used for inference. By leveraging the mathematical structures found at the end of training, we uncover an intricate interaction between the hyperparameters of optimization, the structure in the gradient noise, and the Hessian matrix that corroborates previously identified empirical behavior such as anomalous limiting dynamics. Not only is understanding the limiting dynamics of SGD a critical stepping stone to building a complete theory for the learning dynamics of neural networks, but recently there have been a series of works demonstrating that the performance of pre-trained networks can be improved through averaging and ensembling [9, 10, 11]. Combining empirical exploration and theoretical tools from statistical physics, we identify and uncover a mechanistic explanation for the limiting dynamics of neural networks trained with SGD. 2 DIFFUSIVE BEHAVIOR IN THE LIMITING DYNAMICS OF SGD A network that has converged in performance will continue to move through parameter space [12, 13, 14, 15]. 
To demonstrate this behavior, we resume training of pre-trained convolutional networks while tracking the network trajectory through parameter space. Let θ∗ ∈ Rm be the parameter vector for a pre-trained network and θk ∈ Rm be the parameter vector after k steps of resumed training. We track two metrics of the training trajectory, namely the local parameter displacement δk between consecutive steps, and the global displacement ∆k after k steps from the pre-trained initialization: δk = θk − θk−1, ∆k = θk − θ∗. (1) As shown in Fig. 1, neither of these differences converges to zero across a variety of architectures, indicating that despite performance convergence, the networks continue to move through parameter space, both locally and globally. The squared norm of the local displacement ‖δk‖₂² remains near a constant value, indicating the network is essentially moving at a constant instantaneous speed. This observation is quite similar to the “equilibrium” phenomenon or “constant angular update” observed in Li et al. [17] and Wan et al. [13] respectively. However, these works only studied the displacement for parameters immediately preceding a normalization layer. The constant instantaneous speed behavior we observe holds for all parameters in the model and is even present in models without normalization layers. While the squared norm of the local displacement is essentially constant, the squared norm of the global displacement ‖∆k‖₂² is monotonically growing for all networks, implying that even once trained, the network continues to diverge from where it has been. Indeed, Fig. 1 indicates a power law relationship between global displacement and number of steps, given by ‖∆k‖₂² ∝ kᶜ. As we’ll see in section 8, this relationship is indicative of anomalous diffusion, where c corresponds to the anomalous diffusion exponent. Standard Brownian motion corresponds to c = 1. Similar observations were made by Baity-Jesi et al. [14], who noticed distinct phases of the training trajectory evident in the dynamics of the global displacement, and Chen et al. [15], who found that the exponent of diffusion changes through the course of training. A parallel observation is given by Hoffer et al. [18] for the beginning of training, where they measure the global displacement from the initialization of an untrained network and observe a rate ∝ log(k), a form of ultra-slow diffusion. These empirical observations raise the natural questions: where is the network moving to, and why? To answer these questions we will build a diffusion-based theory of SGD, study these dynamics in the setting of linear regression, and use lessons learned in this fundamental setting to understand the limiting dynamics of neural networks.
3 RELATED WORK There is a long line of literature studying, both theoretically and empirically, the learning dynamics of deep neural networks trained with SGD. Our analysis and experiments build upon this literature. Continuous models for SGD. Many works consider how to improve the classic gradient flow model for SGD to more realistically reflect momentum [19], discretization due to finite learning rates [20, 21], and stochasticity due to random batches [22, 23]. One line of work has studied the dynamics of networks in their infinite width limits through dynamical mean field theory [24, 25, 26, 27], while a different approach has used stochastic differential equations (SDEs) to model SGD directly, the approach we take in this work. However, recently, the validity of this approach has been questioned.
The main argument, as nicely explained in Yaida [28], is that most SDE approximations simultaneously assume that ∆t → 0+, while maintaining that the learning rate η = ∆t is finite. The works Simsekli et al. [29] and Li et al. [30] have questioned the correctness of the using the central limit theorem (CLT) to model the gradient noise as Gaussian, arguing respectively that the heavy-tailed structure in the gradient noise and the weak dependence between batches leads the CLT to break down. In our work, we maintain the CLT assumption holds, which we discuss fur- ther in appendix A, but importantly we avoid the pitfalls of many previous SDE approximations by simultaneously modeling the effect of finite learning rates and stochasticity. Limiting dynamics. A series of works have applied SDE models of SGD to study the limiting dynamics of neural networks. In the seminal work by Mandt et al. [31], the limiting dynamics were modeled with a multivariate Ornstein-Uhlenbeck process by combining a first-order SDE model for SGD with assumptions on the geometry of the loss and covariance matrix for the gradient noise. This analysis was extended by Jastrzębski et al. [12] through additional assumptions on the covariance matrix to gain tractable insights and applied by Ali et al. [32] to the simpler setting of linear regression, which has a quadratic loss. A different approach was taken by Chaudhari and Soatto [33], which did not formulate the dynamics as an OU process, nor assume directly a structure on the loss or gradient noise. Rather, this analysis studied the same first-order SDE via the Fokker-Planck equation to propose the existence of a modified loss and probability currents driving the limiting dynamics, but did not provide explicit expressions. Our analysis deepens and combines ideas from all these works, where our key insight is to lift the dynamics into phase space. By studying the dynamics of the parameters and their velocities, and by applying the analysis first in the setting of linear regression where assumptions are provably true, we are able to identify analytic expressions and explicit insights which lead to concrete predictions and testable hypothesis. Stationary dynamics. A different line of work avoids modeling the limiting dynamics of SGD with an SDE and instead chooses to leverage the property of stationarity. These works [28, 34, 35, 36] assume that eventually the probability distribution governing the model parameters reaches stationarity such that the discrete SGD process is simply sampling from this distribution. Yaida [28] used this approach to derive fluctuation-dissipation relations that link measurable quantities of the parameters and hyperparameters of SGD. Liu et al. [35] used this approach to derive properties for the stationary distribution of SGD with a quadratic loss. Similar to our analysis, this work identifies that the stationary distribution for the parameters reflects a modified loss function dependent on the relationship between the covariance matrix of the gradient noise and the Hessian matrix for the original loss. Empirical exploration. Another set of works analyzing the limiting dynamics of SGD has taken a purely empirical approach. Building on the intuition that flat minima generalize better than sharp minima, Keskar et al. [37] demonstrated empirically that the hyperparameters of optimization influence the eigenvalue spectrum of the Hessian matrix at the end of training. 
Many subsequent works have studied the Hessian eigenspectrum during and at the end of training. Jastrzębski et al. [38], Cohen et al. [39] studied the dynamics of the top eigenvalues during training. Sagun et al. [40], Papyan [41], Ghorbani et al. [42] demonstrated the spectrum has a bulk of values near zero plus a small number of larger outliers. Gur-Ari et al. [43] demonstrated that the learning dynamics are constrained to the subspace spanned by the top eigenvectors, but found no special properties of the dynamics within this subspace. In our work we also determine that the top eigensubspace of the Hessian plays a crucial role in the limiting dynamics and by projecting the dynamics into this subspace in phase space, we see that the motion is not random, but consists of incoherent oscillations leading to anomalous diffusion. 4 MODELING SGD AS AN UNDERDAMPED LANGEVIN EQUATION Following the route of previous works [31, 12, 33] studying the limiting dynamics of neural networks, we first seek to model SGD as a continuous stochastic process. We consider a network parameterized by θ ∈ Rm, a training dataset {x1, . . . , xN} of size N , and a training loss L(θ) = 1N ∑N i=1 `(θ, xi) with corresponding gradient g(θ) = ∂L ∂θ . The state of the network at the kth step of training is defined by the position vector θk and velocity vector vk of the same dimension. The gradient descent update with learning rate η, momentum β, and weight decayλ is given by vk+1 = βvk − g(θk)− λθk, θk+1 = θk + ηvk+1, (2) where we initialize the network such that v0 = 0 and θ0 is the parameter initialization. In order to understand the dynamics of the network through position and velocity space, which we will refer to as phase space, we express these discrete recursive equations as the discretization of some unknown ordinary differential equation (ODE), sometimes referred to as a modified equation as in [44, 20]. While this ODE models the gradient descent process even at finite learning rates, it fails to account for the stochasticity introduced by choosing a random batch B of size S drawn uniformly from the set of N training points. This sampling yields the stochastic gradient gB(θ) = 1S ∑ i∈B∇`(θ, xi). To model this effect, we make the following assumption: Assumption 1 (CLT). We assume the batch gradient is a noisy version of the true gradient such that gB(θ)− g(θ) is a Gaussian random variable with mean 0 and covariance 1SΣ(θ). Incorporating this model of stochastic gradients into the previous finite difference equation and applying the stochastic counterparts to Euler discretizations, results in the standard drift-diffusion stochastic differential equation (SDE), referred to as an underdamped Langevin equation, d [ θ v ] = [ v − 2η(1+β) (g(θ) + λθ + (1− β)v) ] dt+ [ 0 0 0 2√ ηS(1+β) √ Σ(θ) ] dWt, (3) where Wt is a standard Wiener process. This is the continuous model we will study in this work: Assumption 2 (SDE). We assume the underdamped Langevin equation (3) accurately models the trajectory of the network driven by SGD through phase space such that θ(ηk) ≈ θk and v(ηk) ≈ vk. See appendix A for further discussion on the nuances of modeling SGD with an SDE. 5 LINEAR REGRESSION WITH SGD IS AN ORNSTEIN-UHLENBECK PROCESS Equipped with a model for SGD, we seek to understand its dynamics in the fundamental setting of linear regression, one of the few cases where we have a complete model for the interaction of the dataset, architecture, and optimizer. 
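For concreteness, the update (2) is heavy-ball SGD where the weight decay term λθk is added to the stochastic gradient. Below is a minimal sketch of the recursion, written by us; the toy quadratic objective, the noise model, and all constants are made up for illustration and anticipate the least squares setting formalized next.

```python
import numpy as np

def sgd_momentum_step(theta, v, grad_fn, eta, beta, lam):
    """One step of update rule (2): v_{k+1} = beta*v_k - g_B(theta_k) - lam*theta_k,
    theta_{k+1} = theta_k + eta*v_{k+1}. grad_fn supplies the (possibly stochastic) gradient."""
    v = beta * v - grad_fn(theta) - lam * theta
    theta = theta + eta * v
    return theta, v

# Hypothetical usage on a toy quadratic loss 0.5*theta^T H theta - b^T theta with additive noise
# standing in for minibatch gradient noise.
rng = np.random.default_rng(0)
H = np.diag([10.0, 1.0, 0.1])
b = np.array([1.0, 1.0, 1.0])
noisy_grad = lambda th: H @ th - b + 0.05 * rng.normal(size=3)   # g_B(theta) = g(theta) + noise
theta, v = np.zeros(3), np.zeros(3)
for _ in range(2000):
    theta, v = sgd_momentum_step(theta, v, noisy_grad, eta=0.05, beta=0.9, lam=1e-3)
print(theta, np.linalg.solve(H + 1e-3 * np.eye(3), b))           # hovers near the regularized minimum
```

With a small enough learning rate the iterates settle into a noisy ball around the regularized minimum, which is exactly the limiting regime the continuous model is meant to capture.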
Let X ∈ RN×d be the input data, Y ∈ RN be the output labels, and θ ∈ Rd be our vector of regression coefficients. The least squares loss is the convex quadratic lossL(θ) = 12N ‖Y −Xθ‖2 with gradient g(θ) = Hθ−b, whereH = X ᵀX N and b = XᵀY N . Plugging this expression for the gradient into the underdamped Langevin equation (3), and rearranging terms, results in the multivariate Ornstein-Uhlenbeck (OU) process, d [ θt vt ] = − [ 0 −I 2 η(1+β) (H + λI) 2(1−β) η(1+β)I ] ︸ ︷︷ ︸ A ([ θt vt ] − [ µ 0 ]) dt+ √ 2κ−1 √√√√√ [ 0 0 0 2(1−β)η(1+β)Σ(θ) ] ︸ ︷︷ ︸ D dWt, (4) where A and D are the drift and diffusion matrices respectively, κ = S(1− β2) is an inverse temperature constant, and µ = (H + λI)−1b is the ridge regression solution. The solution to an OU process is a Gaussian process. By solving for the temporal dynamics of the first and second moments of the process, we can obtain an analytic expression for the trajectory at any time t. In particular, we can decompose the trajectory as the sum of a deterministic and stochastic component defined by the first and second moments respectively. Deterministic component. Using the form of A we can decompose the expectation as a sum of harmonic oscillators in the eigenbasis {q1, . . . , qm} of the Hessian, E [[ θt vt ]] = [ µ 0 ] + m∑ i=1 ( ai(t) [ qi 0 ] + bi(t) [ 0 qi ]) . (5) Here the coefficients ai(t) and bi(t) depend on the optimization hyperparameters η, β, λ, S and the respective eigenvalue of the Hessian ρi as further explained in appendix F. We verify this expression nearly perfectly matches empirics on complex datasets under various hyperparameter settings as shown in Fig. 2. Stochastic component. The cross-covariance of the process between two points in time t ≤ s, is Cov ([ θt vt ] , [ θs vs ]) =κ−1 ( B−e−AtBe−Aᵀt ) eA ᵀ(t−s), (6) where B solves the Lyapunov equation AB +BAᵀ = 2D. In order to gain analytic expressions for B in terms of the optimization hyperparameters, eigendecomposition of the Hessian, and covariance of the gradient noise, we must introduce the following assumption: Assumption 3 (Simultaneously Diagonalizable). We assume the covariance of the gradient noise is spatially independent Σ(θ) = Σ and commutes with the Hessian HΣ = ΣH , therefore sharing a common eigenbasis. 6 UNDERSTANDING STATIONARITY VIA THE FOKKER-PLANCK EQUATION The OU process is unique in that it is one of the few SDEs which we can solve exactly. As shown in section 5, we were able to derive exact expressions for the dynamics of linear regression trained with SGD from initialization to stationarity by simply solving for the first and second moments. While the expression for the first moment provides an understanding of the intricate oscillatory relationship in the deterministic component of the process, the second moment, driving the stochastic component, is much more opaque. An alternative route to solving the OU process that potentially provides more insight is the Fokker-Planck equation. The Fokker-Planck (FP) equation is a PDE describing the time evolution for the probability distribution of a particle governed by Langevin dynamics. For an arbitrary potential Φ and diffusion matrix D, the Fokker-Planck equation (under an Itô integration prescription) is ∂tp = ∇ · ( ∇Φp+∇ · ( κ−1Dp ))︸ ︷︷ ︸ −J , (7) where p represents the time-dependent probability distribution, and J is a vector field commonly referred to as the probability current. 
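The deterministic component (5) can also be evaluated directly from the drift matrix, since the mean deviation from [µ, 0] decays as e^{−At} applied to the initial deviation. The following sketch is ours, using synthetic anisotropic data and arbitrary hyperparameters; it simply shows the damped approach of E[θt] to the ridge solution µ.

```python
import numpy as np
from scipy.linalg import expm

# Mean trajectory of the OU process (4): E[z_t] = [mu; 0] + exp(-A t) (z_0 - [mu; 0]).
rng = np.random.default_rng(0)
N, d = 200, 3
X = rng.normal(size=(N, d)) * np.array([3.0, 1.0, 0.3])      # anisotropic features (synthetic)
Y = X @ rng.normal(size=d) + 0.1 * rng.normal(size=N)
H, b = X.T @ X / N, X.T @ Y / N
eta, beta, lam = 0.05, 0.9, 1e-3                              # arbitrary hyperparameters
mu = np.linalg.solve(H + lam * np.eye(d), b)                  # ridge regression solution

A = np.block([[np.zeros((d, d)), -np.eye(d)],
              [2 / (eta * (1 + beta)) * (H + lam * np.eye(d)),
               2 * (1 - beta) / (eta * (1 + beta)) * np.eye(d)]])
z0 = np.concatenate([np.zeros(d) - mu, np.zeros(d)])          # deviation for theta_0 = 0, v_0 = 0

for t in [0.0, 0.5, 2.0, 10.0]:
    dev = expm(-A * t) @ z0
    print(t, np.linalg.norm(dev[:d]))                         # ||E[theta_t] - mu|| decays (with oscillations)
```

In the eigenbasis of H each pair of coefficients (ai(t), bi(t)) behaves as an independent damped oscillator, which is what equation (5) expresses.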
The FP equation is especially useful for explicitly solving for the stationary solution, assuming one exists, of the Langevin dynamics. The stationary solution pss by definition obeys ∂tpss = 0 or equivalently ∇ · Jss = 0. From this second definition we see that there are two distinct settings of stationarity: detailed balance when Jss = 0 everywhere, or broken detailed balance when ∇ · Jss = 0 and Jss 6= 0. For a general OU process, the potential is a convex quadratic function Φ(x) = xᵀAx defined by the drift matrix A. When the diffusion matrix is isotropic (D ∝ I) and spatially independent (∇ · D = 0) the resulting stationary solution is a Gibbs distribution pss(x) ∝ e−κΦ(x) determined by the original loss Φ(x) and is in detailed balance. Lesser known properties of the OU process arise when the diffusion matrix is anisotropic or spatially dependent [45, 46]. In this setting the solution is still a Gaussian process, but the stationary solution, if it exists, is no longer defined by the Gibbs distribution of the original loss Φ(x), but actually a modified loss Ψ(x). Furthermore, the stationary solution may be in broken detailed balance leading to a non-zero probability current Jss(x). Depending on the relationship between the drift matrix A and the diffusion matrix D the resulting dynamics of the OU process can have very nontrivial behavior. In the setting of linear regression, anisotropy in the data distribution will lead to anisotropy in the gradient noise and thus an anisotropic diffusion matrix. This implies that for most datasets we should expect that the SGD trajectory is not driven by the original least squares loss, but by a modified loss and converges to a stationary solution with broken detailed balance, as predicted by Chaudhari and Soatto [33]. Using the explicit expressions for the drift A and diffusion D matrices we can compute analytically the modified loss and stationary probability current, Ψ(θ, v) = ([ θ v ] − [ µ 0 ])ᵀ( U 2 )([ θ v ] − [ µ 0 ]) , Jss(θ, v) = −QU ([ θ v ] − [ µ 0 ]) pss, (8) where Q is a skew-symmetric matrix and U is a positive definite matrix defined as, Q = [ 0 −Σ(θ) Σ(θ) 0 ] , U = [ 2 η(1+β)Σ(θ) −1 (H + λI) 0 0 Σ(θ)−1 ] . (9) These new fundamental matrices, Q and U , relate to the original drift A and diffusion D matrices through the unique decomposition A = (D + Q)U , introduced by Ao [47] and Kwon et al. [48]. Using this decomposition we can easily show that B = U−1 solves the Lyapunov equation and indeed the stationary solution pss is the Gibbs distribution defined by the modified loss Ψ(θ, v) in equation (8). Further, the stationary cross-covariance solved in section 5 reflects the oscillatory dynamics introduced by the stationary probability currents Jss(θ, v) in equation (8). Taken together, we gain the intuition that the limiting dynamics of SGD in linear regression are driven by a modified loss subject to oscillatory probability currents. 7 EVIDENCE OF A MODIFIED LOSS AND OSCILLATIONS IN DEEP LEARNING Does the theory derived in the linear regression setting (sections 5, 6) help explain the empirical phenomena observed in the non-linear setting of deep neural networks (section 2)? In order for the theory built in the previous sections to apply to the limiting dynamics of neural networks, we must introduce simplifying assumptions on the loss landscape and gradient noise at the end of training: Assumption 4 (Quadratic Loss). 
We assume that at the end of training the loss for a neural network can be approximated by the quadratic loss L(θ) = (θ − µ)ᵀ ( H 2 ) (θ − µ), where H 0 is the training loss Hessian and µ is some unknown mean vector, corresponding to a local minimum. Assumption 5 (Covariance Structure). We assume the covariance of the gradient noise is proportional to the Hessian of the quadratic loss Σ(θ) = σ2H where σ ∈ R+ is some unknown scalar. Under these simplifications, then the expressions derived in the linear regression setting would apply to the limiting dynamics of deep neural networks and depend only on quantities that we can easily estimate empirically. Of course, these simplifications are quite strong, but without arguing their theoretical validity, we can empirically test their qualitative implications: (1) a modified isotropic loss driving the limiting dynamics through parameter space, (2) implicit regularization of the velocity trajectory, and (3) oscillatory phase space dynamics determined by the Hessian eigen-structure. Modified loss. As discussed in section 6, due to the anisotropy of the diffusion matrix, the loss landscape driving the dynamics at the end of training is not the original training loss L(θ), but a modified loss Ψ(θ, v) in phase space. As shown in equation (8), the modified loss decouples into a term Ψθ that only depends on the parameters θ and a term Ψv that only depends on the velocities v. Under assumption 5, the parameter dependent component is proportional to the convex quadratic, Ψθ ∝ (θ − µ)ᵀ ( H−1(H + λI) η(1 + β) ) (θ − µ) . (10) This quadratic function has the same mean µ as the training loss, but a different curvature. Using this expression, notice that when λ ≈ 0, the modified loss is isotropic in the column space of H , regardless of what the nonzero eigenspectrum of H is. This striking prediction suggests that no matter how anisotropic the original training loss – as reflected by poor conditioning of the Hessian eigenspectrum – the training trajectory of the network will behave isotropically, since it is driven not by the original anisotropic loss, but a modified isotropic loss. We test this prediction by studying the limiting dynamics of a pre-trained ResNet-18 model with batch normalization that we continue to train on ImageNet according to the last setting of its hyperparameters [49]. Let θ∗ represent the initial pre-trained parameters of the network, depicted with the white dot in figures 3 and 4. We estimate1 the top thirty eigenvectors q1, . . . , q30 of the Hessian matrix H∗ evaluated at θ∗ and project the limiting trajectory for the parameters onto the plane spanned by the top q1 and bottom q30 eigenvectors to maximize the illustrated anisotropy with our estimates. We sample the train and test loss in this subspace for a region around the projected trajectory. Additionally, using the hyperparameters of the optimization, the eigenvalues ρ1 and ρ30, and the estimate for the mean µ = θ∗−H−1∗ g∗ (g∗ is the gradient evaluated at θ∗), we also sample from the modified loss equation (10) in the same region. Figure 3 shows the projected parameter trajectory on the sampled train, test and modified losses. Contour lines of both the train and test loss exhibit anisotropic structure, with sharper curvature along eigenvector q1 compared to eigenvector q30, as expected. However, as predicted, the trajectory appears to cover both directions equally. 
This striking isotropy of the trajectory within a highly anisotropic slice of the loss landscape indicates qualitatively that the trajectory evolves in a modified isotropic loss landscape. Implicit velocity regularization. A second qualitative prediction of the theory is that the velocity is regulated by the inverse Hessian of the training loss. Of course there are no explicit terms in either the train or test losses that depend on the velocity. Yet, the modified loss contains a component, Ψv ∝ vᵀH−1v, that only depends on the velocities This additional term can be understood as a form of implicit regularization on the velocity trajectory. Indeed, when we project the velocity trajectory onto the plane spanned by the q1 and q30 eigenvectors, as shown in Fig. 4, we see that the trajectory closely resembles the curvature of the inverse Hessian H−1. The modified loss is effectively penalizing SGD for moving in eigenvectors of the Hessian with small eigenvalues. A similar qualitative effect was recently proposed by Barrett and Dherin [21] as a consequence of the discretization error due to finite learning rates. Phase space oscillations. A final implication of the theory is that at stationarity the network is in broken detailed balance leading to non-zero probability currents flowing through phase space: Jss(θ, v) = [ v − 2η(1+β) (H + λI) (θ − µ) ] pss. (11) These probability currents encourage oscillatory dynamics in the phase space planes characterized by the eigenvectors of the Hessian, at rates proportional to their eigenvalues. We consider the same projected trajectory of the ResNet-18 model visualized in figures 3 and 4, but plot the trajectory in phase space for the two eigenvectors q1 and q30 separately. Shown in Fig. 5, we see that both trajectories look like noisy clockwise rotations. Qualitatively, the trajectories for the different eigenvectors appear to be rotating at different rates. The integral curves of the stationary probability current are one-dimensional paths confined to level sets of the modified loss. These paths might cross themselves, in which case they are limit cycles, or they could cover the entire surface of the level sets, in which case they are space-filling curves. This distinction depends on the relative frequencies of the oscillations, as determined by the pairwise 1To estimate the eigenvectors of H∗ we use subspace iteration, and limit ourselves to 30 eigenvectors to constrain computation time. See appendix H for details. ratios of the eigenvalues of the Hessian. For real-world datasets, with a large spectrum of incommensurate frequencies, we expect to be in the latter setting, thus contradicting the suggestion that SGD in deep networks converges to limit cycles, as claimed in Chaudhari and Soatto [33]. 8 UNDERSTANDING THE DIFFUSIVE BEHAVIOUR OF THE LIMITING DYNAMICS Taken together the empirical results shown in section 7 indicate that many of the same qualitative behaviors of SGD identified theoretically for linear regression are evident in the limiting dynamics of neural networks. Can this theory quantitatively explain the results we identified in section 2? Constant instantaneous speed. As noted in section 2, we observed that at the end of training, across various architectures, the squared norm of the local displacement ‖δt‖22 remains essentially constant. Assuming the limiting dynamics are described by the stationary solution the expectation of the local displacement is Ess [ ‖δt‖2 ] = η2 S(1− β2)σ 2tr (H) , (12) as derived in appendix G. 
We cannot test this prediction directly as we do not know σ2 and computing tr(H) is computationally prohibitive. However, we can estimate σ2tr(H) by resuming training for a model, measuring the average ‖δt‖2, and then inverting equation (12). Using this single estimate, we find that for a sweep of models with varying hyperparameters, equation (12) accurately predicts their instantaneous speed. Indeed, Fig. 6 shows an exact match between the empirics and theory, which strongly suggests that despite changing hyperparameters at the end of training, the model remains in the same quadratic basin. Exponent of anomalous diffusion. The expected value for the global displacement under the stationary solution can also be analytically expressed in terms of the optimization hyperparameters and the eigendecomposition of the Hessian as, Ess [ ‖∆t‖2 ] = η2 S(1− β2)σ 2 ( tr (H) t+ 2t t∑ k=1 ( 1− k t ) m∑ l=1 ρlCl(k) ) , (13) where Cl(k) is a trigonometric function describing the velocity of a harmonic oscillator with damping ratio ζl = (1 − β)/ √ 2η(1 + β) (pl + λ), see appendix G for details. As shown empirically in section 2, the squared norm ‖∆t‖2 monotonically increases as a power law in the number of steps, suggesting its expectation is proportional to tc for some unknown, constant c. The exponent c determines the regime of diffusion for the process. When c = 1, the process corresponds to standard Brownian diffusion. For c > 1 or c < 1 the process corresponds to anomalous super-diffusion or sub-diffusion respectively. Unfortunately, it is not immediately clear how to extract the explicit exponent c from equation (13). However, by exploring the functional form of Cl(k) and its relationship to the hyperparameters of optimization through the damping ratio ζl, we can determine overall trends in the diffusion exponent c. Akin to how the exponent c determines the regime of diffusion, the damping ratio ζl determines the regime for the harmonic oscillator describing the stationary velocity-velocity correlation in the lth eigenvector of the Hessian. When ζl = 1, the oscillator is critically damped implying the velocity correlations converge to zero as quickly as possible. In the extreme setting of Cl(k) = 0 for all l, k, then equation (13) simplifies to standard Brownian diffusion, Ess [ ‖∆t‖2 ] ∝ t. When ζl > 1, the oscillator is overdamped implying the velocity correlations dampen slowly and remain positive even over long temporal lags. Such long lasting temporal correlations in velocity lead to faster global displacement. Indeed, in the extreme setting of Cl(k) = 1 for all l, k, then equation (13) simplifies to a form of anomalous super-diffusion, Ess [ ‖∆t‖2 ] ∝ t2. When ζl < 1, the oscillator is underdamped implying the velocity correlations will oscillate quickly between positive and negative values. Indeed, the only way equation (13) could describe anomalous sub-diffusion is if Cl(k) took on negative values for certain l, k. Using the same sweep of models described previously, we can empirically confirm that the optimization hyperparameters each influence the diffusion exponent c. As shown in Fig. 6, the learning rate, batch size, and momentum can each independently drive the exponent c into different regimes of anomalous diffusion. Notice how the influence of the learning rate and momentum on the diffusion exponent c closely resembles their respective influences on the damping ratio ζl. 
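The quantitative relationships used in this section are straightforward to turn into small helper functions. The sketch below is ours and all numerical values are placeholders: it inverts equation (12) to estimate σ²tr(H) from one measured speed, predicts the speed for new hyperparameters, computes the damping ratio ζl, and fits the diffusion exponent c from a measured ‖∆k‖² curve by least squares in log-log space.

```python
import numpy as np

def estimate_sigma2_trH(mean_delta_sq, eta, beta, S):
    """Invert eq. (12): recover sigma^2 * tr(H) from the measured mean ||delta_k||^2."""
    return mean_delta_sq * S * (1 - beta**2) / eta**2

def predicted_speed(sigma2_trH, eta, beta, S):
    """Forward prediction of eq. (12) for a new hyperparameter setting."""
    return eta**2 / (S * (1 - beta**2)) * sigma2_trH

def damping_ratio(rho, eta, beta, lam=0.0):
    """zeta_l = (1 - beta) / sqrt(2 * eta * (1 + beta) * (rho_l + lambda))."""
    return (1 - beta) / np.sqrt(2 * eta * (1 + beta) * (rho + lam))

def diffusion_exponent(global_disp_sq):
    """Fit ||Delta_k||^2 proportional to k^c by linear regression in log-log space."""
    k = np.arange(1, len(global_disp_sq) + 1)
    c, _ = np.polyfit(np.log(k), np.log(global_disp_sq), 1)
    return c

# Placeholder usage: calibrate on one measured run, then vary the hyperparameters.
s2trH = estimate_sigma2_trH(mean_delta_sq=3e-4, eta=0.1, beta=0.9, S=256)
print(predicted_speed(s2trH, eta=0.1, beta=0.9, S=128))          # halving S doubles the predicted speed
print(damping_ratio(rho=np.array([1e-2, 1.0, 50.0]), eta=0.1, beta=0.9))

# Sanity check of the exponent fit on a synthetic random walk (c should come out close to 1).
rng = np.random.default_rng(0)
walk = np.cumsum(rng.normal(size=(20_000, 50)), axis=0)
print(diffusion_exponent(np.sum(walk**2, axis=1)))
```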
Interestingly, a larger learning rate leads to underdamped oscillations, and the resultant temporal velocities’ anti-correlations reduce the exponent of anomalous diffusion. Thus contrary to intuition, a larger learning rate actually leads to slower global transport in parameter space. The batch size on the other hand, has no influence on the damping ratio, but leads to an interesting, non-monotonic influence on the diffusion exponent. Overall, the hyperparameters of optimization and eigenspectrum of the Hessian all conspire to govern the degree of anomalous diffusion at the end of training. 9 DISCUSSION Through combined empirics and theory based on statistical physics, we uncovered an intricate interplay between the optimization hyperparameters, structure in the gradient noise, and the Hessian matrix at the end of training. Significance. The significance of our work lies in (1) the identification/verification of multiple empirical phenomena (constant instantaneous speed, anomalous diffusion in global displacement, isotropic parameter exploration despite anisotopic loss, velocity regularization, and slower global parameter exploration with faster learning rates) present in the limiting dynamics of deep neural networks, (2) the emphasis on studying the dynamics in velocity space in addition to parameter space, and (3) concrete quantitative as well as qualitative predictions of an SDE based theory that we empirically verified in deep networks trained on large scale datasets (indeed some of the above nontrivial phenomena were predictions of this theory). Of course, these contributions directly build upon a series of related works studying the immensely complex process of deep learning. To this end, we further clarify the originality of our contributions with respect to some relevant works. Originality. The empirical phenomena we present provide novel insight with respect to the works of Wan et al. [13], Hoffer et al. [18], and Chen et al. [15]. We observe that all parameters in the network (not just those with scale symmetry) move at a constant instantaneous speed at the end of training and diffuse anomalously at rates determined by the hyperparameters of optimization. In contrast to the work by Liu et al. [35], we modeled the entire SGD process as an OU process which allows us to provide insight into the transient dynamics and identify oscillations in parameter and velocity space. We build on the theoretical framework used by Chaudhari and Soatto [33] and provide explicit expressions for the limiting dynamics in the simplified linear regression setting and conclude that the oscillations present in the limiting dynamics are more likely to be space-filling curves (and not limit cycles) in deep learning due to many incommensurate oscillations. Overall, by identifying key phenomena, explaining them in a simpler setting, deriving predictions of new phenomena, and providing evidence for these predictions at scale, we are furthering the scientific study of deep learning. We hope our newly derived understanding of the limiting dynamics of SGD, and its dependence on various important hyperparameters like batch size, learning rate, and momentum, can serve as a basis for future work that can turn these insights into algorithmic gains. A MODELING SGD WITH AN SDE As explained in section 4, in order to understand the dynamics of stochastic gradient descent we build a continuous Langevin equation in phase space modeling the effect of discrete updates and stochastic batches simultaneously. 
A.1 MODELING DISCRETIZATION To model the discretization effect we assume that the system of update equations (2) is actually a discretization of some unknown ordinary differential equation. To uncover this ODE, we combine the two update equations in (2), by incorporating a previous time step θk−1, and rearrange into the form of a finite difference discretization, as shown in equation (??). Like all discretizations, the Euler discretizations introduce error terms proportional to the step size, which in this case is the learning rate η. Taylor expanding θk+1 and θk−1 around θk, its easy to show that both Euler discretizations introduce a second-order error term proportional to η2 θ̈. θk+1 − θk η = θ̇ + η 2 θ̈ +O(η2), θk − θk−1 η = θ̇ − η 2 θ̈ +O(η2). Notice how the momentum coefficient β ∈ [0, 1] regulates the amount of backward Euler incorporated into the discretization. When β = 0, we remove all backward Euler discretization leaving just the forward Euler discretization. When β = 1, we have equal amounts of backward Euler as forward Euler resulting in a central second-order discretization2 as noticed in [19]. A.2 MODELING STOCHASTICITY In order to model the effect of stochastic batches, we first model a batch gradient with the following assumption: Assumption 1 (CLT). We assume the batch gradient is a noisy version of the true gradient such that gB(θ)− g(θ) is a Gaussian random variable with mean 0 and covariance 1SΣ(θ). The two conditions needed for the CLT to hold are not exactly met in the setting of SGD. Independent and identically distributed. Generally we perform SGD by making a complete pass through the entire dataset before using a sample again which introduces a weak dependence between samples. While the covariance matrix without replacement more accurately models the dependence between samples within a batch, it fails to account for the dependence between batches. Finite variance. A different line of work has questioned the Gaussian assumption entirely because of the need for finite variance random variables. This work instead suggests using the generalized central limit theorem implying the noise would be a heavy-tailed α-stable random variable [29]. Thus, the previous assumption is implicitly assuming the i.i.d. and finite variance conditions apply for large enough datasets and small enough batches. Under the CLT assumption, we must also replace the Euler discretizations with Euler–Maruyama discretizations. For a general stochastic process, dXt = µdt+ σdWt, the Euler–Maruyama method extends the Euler method for ODEs to SDEs, resulting in the update equation Xk+1 = Xk + ∆tµ+√ ∆tσξ, where ξ ∼ N (0, 1). Notice, the key difference is that if the temporal step size is ∆t = η, then the noise is scaled by the square root √ η. In fact, the main argument against modeling SGD with an SDE, as nicely explained in Yaida [28], is that most SDE approximations simultaneously assume that ∆t → 0+, while maintaining that the square root of the learning rate √η is finite. However, by modeling the discretization and stochastic effect simultaneously we can avoid this argument, bringing us to our second assumption: Assumption 2 (SDE). We assume the underdamped Langevin equation (3) accurately models the trajectory of the network driven by SGD through phase space such that θ(ηk) ≈ θk and v(ηk) ≈ vk. This approach of modeling discretization and stochasticity simultaneously is called stochastic modified equations, as further explained in Li et al. [22]. 
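A generic Euler–Maruyama step is short enough to write out, and it makes the √∆t scaling of the noise explicit. Below is a small sketch of ours, checked on a one-dimensional OU process whose stationary variance s²/(2a) is known in closed form; the particular SDE and constants are placeholders rather than the paper's setting.

```python
import numpy as np

def euler_maruyama(mu, sigma, x0, dt, steps, seed=0):
    """Euler-Maruyama discretization of dX_t = mu(X_t) dt + sigma(X_t) dW_t:
    X_{k+1} = X_k + dt * mu(X_k) + sqrt(dt) * sigma(X_k) @ xi, with xi ~ N(0, I)."""
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    path = [x.copy()]
    for _ in range(steps):
        x = x + dt * mu(x) + np.sqrt(dt) * sigma(x) @ rng.normal(size=x.shape)
        path.append(x.copy())
    return np.array(path)

# 1-D Ornstein-Uhlenbeck check: dX = -a (X - m) dt + s dW has stationary variance s^2 / (2a).
a, m, s = 2.0, 1.0, 0.5
path = euler_maruyama(lambda x: -a * (x - m), lambda x: np.array([[s]]),
                      x0=[0.0], dt=1e-3, steps=200_000)
print(path[100_000:].mean(), path[100_000:].var(), s**2 / (2 * a))
```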
2The difference between a forward Euler and backward Euler discretization is a second-order central discretization, ( θk+1−θk η ) − ( θk−θk−1 η ) = η ( θk+1−2θk+θk−1 η2 ) = ηθ̈ +O(η2). B STRUCTURE IN THE COVARIANCE OF THE GRADIENT NOISE As we’ve mentioned before, SGD introduces highly structured noise into an optimization process, often assumed to be an essential ingredient for its ability to avoid local minima. Assumption 5 (Covariance Structure). We assume the covariance of the gradient noise is proportional to the Hessian of the quadratic loss Σ(θ) = σ2H where σ ∈ R+ is some unknown scalar. In the setting of linear regression, this is a very natural assumption. If we assume the classic generative model for linear regression data yi = x ᵀ i θ̄+σ where, θ̄ ∈ Rd is the true model and ∼ N (0, 1), then provably Σ(θ) ≈ σ2H . Proof. We can estimate the covariance as Σ(θ) ≈ 1N ∑N i=1 gig ᵀ i − ggᵀ. Near stationarity ggᵀ 1 N ∑N i=1 gig ᵀ i , and thus, Σ(θ) ≈ 1 N N∑ i=1 gig ᵀ i . Under the generative model yi = x ᵀ i θ̄ + σ where ∼ N (0, 1) and σ ∈ R+, then the gradient gi is gi = (x ᵀ i (θ − θ̄)− σ )xi, and the matrix gig ᵀ i is gig ᵀ i = (x ᵀ i (θ − θ̄)− σ )2(xixᵀi ). Assuming θ ≈ θ̄ at stationarity, then (xᵀi (θ − θ̄)− σ )2 ≈ σ2. Thus, Σ(θ) ≈ σ 2 N N∑ i=1 xix ᵀ i = σ2 N XᵀX = σ2H Also notice that weight decay is independent of the data or batch and thus simply shifts the gradient distribution, but leaves the covariance of the gradient noise unchanged. While the above analysis is in the linear regression setting, for deep neural networks it is reasonable to make the same assumption. See the appendix of Jastrzębski et al. [12] for a discussion on this assumption in the non-linear setting. Recent work by Ali et al. [32] also studies the dynamics of SGD (without momentum) in the setting of linear regression. This work, while studying the classic first-order stochastic differential equation, made a point to not introduce an assumption on the diffusion matrix. In particular, they make the point that even in the setting of linear regression, a constant covariance matrix will fail to capture the actual dynamics. To illustrate this point they consider the univariate responseless least squares problem, minimize θ∈R 1 2n n∑ i=1 (xiθ) 2. As they explain, the SGD update for this problem would be θk+1 = θk − η S (∑ i∈B xi ) θk = k∏ i=1 (1− η( 1S ∑ i∈B xi))θ0, from which they conclude for a small enough learning rate η, then with probability one θk → 0. They contrast this with the Ornstein-Uhlenbeck process given by a constant covariance matrix where while the mean for θk converges to zero its variance converges to a positive constant. So is this discrepancy evidence that an Ornstein-Uhlenbeck process with a constant covariance matrix fails to capture the updates of SGD? In many ways this problem is not a simple example, rather a pathological edge case. Consider the generative model that would give rise to this problem, y = 0x+ 0ξ = 0. In otherwords, the true model θ̄ = 0 and the standard deviation for the noise σ = 0. This would imply by the assumption used in our paper that there would be zero diffusion and the resulting SDE would simplify to a deterministic ODE that exponentially converges to zero. C A QUADRATIC LOSS AT THE END OF TRAINING Assumption 4 (Quadratic Loss). 
We assume that at the end of training the loss for a neural network can be approximated by the quadratic loss L(θ) = (θ − µ)ᵀ ( H 2 ) (θ − µ), where H 0 is the training loss Hessian and µ is some unknown mean vector, corresponding to a local minimum. This assumption has been amply used in previous works such as Mandt et al. [31], Jastrzębski et al. [12], and Poggio et al. [50]. Particularly, Mandt et al. [31] discuss how this assumption makes sense for smooth loss functions for which the stationary solution to the stochastic process reaches a deep local minimum from which it is difficult to escape. It is a well-studied fact, both empirically and theoretically, that the Hessian is low-rank near local minima as noted by Sagun et al. [51], and Kunin et al. [20]. This degeneracy results in flat directions of equal loss. Kunin et al. [20] discuss how differentiable symmetries, architectural features that keep the loss constant under certain weight transformations, give rise to these flat directions. Importantly, the Hessian and the covariance matrix share the same null space, and thus we can always restrict ourselves to the image space of the Hessian, where the drift and diffusion matrix will be full rank. Further discussion on the relationship between the Hessian and the covariance matrix can be found in Thomas et al. [52]. It is also a well known empirical fact that even at the end of training the Hessian can have negative eigenvalues [41]. This empirical observation is at odds with our assumption that the Hessian is positive semi-definite H 0. Further analysis is needed to alleviate this inconsistency. D SOLVING AN ORNSTEIN-UHLENBECK PROCESS WITH ANISOTROPIC NOISE We will study the multivariate Ornstein-Uhlenbeck process described by the stochastic differential equation dXt = A(µ−Xt)dt+ √ 2κ−1DdWt X0 = x0, (14) whereA ∈ Sm++ is a positive definite drift matrix, µ ∈ Rm is a mean vector, κ ∈ R+ is some positive constant, and D ∈ Sm++ is a positive definite diffusion matrix. This OU process is unique in that it is one of the few SDEs we can solve explicitly. We can derive an expression for XT as, XT = e −ATx0 + ( I − e−AT ) µ+ ∫ T 0 eA(t−T ) √ 2κ−1DdWt. (15) Proof. Consider the function f(t, x) = eAtx where eA is a matrix exponential. Then by Itô’s Lemma3 we can evaluate the derivative of f(t,Xt) as df(t,Xt) = ( AeAtXt + e AtA(µ−Xt) ) dt+ eAt √ 2κ−1DdWt = AeAtµdt+ eAt √ 2κ−1DdWt Integrating this expression from t = 0 to t = T gives f(T,XT )− f(0, X0) = ∫ T 0 AeAtµdt+ ∫ T 0 eAt √ 2κ−1DdWt eATXT − x0 = ( eAT − I ) µ+ ∫ T 0 eAt √ 2κ−1DdWt which rearranged gives the expression for XT . From this expression it is clear that XT is a Gaussian process. The mean of the process is E [XT ] = e −ATx0 + ( I − e−AT ) µ, (16) and the covariance and cross-covariance of the process are Var(XT ) = κ −1 ∫ T 0 eA(t−T )2DeA ᵀ(t−T )dt, (17) Cov(XT , XS) = κ −1 ∫ min(T,S) 0 eA(t−T )2DeA ᵀ(t−S)dt. (18) These last two expressions are derived by Itô Isometry4. D.1 THE LYAPUNOV EQUATION We can explicitly solve the integral expressions for the covariance and cross-covariance exactly by solving for the unique matrix B ∈ Sm++ that solves the Lyapunov equation, AB +BAᵀ = 2D. 
(19) If B solves the Lyapunov equation, notice d dt ( eA(t−T )BeA ᵀ(t−S) ) = eA(t−T )ABeA ᵀ(t−S) + eA(t−T )BAᵀeA ᵀ(t−S) = eA(t−T )2DeA ᵀ(t−S) Using this derivative, the integral expressions for the covariance and cross-covariance simplify as, Var(XT ) = κ −1 ( B − e−ATBe−AᵀT ) , (20) Cov(XT , XS) = κ −1 ( B − e−ATBe−AᵀT ) eA ᵀ(T−S), (21) where we implicitly assume T ≤ S. 3Itô’s Lemma states that for any Itô drift-diffusion process dXt = µtdt + σtdWt and twice differentiable scalar function f(t, x), then df(t,Xt) = ( ft + µtfx + σ2t 2 fxx ) dt+ σtfxdWt. 4Itô Isometry states for any standard Itô process Xt, then E [(∫ t 0 XtdWt )2] = E [∫ t 0 X2t dt ] . D.2 DECOMPOSING THE DRIFT MATRIX While the Lyapunov equation simplifies the expressions for the covariance and cross-covariance, it does not explain how to actually solve for the unknown matrix B. Following a method proposed by Kwon et al. [48], we will show how to solve for B explicitly in terms of the drift A and diffusion D. The drift matrix A can be uniquely decomposed as, A = (D +Q)U (22) whereD is our symmetric diffusion matrix,Q is a skew-symmetric matrix (i.e. Q = −Qᵀ), and U is a positive definite matrix. Using this decomposition, then B = U−1, solves the Lyapunov equation. Proof. Plug B = U−1 into the left-hand side of equation (19), AU−1 + U−1Aᵀ = (D +Q)UU−1 + U−1U(D −Q) = (D +Q) + (D −Q) = 2D Here we used the symmetry of A,D,U and the skew-symmetry of Q. All that is left is to do is solve for the unknown matricesQ and U . First notice the following identity, AD −DA = QA+AQ (23) Proof. Multiplying A = (D +Q)U on the right by (D −Q) gives, A(D −Q) = (D +Q)U(D −Q) = (D +Q)Aᵀ, which rearranged and using A = Aᵀ gives the desired equation. Let V ΛV ᵀ be the eigendecomposition of A and define the matrices D̃ = V ᵀDV and Q̃ = V ᵀQV . These matrices observe the following relationship, Q̃ij = λi − λj ρi + λj D̃ij . (24) Proof. Replace A in the previous equality with its eigendecompsoition, V ΛV ᵀD −DV ΛV ᵀ = QV ΛV ᵀ + V ΛV ᵀQ. Multiply this equation on the right by V and on the left by V ᵀ, ΛD̃ − D̃Λ = Q̃Λ + ΛQ̃. Looking at this equality element-wise and using the fact that Λ is diagonal gives the scalar equality for any i, j, (λi − λj)D̃ij = (λi + λj)Q̃ij , which rearranged gives the desired expression. Thus, Q and U are given by, Q = V Q̃V ᵀ, U = (D +Q)−1A. (25) This decomposition always holds uniquely when A,D 0, as λi−λjλi+λj exists and (D +Q) is invertible. See [48] for a discussion on the singularities of this decomposition. D.3 STATIONARY SOLUTION Using the Lyapunov equation and the drift decomposition, then XT ∼ pT , where pT = N ( e−ATx0 + ( I − e−AT ) µ, κ−1 ( U−1 − e−ATU−1e−AᵀT )) . (26) In the limit as T →∞, then e−AT → 0 and pT → pss where pss = N ( µ, κ−1U−1 ) . (27) Similarly, the cross-covariance converges to the stationary cross-covariance, Covss(XT , XS) = κ −1BeA ᵀ(T−S). (28) E A VARIATIONAL FORMULATION OF THE OU PROCESS WITH ANISOTROPIC NOISE In this section we will describe an alternative, variational, route towards solving the dynamics of the OU process studied in appendix D. Let Φ : Rn → R be an arbitrary, non-negative potential and consider the stochastic differential equation describing the Langevin dynamics of a particle in this potential field, dXt = −∇Φ(Xt)dt+ √ 2κ−1D(Xt)dWt, X0 = x0, (29) where D(Xt) is an arbitrary, spatially-dependent, diffusion matrix, κ is a temperature constant, and x0 ∈ Rm is the particle’s initial position. 
The Fokker-Planck equation describes the time evolution for the probability distribution p of the particle’s position such that p(x, t) = P(Xt = x). The FP equation is the partial differential equation5, ∂tp = ∇ · ( ∇Φ(Xt)p+ κ−1∇ · (D(Xt)p) ) , p(x, 0) = δ(x0), (30) where ∇· denotes the divergence and δ(x0) is a dirac delta distribution centered at the initialization x0. To assist in the exploration of the FP equation we define the vector field, J(x, t) = −∇Φ(Xt)p−∇ · (D(Xt)p) , (31) which is commonly referred to as the probability current. Notice, that this gives an alternative expression for the FP equation, ∂tp = −∇·J , demonstrating that J(x, t) defines the flow of probability mass through space and time. This interpretation is especially useful for solving for the stationary solution pss, which is the unique distribution that satisfies, ∂tpss = −∇ · Jss = 0, (32) where Jss is the probability current for pss. The stationary condition can be obtained in two distinct ways: 1. Detailed balance. This is when Jss(x) = 0 for all x ∈ Ω. This is analogous to reversibility for discrete Markov chains, which implies that the probability mass flowing from a state i to any state j is the same as the probability mass flowing from state j to state i. 2. Broken detailed balance. This is when ∇ · Jss(x) = 0 but Jss(x) 6= 0 for all x ∈ Ω. This is analogous to irreversibility for discrete Markov chains, which only implies that the total probability mass flowing out of state i equals to the total probability mass flowing into state i. The distinction between these two cases is critical for understanding the limiting dynamics of the process. E.1 THE VARIATIONAL FORMULATION OF THE FOKKER-PLANCK EQUATION WITH ISOTROPIC DIFFUSION We will now consider the restricted setting of standard, isotropic diffusion (D = I). It is easy enough to check that in this setting the stationary solution is pss(x) = e−κΦ(x) Z , Z = ∫ Ω e−κΦ(x)dx, (33) where pss is called a Gibbs distribution and Z is the partition function. Under this distribution, the stationary probability current is zero (Jss(x) = 0) and thus the process is in detailed balance. Interestingly, the Gibbs distribution pss has another interpretation as the unique minimizer of the the Gibbs free energy functional, F (p) = E [Φ]− κ−1H(p), (34) where E [Φ] is the expectation of the potential Φ under the distribution p and H(p) = − ∫ Ω p(x)log(p(x))dx is the Shannon entropy of p. 5This PDE is also known as the Forward Kolmogorov equation. Proof. To prove that indeed pss is the unique minimizer of the Gibbs free energy functional, consider the following equivalent expression F (p) = ∫ Ω p(x)Φ(x)dx+ κ−1 ∫ Ω p(x)log(p(x))dx = κ−1 ∫ Ω p(x) (log(p(x))− log(pss(x))) dx− κ−1 ∫ Ω log(Z) = κ−1DKL(p ‖ pss)− κ−1log(Z) From this expressions, it is clear that the Kullback–Leibler divergence is uniquely minimized when p = pss. In other words, with isotropic diffusion the stationary solution pss can be thought of as the limiting distribution given by the Fokker-Planck equation or the unique minimizer of an energetic-entropic functional. Seminal work by Jordan et al. [53] deepened this connection between the Fokker-Planck equation and the Gibbs free energy functional. In particular, their work demonstrates that the solution p(x, t) to the Fokker-Planck equation is the Wasserstein gradient flow trajectory on the Gibbs free energy functional. Steepest descent is always defined with respect to a distance metric. 
For example, the update equation, xk+1 = xk − η∇Φ(xk), for classic gradient descent on a potential Φ(x), can be formulated as the solution to the minimization problem xk+1 = argminxηΦ(x) + 1 2d(x, xk) 2 where d(x, xk) = ‖x− xk‖ is the Euclidean distance metric. Gradient flow is the continuous-time limit of gradient descent where we take η → 0+. Similarly, Wasserstein gradient flow is the continuous-time limit of steepest descent optimization defined by the Wasserstein metric. The Wasserstein metric is a distance metric between probability measures defined as, W 22 (µ1, µ2) = inf p∈Π(µ1,µ2) ∫ Rn×Rn |x− y|2p(dx, dy), (35) where µ1 and µ2 are two probability measures on Rn with finite second moments and Π(µ1, µ2) defines the set of joint probability measures with marginals µ1 and µ2. Thus, given an initial distribution and learning rate η, we can use the Wasserstein metric to derive a sequence of distributions minimizing some functional in the sense of steepest descent. In the continuous-time limit as η → 0+ this sequence defines a continuous trajectory of probability distributions minimizing the functional. Jordan et al. [54] proved, through the following theorem, that this process applied to the Gibbs free energy functional converges to the solution to the Fokker-Planck equation with the same initialization: Theorem 1 (JKO). Given an initial condition p0 with finite second moment and an η > 0, define the iterative scheme pη with iterates defined by pk = argminpη ( E [Φ]− κ−1H(p) ) +W 22 (p, p k−1). As η → 0+, then pη → p weakly in L1 where p is the solution to the Fokker-Planck equation with the same initial condition. See [54] for further explanation and [53] for a complete derivation. E.2 EXTENDING THE VARIATIONAL FORMULATION TO THE SETTING OF ANISOTROPIC DIFFUSION While the JKO theorem provides a very powerful lens through which to view solutions to the FokkerPlanck equation, and thus distributions for particles governed by Langevin dynamics, it only applies in the very restricted setting of isotropic diffusion. In this section we will review work by Chaudhari and Soatto [33] extending the variational interpretation to the setting of anisotropic diffusion. Consider when D(Xt) is an anisotropic, spatially-dependent diffusion matrix. In this setting, the original Gibbs distribution given in equation (33) does not necessarily satisfy the stationarity condition equation (32). In fact, it is not immediately clear what the stationary solution is or if the dynamics even have one. Thus, Chaudhari and Soatto [33] make the following assumption: Stationary Assumption. Assume there exists a unique distribution pss that is the stationary solution to the Fokker-Planck equation irregardless of initial conditions. Under this assumption we can implicitly define the potential Ψ(x) = −κ−1log(pss(x)). Using this modified potential we can express the stationary solution as a Gibbs distribution, pss(x) ∝ e−κΨ(x). (36) Under this implicit definition we can define the stationary probability current as Jss(x) = j(x)pss(x) where j(x) = −∇Φ(x)− κ−1∇ ·D(x) +D(x)∇Ψ(x). (37) The vector field j(x) reflects the discrepancy between the original potential Φ and the modified potential Ψ according to the diffusion D(x). Notice that in the isotropic case, when D(x) = I , then Φ = Ψ and j(x) = 0. Chaudhari and Soatto [33] introduce another property of j(x) through assumption, Conservative Assumption. Assume that the force j(x) is conservative (i.e. ∇ · j(x) = 0). 
Using this assumption, Chaudhari and Soatto [33] extends the variational formulation provided by the JKO theorem to the anisotropic setting, Theorem 2 (CS). Given an initial condition p0 with finite second moment, then the energeticentropic functional, F (p) = Ep [Ψ(x)]− κ−1H(p) monotonically decreases throughout the trajectory given by the solution to the Fokker-Planck equation with the given initial condition. In other words, the Fokker-Plank equation (30) with anisotropic diffusion can be interpreted as minimizing the expectation of a modified loss Ψ, while being implicitly regularized towards distributions that maximize entropy. The derivation requires we assume a stationary solution pss exists and that the force j(x) implicitly defined by pss is conservative. However, rather than implicitly define Ψ(x) and j(x) through assumption, if we can explicitly construct a modified loss Ψ(x) such that the resulting j(x) satisfies certain conditions, then the stationary solution exists and the variational formulation will apply as well. We formalize this statement with the following theorem, Theorem 3 (Explicit Construction). If there exists a potential Ψ(x) such that either j(x) = 0 or ∇ · j(x) = 0 and ∇Ψ(x) ⊥ j(x), then pss is the Gibbs distribution ∝ e−κΨ(x) and the variational formulation given in Theorem 2 applies. E.3 APPLYING THE VARIATIONAL FORMULATION TO THE OU PROCESS Through explicit construction we now seek to find analytic expressions for the modified loss Ψ(x) and force j(x) hypothesised by Chaudhari and Soatto [33] in the fundamental setting of an OU process with anisotropic diffusion, as described in section D. We assume the diffusion matrix is anisotropic, but spatially independent, ∇ · D(x) = 0. For the OU process the original potential generating the drift is Φ(x) = (x− µ)ᵀA2 (x− µ). (38) Recall, that in order to extend the variational formulation we must construct some potential Ψ(x) such that∇ · j(x) = 0 and∇Ψ ⊥ j(x). It is possible to construct Ψ(x) using the unique decomposition of the drift matrix A = (D +Q)U discussed in appendix D. Define the modified potential, Ψ(x) = (x− µ)ᵀ U2 (x− µ). (39) Using this potential, the force j(x) is j(x) = −A(x− µ) +DU(x− µ) = −QU(x− µ). (40) Notice that j(x) is conservative, ∇ · j(x) = ∇ · −QU (x− µ) = 0 because Q is skew-symmetric. Additionally, j(x) is orthogonal, j(x)ᵀ∇Ψ(x) = (x− µ)ᵀ UᵀQU (x− µ) = 0, again because Q is skew-symmetric. Thus, we have determined a modified potential Ψ(x) that results in a conservative orthogonal force j(x) satisfying the conditions for Theorem 3. Indeed the stationary Gibbs distribution given by Theorem 3 agrees with equation (27) derived via the first and second moments in appendix D, e−κΨ(x) ∝ N ( µ, κ−1U−1 ) In addition to the variational formulation, this interpretation further details explicitly the stationary probability current, Jss(x) = j(x)pss, and whether or not the the stationary solution is in broken detailed balance. F EXPLICIT EXPRESSIONS FOR THE OU PROCESS GENERATED BY SGD We will now consider the specific OU process generated by SGD with linear regression. Here we repeat the setup as explained in section 5. Let X ∈ RN×d, Y ∈ RN be the input data, output labels respectively and θ ∈ Rd be our vector of regression coefficients. The least squares loss is the convex quadratic loss L(θ) = 12N ‖Y −Xθ‖2 with gradient g(θ) = Hθ − b, where H = XᵀXN and b = X ᵀY N . 
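The algebraic facts used in this construction, namely that ∇ · j = −tr(QU) vanishes, that j is orthogonal to ∇Ψ, and that B = U^{-1} solves the Lyapunov relation AB + BAᵀ = 2D from appendix D, can be confirmed numerically for a synthetic decomposition A = (D + Q)U. A minimal sketch in which the matrices are randomly generated rather than derived from any model:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4

# Synthetic decomposition: D, U symmetric positive definite, Q skew-symmetric
M = rng.normal(size=(n, n)); D = M @ M.T + n * np.eye(n)
K = rng.normal(size=(n, n)); Q = K - K.T
L = rng.normal(size=(n, n)); U = L @ L.T + n * np.eye(n)
A = (D + Q) @ U
mu = rng.normal(size=n)

# 1) B = U^{-1} solves the Lyapunov relation A B + B A^T = 2 D
B = np.linalg.inv(U)
print("Lyapunov residual :", np.max(np.abs(A @ B + B @ A.T - 2 * D)))

# 2) j(x) = -Q U (x - mu) is divergence free: div j = -tr(Q U) = 0
print("tr(QU)            :", np.trace(Q @ U))

# 3) j(x) is orthogonal to grad Psi(x) = U (x - mu) at any point x
x = rng.normal(size=n)
j = -Q @ U @ (x - mu)
print("j . grad Psi      :", j @ (U @ (x - mu)))
```

Returning to the least squares loss and its gradient defined above: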
Plugging this expression for the gradient into the underdamped Langevin equation (3), and rearranging terms, results in the multivariate Ornstein-Uhlenbeck (OU) process, d [ θt vt ] = A ([ µ 0 ] − [ θt vt ]) dt+ √ 2κ−1DdWt, (41) where A and D are the drift and diffusion matrices respectively, A = [ 0 −I 2 η(1+β) (H + λI) 2(1−β) η(1+β)I ] , D = [ 0 0 0 2(1−β)η(1+β)Σ(θ) ] , (42) κ = S(1− β2) is a temperature constant, and µ = (H + λI)−1b is the ridge regression solution. F.1 SOLVING FOR THE MODIFIED LOSS AND CONSERVATIVE FORCE In order to apply the expressions derived for a general OU process in appendix D and E, we must first decompose the drift as A = (D + Q)U . Under the simplification Σ(θ) = σ2H discussed in appendix B, then the matrices Q and U , as defined below, achieve this, Q = [ 0 −σ2H σ2H 0 ] , U = [ 2 η(1+β)σ2H −1 (H + λI) 0 0 1σ2H −1 ] . (43) Using these matrices we can now derive explicit expressions for the modified loss Ψ(θ, v) and conservative force j(θ, v). First notice that the least squares loss with L2 regularization is proportional to the convex quadratic, Φ(θ) = (θ − µ)ᵀ(H + λI)(θ − µ). (44) The modified loss Ψ is composed of two terms, one that only depends on the position, Ψθ(θ) = (θ − µ)ᵀ ( H−1(H + λI) η(1 + β)σ2 ) (θ − µ) , (45) and another that only depends on the velocity, Ψv(v) = v ᵀ ( H−1 σ2 ) v. (46) The conservative force j(θ, v) is j(θ, v) = [ v − 2η(1+β) (H + λI) (θ − µ) ] , (47) and thus the stationary probability current is Jss(θ, v) = j(θ, v)pss. F.2 DECOMPOSING THE TRAJECTORY INTO THE EIGENBASIS OF THE HESSIAN As shown in appendix D, the temporal distribution for the OU process at some time T ≥ 0 is, pT ([ θ v ]) = N ( e−AT [ θ0 v0 ] + ( I − e−AT ) [µ 0 ] , κ−1 ( U−1 − e−ATU−1e−AᵀT )) . Here we will now use the eigenbasis {q1, . . . , qm} of the Hessian with eigenvalues {ρ1, . . . , ρm} to derive explicit expressions for the mean and covariance of the process through time. Deterministic component. We can rearrange the expectation as E [[ θ v ]] = [ µ 0 ] + e−AT [ θ0 − µ v0 ] . Notice that the second, time-dependent term is actually the solution to the system of ODEs ˙[θ v ] = −A [ θ v ] with initial condition [θ0 − µ v0]ᵀ. This system of ODEs can be block diagonalized by factorizing A = OSOᵀ where O is orthogonal and S is block diagonal defined as O = q1 0 . . . qm 0 . . . 0 q1 . . . 0 qm S = 0 −1 2 η(1+β) (ρ1 + λ) 2(1−β) η(1+β) . . . . . . . . . 0 −1 2 η(1+β) (ρm + λ) 2(1−β) η(1+β) In otherwords in the plane spanned by [qi 0] ᵀ and [0 qi] ᵀ the system of ODEs decouples into the 2D system ˙[ai bi ] = [ 0 1 − 2η(1+β) (ρi + λ) − 2(1−β) η(1+β) ] [ ai bi ] This system has a simple physical interpretation as a damped harmonic oscillator. If we let bi = ȧi, then we can unravel this system into the second order ODE äi + 2 1− β η(1 + β) ȧi + 2 η(1 + β) (ρi + λ)ai = 0 which is in standard form (i.e. ẍ + 2γẋ + ω2x = 0) for γ = 1−βη(1+β) and ωi = √ 2 η(1+β) (ρi + λ). 
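As a concrete check of equations (42) and (43), the following sketch constructs the phase-space matrices A, D, Q, and U for a small random Hessian (all hyperparameter values are hypothetical) and verifies that A = (D + Q)U and that U^{-1} solves the Lyapunov equation:

```python
import numpy as np

rng = np.random.default_rng(2)
d, N = 5, 200
eta, beta, lam, sigma2 = 0.1, 0.9, 1e-4, 0.5

# Hessian of the least squares loss, H = X^T X / N (full rank for N > d)
X = rng.normal(size=(N, d))
H = X.T @ X / N
I, Z = np.eye(d), np.zeros((d, d))
c = 2 / (eta * (1 + beta))

# Phase-space drift and diffusion matrices from equation (42), with Sigma = sigma^2 H
A = np.block([[Z, -I], [c * (H + lam * I), c * (1 - beta) * I]])
D = np.block([[Z, Z], [Z, c * (1 - beta) * sigma2 * H]])

# Candidate decomposition A = (D + Q) U from equation (43)
Hinv = np.linalg.inv(H)
Q = np.block([[Z, -sigma2 * H], [sigma2 * H, Z]])
U = np.block([[(c / sigma2) * Hinv @ (H + lam * I), Z], [Z, Hinv / sigma2]])

print("||A - (D+Q)U||     =", np.max(np.abs(A - (D + Q) @ U)))
B = np.linalg.inv(U)   # stationary covariance up to the 1/kappa factor
print("Lyapunov residual  =", np.max(np.abs(A @ B + B @ A.T - 2 * D)))
```

Returning to the damped harmonic oscillator in each eigenplane of the Hessian: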
Let ai(0) = 〈θ0 − µ, qi〉 and bi(0) = 〈v0, qi〉, then the solution in terms of γ and ωi is ai(t) = e−γt ( ai(0) cosh (√ γ2 − ω2i t ) + γai(0)+bi(0)√ γ2−ω2i sinh (√ γ2 − ω2i t )) γ > ωi e−γt(ai(0) + (γai(0) + bi(0))t) γ = ωi e−γt ( ai(0) cos (√ ω2i − γ2t ) + γai(0)+bi(0)√ ω2i−γ2 sin (√ ω2i − γ2t )) γ < ωi Differentiating these equations gives us solutions for bi(t) bi(t) = e−γt ( bi(0) cosh (√ γ2 − ω2i t ) − ω 2 i ai(0)+γbi(0)√ γ2−ω2i sinh (√ γ2 − ω2i t )) γ > ωi e−γt ( bi(0)− ( ω2i ai(0) + γbi(0) ) t ) γ = ωi e−γt ( bi(0) cos (√ ω2i − γ2t ) − ω 2 i ai(0)+γbi(0)√ ω2i−γ2 sin (√ ω2i − γ2t )) γ < ωi Combining all these results, we can now analytically decompose the expectation as the sum, E [[ θ v ]] = [ µ 0 ] + m∑ i=1 ( ai(t) [ qi 0 ] + bi(t) [ 0 qi ]) . Intuitively, this equation describes a damped rotation (spiral) around the OLS solution in the planes defined by the the eigenvectors of the Hessian at a rate proportional to the respective eigenvalue. Stochastic component. Using the previous block diagonal decomposition A = OSOᵀ we can simplify the variance as Var ([ θ v ]) = κ−1 ( U−1 − e−ATU−1e−AᵀT ) = κ−1 ( U−1 − e−OSOᵀTU−1e−OSᵀOᵀT ) = κ−1O ( OᵀU−1O − e−ST (OᵀU−1O)e−ST ᵀ ) Oᵀ Interestingly, the matrix OᵀU−1O is also block diagonal, OᵀU−1O = Oᵀ [ η(1+β)σ2 2 (H + λI) −1 H 0 0 σ2H ] O = η(1+β)σ2 2 ρ1 ρ1+λ 0 0 σ2ρ1 . . . . . . . . . η(1+β)σ2 2 ρm ρm+λ 0 0 σ2ρm Thus, similar to the mean, we can simply consider the variance in each of the planes spanned by [qi 0] ᵀ and [0 qi] ᵀ. If we define the block matrices, Di = [ ησ2 2S(1−β) ρi ρi+λ 0 0 σ 2 S(1−β2)ρi ] Si = [ 0 1 − 2η(1+β) (ρi + λ) − 2(1−β) η(1+β) ] then the projected variance matrix in this plane simplifies as Var ([ qᵀi θ qᵀi v ]) = Di − e−SiTDie−SiT ᵀ Using the solution to a damped harmonic osccilator discussed previously, we can express the matrix exponential e−SiT explicitly in terms of γ = 1−βη(1+β) and ωi = √ 2 η(1+β) (ρi + λ). If we let αi =√ |γ2 − ω2i |, then the matrix exponential is e−Sit = e−γt [ cosh (αit) + γ αi sinh (αit) 1 αi sinh (αit) −ω 2 i αi sinh (αit) cosh (αit)− γαi sinh (αit) ] γ > ωi e−γt [ 1 + γt t −ω2i t 1− γt ] γ = ωi e−γt [ cos (αit) + γ αi sin (αit) 1 αi sin (αit) −ω 2 i αi sin (αit) cos (αit)− γαi sin (αit) ] γ < ωi G ANALYZING PROPERTIES OF THE STATIONARY SOLUTION Assuming the stationary solution is given by equation (??) we can solve for the expected value of the norm of the local displacement and gain some intuition for the expected value of the norm of global displacement. G.1 INSTANTANEOUS SPEED Ess [ ‖δk‖2 ] = Ess [ ‖θk+1 − θk‖2 ] = η2Ess [ ‖vk+1‖2 ] = η2tr ( Ess [ vk+1v ᵀ k+1 ]) = η2tr (Varss (vk+1) + Ess [vk+1] Ess [vk+1] ᵀ ) = η2tr ( κ−1U−1 ) = η2 S(1− β2) tr ( σ2H ) Note that this follows directly from the definition of δk in equation (1) and the mean and variance of the stationary solution in equation ( ??), as well as the follow-up derivation in appendix F. G.2 ANOMALOUS DIFFUSION Notice, that the global movement ∆t = θt−θ0 can be broken up into the sum of the local movements ∆t = ∑t i=1 δi, where δi = θi − θi−1. Applying this decomposition, Ess [ ‖∆t‖2 ] = Ess ∣∣∣∣∣ ∣∣∣∣∣ t∑ i=1 δi ∣∣∣∣∣ ∣∣∣∣∣ 2 = t∑ i=1 Ess [ ‖δi‖2 ] + t∑ i 6=j Ess [〈δi, δj〉] As we solved for previously, Ess [ ‖δi‖2 ] = η2Ess [ ‖vi‖2 ] = η2tr (Varss(vi)) = η2 S(1− β2) tr ( σ2H ) . By a similar simplification, we can express the second term in terms of the stationary crosscovariance, Ess [〈δi, δj〉] = η2Ess [〈vi, vj〉] = η2tr (Covss(vi, vj)) . 
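The closed-form oscillator solutions above can be validated against direct numerical integration of the 2D system in a single eigenplane. A minimal sketch, with hypothetical hyperparameters and eigenvalue ρ, that handles the underdamped, overdamped, and critically damped branches through the sign of γ² − ω²:

```python
import numpy as np
from scipy.integrate import solve_ivp

def closed_form_a(t, a0, b0, gamma, omega):
    """Closed-form position a(t) of the damped oscillator a'' + 2*gamma*a' + omega^2*a = 0."""
    disc = gamma**2 - omega**2
    if abs(disc) < 1e-12:                        # critically damped
        return np.exp(-gamma * t) * (a0 + (gamma * a0 + b0) * t)
    alpha = np.sqrt(abs(disc))
    if disc > 0:                                 # overdamped
        c, s = np.cosh(alpha * t), np.sinh(alpha * t)
    else:                                        # underdamped
        c, s = np.cos(alpha * t), np.sin(alpha * t)
    return np.exp(-gamma * t) * (a0 * c + (gamma * a0 + b0) / alpha * s)

# Hypothetical hyperparameters and Hessian eigenvalue, defining gamma and omega as in the text
eta, beta, lam, rho = 0.1, 0.9, 1e-4, 2.0
gamma = (1 - beta) / (eta * (1 + beta))
omega = np.sqrt(2 / (eta * (1 + beta)) * (rho + lam))
a0, b0 = 1.0, 0.0

# Integrate d/dt [a, b] = [[0, 1], [-omega^2, -2*gamma]] [a, b] numerically and compare
sol = solve_ivp(lambda t, y: [y[1], -omega**2 * y[0] - 2 * gamma * y[1]],
                (0, 2.0), [a0, b0], t_eval=np.linspace(0, 2.0, 200), rtol=1e-9, atol=1e-12)
err = np.max(np.abs(sol.y[0] - closed_form_a(sol.t, a0, b0, gamma, omega)))
print("max |numerical - closed form| =", err)
```

Returning to the expectation of the squared global displacement: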
Thus, to simplify this expression we just need to consider the velocity-velocity covariance Covss(vi, vj). At stationarity, the cross-covariance for the system in phase space, zi = [θi vi] is Covss(zi, zj) = κ −1U−1e−A ᵀ|i−j| where κ = S(1− β2), and U = [ 2 η(1+β)σ2H −1 (H + λI) 0 0 1σ2H −1 ] A = [ 0 −I 2 η(1+β) (H + λI) 2(1−β) η(1+β)I ] As discussed when solving for the mean of the OU trajectory, the drift matrix A can be block diagonalized as A = OSOᵀ where O is orthogonal and S is block diagonal defined as O = q1 0 . . . qm 0 . . . 0 q1 . . . 0 qm , S = 0 −1 2 η(1+β) (ρ1 + λ) 2(1−β) η(1+β) . . . . . . . . . 0 −1 2 η(1+β) (ρm + λ) 2(1−β) η(1+β) . Notice also that O diagonalizes U−1 such that, Λ = OᵀU−1O = η(1+β)σ2 2 ρ1 ρ1+λ 0 0 σ2ρ1 . . . . . . . . . η(1+β)σ2 2 ρm ρm+λ 0 0 σ2ρm . Applying these decompositions, properties of matrix exponentials, and the cyclic invariance of the trace, allows us to express the trace of the cross-covariance as tr (Covss(zi, zj)) = κ −1tr ( U−1e−A ᵀ|i−j| ) = κ−1tr ( U−1Oe−S ᵀ|i−j|Oᵀ ) = κ−1tr ( Λe−S ᵀ|i−j| ) = κ−1 n∑ k=1 tr ( Λke −Sᵀk |i−j| ) where Λk and Sk are the blocks associated with each eigenvector of H . As solved for previously in the variance of the OU process, we can express the matrix exponential e−Sk|i−j| explicitly in terms of γ = 1−βη(
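The blockwise reduction set up above can be verified numerically: the trace of the stationary cross-covariance computed from the full 2d-dimensional matrices, κ^{-1} tr(U^{-1} e^{-Aᵀ|i−j|}), should equal the sum of the 2 × 2 blockwise traces κ^{-1} Σ_k tr(Λ_k e^{-S_kᵀ|i−j|}). A minimal sketch with hypothetical values:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(3)
d, N = 4, 100
eta, beta, lam, sigma2, S_batch = 0.1, 0.9, 1e-4, 0.5, 128
kappa = S_batch * (1 - beta**2)

X = rng.normal(size=(N, d))
H = X.T @ X / N
rho, q = np.linalg.eigh(H)                      # eigenvalues and eigenvectors of the Hessian
I, Z = np.eye(d), np.zeros((d, d))
c = 2 / (eta * (1 + beta))

A = np.block([[Z, -I], [c * (H + lam * I), c * (1 - beta) * I]])
Hinv = np.linalg.inv(H)
Uinv = np.linalg.inv(np.block([[(c / sigma2) * Hinv @ (H + lam * I), Z], [Z, Hinv / sigma2]]))

tau = 7                                          # temporal lag |i - j|
full = np.trace(Uinv @ expm(-A.T * tau)) / kappa

# Same trace, accumulated block by block in the eigenbasis of H
blockwise = 0.0
for k in range(d):
    Lk = np.diag([eta * (1 + beta) * sigma2 / 2 * rho[k] / (rho[k] + lam), sigma2 * rho[k]])
    Sk = np.array([[0.0, -1.0], [c * (rho[k] + lam), c * (1 - beta)]])
    blockwise += np.trace(Lk @ expm(-Sk.T * tau)) / kappa

print(full, blockwise)                           # should agree to numerical precision
```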
1. What is the focus of the paper, and what are the authors' main contributions to the field of neural network dynamics?
2. How do the authors study the limiting dynamics of neural networks, and what kind of oscillatory dynamics do they find in the (position, velocity)-phase space?
3. What is the significance of the Ornstein-Uhlenbeck (OU) process in the paper, and how does it relate to the dynamics of ResNet18?
4. What are some potential issues with the setup of the study, and how might they limit the impact of the results?
5. Are there any clarity or novelty concerns regarding certain aspects of the paper, such as the observation of continued weight movement or the expectation about training trajectories reflecting underlying anisotropy?
6. How do the authors attempt to connect their theoretical results with empirical results on deep networks, and what kind of predictions do they make about the diffusive behavior of the limiting dynamics?
7. Are there any presentation concerns regarding the separation of experimental from theoretical results, particularly when discussing the exponent of anomalous diffusion?
Summary Of The Paper Review
Summary Of The Paper
The starting point of the present paper is the observation that the parameters of a neural network continue to change when training the network with stochastic gradient descent (SGD) even after the performance of the network has stabilised at a final value. The authors study these "limiting dynamics" in linear regression, which they model as an underdamped Langevin equation resulting in an Ornstein-Uhlenbeck (OU) process, and find oscillatory dynamics in the (position, velocity)-phase space. The authors discuss how to connect their findings in this simple model with the dynamics of ResNet18 (see below). Finally, they attempt to predict features of the diffusive behaviour of the ResNet18 from their model (Sec. 8).
Review
I welcome the general direction of the work: trying to understand the dynamics of neural networks is a complex undertaking that a large community of researchers is pursuing, so the study of simplified models is a promising avenue.
Setup of the study
In studying the limiting dynamics, however, the authors limit themselves to a setup where I don't see any immediate connections to learning or representations of neural networks. The dynamics the authors describe happen after resuming training of a pre-trained neural network, thus I feel that their setup restricts the potential impact of the results of this study.
Clarity / novelty of the results
I found the article hard to read at times because the authors repeatedly qualify their observations as "surprising", "contrary to common intuition", "nonintuitive", etc. I think these qualifiers can be mistaken for claims of novelty, and I would hence use them more sparingly. For example, the fact that neural networks continue to move through their weight space has been well-established for quite some time now, cf. for example [Jastrzebski et al. '17, Chaudhari & Soatto '18, Baity-Jesi et al. '18, and many more]. Hence I didn't find the observation in Figure 1 "surprising" (p. 2, after the equation), which underlines the subjectiveness of these claims - or else I may be reading the figure incorrectly? Another example: I would consider it well-established that OU processes whose diffusion matrix is not isotropic do not follow the naive Gibbs distribution, but instead equilibrate in a modified potential (see for example Section 5.3 "Potential conditions" of Gardiner's "Handbook of stochastic methods", etc.). Furthermore, modified losses arising through SGD dynamics have been studied in a number of recent deep learning papers, some of which are cited by the authors. Other claims about the significance of the results should equally be clarified in my opinion, for example: "The expectation that the training trajectory would reflect the underlying anisotropy of the training loss driving the dynamics is also wrong" (p. 9). In my understanding, this study is only concerned with the limiting dynamics of learning, and hence conclusions about the training cannot be drawn immediately?
Separation of experimental from theoretical results
The authors should be lauded for trying to connect their theoretical results with empirical results on deep networks. Again though, I think the presentation of the results should be revised to clarify which predictions are actually derived from theory. Take the exponent of anomalous diffusion (bottom of Fig 6): it cannot be estimated from the global displacement (12), as the authors explain.
Instead, the authors evaluate the dependence of the diffusion constant on learning rate, batch size and momentum parameter directly from a simulation by fitting a power-law to the empirical displacement. I would present this result separately, as it is not a theoretical prediction, and thus presenting it in a section entitled "Predicting the diffusive behaviour of the limiting dynamics" could cause confusion in my opinion.
ICLR
Title Rethinking the limiting dynamics of SGD: modified loss, phase space oscillations, and anomalous diffusion Abstract In this work we explore the limiting dynamics of deep neural networks trained with stochastic gradient descent (SGD). As observed previously, long after performance has converged, networks continue to move through parameter space by a process of anomalous diffusion in which distance travelled grows as a power law in the number of gradient updates with a nontrivial exponent. We reveal an intricate interaction between the hyperparameters of optimization, the structure in the gradient noise, and the Hessian matrix at the end of training that explains this anomalous diffusion. To build this understanding, we first derive a continuoustime model for SGD with finite learning rates and batch sizes as an underdamped Langevin equation. We study this equation in the setting of linear regression, where we can derive exact, analytic expressions for the phase space dynamics of the parameters and their instantaneous velocities from initialization to stationarity. Using the Fokker-Planck equation, we show that the key ingredient driving these dynamics is not the original training loss, but rather the combination of a modified loss, which implicitly regularizes the velocity, and probability currents, which cause oscillations in phase space. We identify qualitative and quantitative predictions of this theory in the dynamics of a ResNet-18 model trained on ImageNet. Through the lens of statistical physics, we uncover a mechanistic origin for the anomalous limiting dynamics of deep neural networks trained with SGD. 1 INTRODUCTION Deep neural networks have demonstrated remarkable generalization across a variety of datasets and tasks. Essential to their success has been a collection of good practices on how to train these models with stochastic gradient descent (SGD). Yet, despite their importance, these practices are mainly based on heuristic arguments and trial and error search. Without a general theory connecting the hyperparameters of optimization, the architecture of the network, and the geometry of the dataset, theory-driven design of deep learning systems is impossible. Existing theoretical works studying this interaction have leveraged the random structure of neural networks at initialization [1, 2, 3] and in their infinite width limits in order to study their dynamics [4, 5, 6, 7, 8]. Here we take a different approach and study the training dynamics of pre-trained networks that are ready to be used for inference. By leveraging the mathematical structures found at the end of training, we uncover an intricate interaction between the hyperparameters of optimization, the structure in the gradient noise, and the Hessian matrix that corroborates previously identified empirical behavior such as anomalous limiting dynamics. Not only is understanding the limiting dynamics of SGD a critical stepping stone to building a complete theory for the learning dynamics of neural networks, but recently there have been a series of works demonstrating that the performance of pre-trained networks can be improved through averaging and ensembling [9, 10, 11]. Combining empirical exploration and theoretical tools from statistical physics, we identify and uncover a mechanistic explanation for the limiting dynamics of neural networks trained with SGD. 2 DIFFUSIVE BEHAVIOR IN THE LIMITING DYNAMICS OF SGD A network that has converged in performance will continue to move through parameter space [12, 13, 14, 15]. 
To demonstrate this behavior, we resume training of pre-trained convolutional networks while tracking the network trajectory through parameter space. Let θ∗ ∈ Rm be the parameter vector for a pre-trained network and θk ∈ Rm be the parameter vector after k steps of resumed training. We track two metrics of the training trajectory, namely the local parameter displacement δk between consecutive steps, and the global displacement ∆k after k steps from the pre-trained initialization: δk = θk − θk−1, ∆k = θk − θ∗. (1) As shown in Fig. 1, neither of these differences converge to zero across a variety of architectures, indicating that despite performance convergence, the networks continue to move through parameter space, both locally and globally. The squared norm of the local displacement ‖δk‖22 remains near a constant value, indicating the network is essentially moving at a constant instantaneous speed. This observation is quite similar to the “equilibrium" phenomenon or “constant angular update" observed in Li et al. [17] and Wan et al. [13] respectively. However, these works only studied the displacement for parameters immediately preceding a normalization layer. The constant instantaneous speed behavior we observe is for all parameters in the model and is even present in models without normalization layers. While the squared norm of the local displacement is essentially constant, the squared norm of the global displacement ‖∆k‖22 is monotonically growing for all networks, implying even once trained, the network continues to diverge from where it has been. Indeed Fig. 1 indicates a power law relationship between global displacement and number of steps, given by ‖∆k‖22 ∝ kc. As we’ll see in section 8, this relationship is indicative of anomalous diffusion where c corresponds to the anomalous diffusion exponent. Standard Brownian motion corresponds to c = 1. Similar observation were made by Baity-Jesi et al. [14] who noticed distinct phases of the training trajectory evident in the dynamics of the global displacement and Chen et al. [15] who found that the exponent of diffusion changes through the course of training. A parallel observation is given by Hoffer et al. [18] for the beginning of training, where they measure the global dis- placement from the initialization of an untrained network and observe a rate ∝ log(k), a form of ultra-slow diffusion. These empirical observations raise the natural questions, where is the network moving to and why? To answer these questions we will build a diffusion based theory of SGD, study these dynamics in the setting of linear regression, and use lessons learned in this fundamental setting to understand the limiting dynamics of neural networks. 3 RELATED WORK There is a long line of literature studying both theoretically and empirically the learning dynamics of deep neural networks trained with SGD. Our analysis and experiments build upon this literature. Continuous models for SGD. Many works consider how to improve the classic gradient flow model for SGD to more realistically reflect momentum [19], discretization due to finite learning rates [20, 21], and stochasticity due to random batches [22, 23]. One line of work has studied the dynamics of networks in their infinite width limits through dynamical mean field theory [24, 25, 26, 27], while a different approach has used stochastic differential equations (SDEs) to model SGD directly, the approach we take in this work. However, recently, the validity of this approach has been questioned. 
The main argument, as nicely explained in Yaida [28], is that most SDE approximations simultaneously assume that ∆t → 0+, while maintaining that the learning rate η = ∆t is finite. The works Simsekli et al. [29] and Li et al. [30] have questioned the correctness of the using the central limit theorem (CLT) to model the gradient noise as Gaussian, arguing respectively that the heavy-tailed structure in the gradient noise and the weak dependence between batches leads the CLT to break down. In our work, we maintain the CLT assumption holds, which we discuss fur- ther in appendix A, but importantly we avoid the pitfalls of many previous SDE approximations by simultaneously modeling the effect of finite learning rates and stochasticity. Limiting dynamics. A series of works have applied SDE models of SGD to study the limiting dynamics of neural networks. In the seminal work by Mandt et al. [31], the limiting dynamics were modeled with a multivariate Ornstein-Uhlenbeck process by combining a first-order SDE model for SGD with assumptions on the geometry of the loss and covariance matrix for the gradient noise. This analysis was extended by Jastrzębski et al. [12] through additional assumptions on the covariance matrix to gain tractable insights and applied by Ali et al. [32] to the simpler setting of linear regression, which has a quadratic loss. A different approach was taken by Chaudhari and Soatto [33], which did not formulate the dynamics as an OU process, nor assume directly a structure on the loss or gradient noise. Rather, this analysis studied the same first-order SDE via the Fokker-Planck equation to propose the existence of a modified loss and probability currents driving the limiting dynamics, but did not provide explicit expressions. Our analysis deepens and combines ideas from all these works, where our key insight is to lift the dynamics into phase space. By studying the dynamics of the parameters and their velocities, and by applying the analysis first in the setting of linear regression where assumptions are provably true, we are able to identify analytic expressions and explicit insights which lead to concrete predictions and testable hypothesis. Stationary dynamics. A different line of work avoids modeling the limiting dynamics of SGD with an SDE and instead chooses to leverage the property of stationarity. These works [28, 34, 35, 36] assume that eventually the probability distribution governing the model parameters reaches stationarity such that the discrete SGD process is simply sampling from this distribution. Yaida [28] used this approach to derive fluctuation-dissipation relations that link measurable quantities of the parameters and hyperparameters of SGD. Liu et al. [35] used this approach to derive properties for the stationary distribution of SGD with a quadratic loss. Similar to our analysis, this work identifies that the stationary distribution for the parameters reflects a modified loss function dependent on the relationship between the covariance matrix of the gradient noise and the Hessian matrix for the original loss. Empirical exploration. Another set of works analyzing the limiting dynamics of SGD has taken a purely empirical approach. Building on the intuition that flat minima generalize better than sharp minima, Keskar et al. [37] demonstrated empirically that the hyperparameters of optimization influence the eigenvalue spectrum of the Hessian matrix at the end of training. 
Many subsequent works have studied the Hessian eigenspectrum during and at the end of training. Jastrzębski et al. [38], Cohen et al. [39] studied the dynamics of the top eigenvalues during training. Sagun et al. [40], Papyan [41], Ghorbani et al. [42] demonstrated the spectrum has a bulk of values near zero plus a small number of larger outliers. Gur-Ari et al. [43] demonstrated that the learning dynamics are constrained to the subspace spanned by the top eigenvectors, but found no special properties of the dynamics within this subspace. In our work we also determine that the top eigensubspace of the Hessian plays a crucial role in the limiting dynamics and by projecting the dynamics into this subspace in phase space, we see that the motion is not random, but consists of incoherent oscillations leading to anomalous diffusion. 4 MODELING SGD AS AN UNDERDAMPED LANGEVIN EQUATION Following the route of previous works [31, 12, 33] studying the limiting dynamics of neural networks, we first seek to model SGD as a continuous stochastic process. We consider a network parameterized by θ ∈ Rm, a training dataset {x1, . . . , xN} of size N , and a training loss L(θ) = 1N ∑N i=1 `(θ, xi) with corresponding gradient g(θ) = ∂L ∂θ . The state of the network at the kth step of training is defined by the position vector θk and velocity vector vk of the same dimension. The gradient descent update with learning rate η, momentum β, and weight decayλ is given by vk+1 = βvk − g(θk)− λθk, θk+1 = θk + ηvk+1, (2) where we initialize the network such that v0 = 0 and θ0 is the parameter initialization. In order to understand the dynamics of the network through position and velocity space, which we will refer to as phase space, we express these discrete recursive equations as the discretization of some unknown ordinary differential equation (ODE), sometimes referred to as a modified equation as in [44, 20]. While this ODE models the gradient descent process even at finite learning rates, it fails to account for the stochasticity introduced by choosing a random batch B of size S drawn uniformly from the set of N training points. This sampling yields the stochastic gradient gB(θ) = 1S ∑ i∈B∇`(θ, xi). To model this effect, we make the following assumption: Assumption 1 (CLT). We assume the batch gradient is a noisy version of the true gradient such that gB(θ)− g(θ) is a Gaussian random variable with mean 0 and covariance 1SΣ(θ). Incorporating this model of stochastic gradients into the previous finite difference equation and applying the stochastic counterparts to Euler discretizations, results in the standard drift-diffusion stochastic differential equation (SDE), referred to as an underdamped Langevin equation, d [ θ v ] = [ v − 2η(1+β) (g(θ) + λθ + (1− β)v) ] dt+ [ 0 0 0 2√ ηS(1+β) √ Σ(θ) ] dWt, (3) where Wt is a standard Wiener process. This is the continuous model we will study in this work: Assumption 2 (SDE). We assume the underdamped Langevin equation (3) accurately models the trajectory of the network driven by SGD through phase space such that θ(ηk) ≈ θk and v(ηk) ≈ vk. See appendix A for further discussion on the nuances of modeling SGD with an SDE. 5 LINEAR REGRESSION WITH SGD IS AN ORNSTEIN-UHLENBECK PROCESS Equipped with a model for SGD, we seek to understand its dynamics in the fundamental setting of linear regression, one of the few cases where we have a complete model for the interaction of the dataset, architecture, and optimizer. 
Let X ∈ RN×d be the input data, Y ∈ RN be the output labels, and θ ∈ Rd be our vector of regression coefficients. The least squares loss is the convex quadratic lossL(θ) = 12N ‖Y −Xθ‖2 with gradient g(θ) = Hθ−b, whereH = X ᵀX N and b = XᵀY N . Plugging this expression for the gradient into the underdamped Langevin equation (3), and rearranging terms, results in the multivariate Ornstein-Uhlenbeck (OU) process, d [ θt vt ] = − [ 0 −I 2 η(1+β) (H + λI) 2(1−β) η(1+β)I ] ︸ ︷︷ ︸ A ([ θt vt ] − [ µ 0 ]) dt+ √ 2κ−1 √√√√√ [ 0 0 0 2(1−β)η(1+β)Σ(θ) ] ︸ ︷︷ ︸ D dWt, (4) where A and D are the drift and diffusion matrices respectively, κ = S(1− β2) is an inverse temperature constant, and µ = (H + λI)−1b is the ridge regression solution. The solution to an OU process is a Gaussian process. By solving for the temporal dynamics of the first and second moments of the process, we can obtain an analytic expression for the trajectory at any time t. In particular, we can decompose the trajectory as the sum of a deterministic and stochastic component defined by the first and second moments respectively. Deterministic component. Using the form of A we can decompose the expectation as a sum of harmonic oscillators in the eigenbasis {q1, . . . , qm} of the Hessian, E [[ θt vt ]] = [ µ 0 ] + m∑ i=1 ( ai(t) [ qi 0 ] + bi(t) [ 0 qi ]) . (5) Here the coefficients ai(t) and bi(t) depend on the optimization hyperparameters η, β, λ, S and the respective eigenvalue of the Hessian ρi as further explained in appendix F. We verify this expression nearly perfectly matches empirics on complex datasets under various hyperparameter settings as shown in Fig. 2. Stochastic component. The cross-covariance of the process between two points in time t ≤ s, is Cov ([ θt vt ] , [ θs vs ]) =κ−1 ( B−e−AtBe−Aᵀt ) eA ᵀ(t−s), (6) where B solves the Lyapunov equation AB +BAᵀ = 2D. In order to gain analytic expressions for B in terms of the optimization hyperparameters, eigendecomposition of the Hessian, and covariance of the gradient noise, we must introduce the following assumption: Assumption 3 (Simultaneously Diagonalizable). We assume the covariance of the gradient noise is spatially independent Σ(θ) = Σ and commutes with the Hessian HΣ = ΣH , therefore sharing a common eigenbasis. 6 UNDERSTANDING STATIONARITY VIA THE FOKKER-PLANCK EQUATION The OU process is unique in that it is one of the few SDEs which we can solve exactly. As shown in section 5, we were able to derive exact expressions for the dynamics of linear regression trained with SGD from initialization to stationarity by simply solving for the first and second moments. While the expression for the first moment provides an understanding of the intricate oscillatory relationship in the deterministic component of the process, the second moment, driving the stochastic component, is much more opaque. An alternative route to solving the OU process that potentially provides more insight is the Fokker-Planck equation. The Fokker-Planck (FP) equation is a PDE describing the time evolution for the probability distribution of a particle governed by Langevin dynamics. For an arbitrary potential Φ and diffusion matrix D, the Fokker-Planck equation (under an Itô integration prescription) is ∂tp = ∇ · ( ∇Φp+∇ · ( κ−1Dp ))︸ ︷︷ ︸ −J , (7) where p represents the time-dependent probability distribution, and J is a vector field commonly referred to as the probability current. 
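As an aside, the claim that the deterministic component (5) nearly matches empirics can be illustrated in miniature: average many runs of the discrete recursion (2) on toy linear regression data and compare against the OU mean µ + e^{-At}([θ0; v0] − [µ; 0]) under the identification t = ηk of Assumption 2. The sketch below uses hypothetical data and hyperparameters; the two trajectories should roughly track each other, with small discrepancies attributable to discretization error and sampling noise.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(4)
d, N = 3, 500
eta, beta, lam, S = 0.05, 0.9, 1e-3, 16

# Toy linear regression data
X = rng.normal(size=(N, d))
theta_true = rng.normal(size=d)
Y = X @ theta_true + 0.1 * rng.normal(size=N)
H, b = X.T @ X / N, X.T @ Y / N
mu = np.linalg.solve(H + lam * np.eye(d), b)

# Discrete SGD with momentum and weight decay, averaged over many runs
n_steps, n_runs = 400, 200
theta0 = rng.normal(size=d)
mean_traj = np.zeros((n_steps + 1, d))
for _ in range(n_runs):
    theta, v = theta0.copy(), np.zeros(d)
    traj = [theta.copy()]
    for _ in range(n_steps):
        idx = rng.choice(N, size=S, replace=False)
        g = X[idx].T @ (X[idx] @ theta - Y[idx]) / S
        v = beta * v - g - lam * theta
        theta = theta + eta * v
        traj.append(theta.copy())
    mean_traj += np.array(traj)
mean_traj /= n_runs

# OU prediction for E[theta_k]: first d coordinates of mu + exp(-A * eta * k)(z0 - z*)
I, Z = np.eye(d), np.zeros((d, d))
c = 2 / (eta * (1 + beta))
A = np.block([[Z, -I], [c * (H + lam * I), c * (1 - beta) * I]])
z0 = np.concatenate([theta0 - mu, np.zeros(d)])
pred = np.array([mu + (expm(-A * (eta * k)) @ z0)[:d] for k in range(n_steps + 1)])

print("max deviation between mean SGD trajectory and OU mean:", np.max(np.abs(mean_traj - pred)))
```

Returning to the Fokker-Planck equation: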
The FP equation is especially useful for explicitly solving for the stationary solution, assuming one exists, of the Langevin dynamics. The stationary solution pss by definition obeys ∂tpss = 0 or equivalently ∇ · Jss = 0. From this second definition we see that there are two distinct settings of stationarity: detailed balance when Jss = 0 everywhere, or broken detailed balance when ∇ · Jss = 0 and Jss 6= 0. For a general OU process, the potential is a convex quadratic function Φ(x) = xᵀAx defined by the drift matrix A. When the diffusion matrix is isotropic (D ∝ I) and spatially independent (∇ · D = 0) the resulting stationary solution is a Gibbs distribution pss(x) ∝ e−κΦ(x) determined by the original loss Φ(x) and is in detailed balance. Lesser known properties of the OU process arise when the diffusion matrix is anisotropic or spatially dependent [45, 46]. In this setting the solution is still a Gaussian process, but the stationary solution, if it exists, is no longer defined by the Gibbs distribution of the original loss Φ(x), but actually a modified loss Ψ(x). Furthermore, the stationary solution may be in broken detailed balance leading to a non-zero probability current Jss(x). Depending on the relationship between the drift matrix A and the diffusion matrix D the resulting dynamics of the OU process can have very nontrivial behavior. In the setting of linear regression, anisotropy in the data distribution will lead to anisotropy in the gradient noise and thus an anisotropic diffusion matrix. This implies that for most datasets we should expect that the SGD trajectory is not driven by the original least squares loss, but by a modified loss and converges to a stationary solution with broken detailed balance, as predicted by Chaudhari and Soatto [33]. Using the explicit expressions for the drift A and diffusion D matrices we can compute analytically the modified loss and stationary probability current, Ψ(θ, v) = ([ θ v ] − [ µ 0 ])ᵀ( U 2 )([ θ v ] − [ µ 0 ]) , Jss(θ, v) = −QU ([ θ v ] − [ µ 0 ]) pss, (8) where Q is a skew-symmetric matrix and U is a positive definite matrix defined as, Q = [ 0 −Σ(θ) Σ(θ) 0 ] , U = [ 2 η(1+β)Σ(θ) −1 (H + λI) 0 0 Σ(θ)−1 ] . (9) These new fundamental matrices, Q and U , relate to the original drift A and diffusion D matrices through the unique decomposition A = (D + Q)U , introduced by Ao [47] and Kwon et al. [48]. Using this decomposition we can easily show that B = U−1 solves the Lyapunov equation and indeed the stationary solution pss is the Gibbs distribution defined by the modified loss Ψ(θ, v) in equation (8). Further, the stationary cross-covariance solved in section 5 reflects the oscillatory dynamics introduced by the stationary probability currents Jss(θ, v) in equation (8). Taken together, we gain the intuition that the limiting dynamics of SGD in linear regression are driven by a modified loss subject to oscillatory probability currents. 7 EVIDENCE OF A MODIFIED LOSS AND OSCILLATIONS IN DEEP LEARNING Does the theory derived in the linear regression setting (sections 5, 6) help explain the empirical phenomena observed in the non-linear setting of deep neural networks (section 2)? In order for the theory built in the previous sections to apply to the limiting dynamics of neural networks, we must introduce simplifying assumptions on the loss landscape and gradient noise at the end of training: Assumption 4 (Quadratic Loss). 
We assume that at the end of training the loss for a neural network can be approximated by the quadratic loss L(θ) = (θ − µ)ᵀ ( H 2 ) (θ − µ), where H 0 is the training loss Hessian and µ is some unknown mean vector, corresponding to a local minimum. Assumption 5 (Covariance Structure). We assume the covariance of the gradient noise is proportional to the Hessian of the quadratic loss Σ(θ) = σ2H where σ ∈ R+ is some unknown scalar. Under these simplifications, then the expressions derived in the linear regression setting would apply to the limiting dynamics of deep neural networks and depend only on quantities that we can easily estimate empirically. Of course, these simplifications are quite strong, but without arguing their theoretical validity, we can empirically test their qualitative implications: (1) a modified isotropic loss driving the limiting dynamics through parameter space, (2) implicit regularization of the velocity trajectory, and (3) oscillatory phase space dynamics determined by the Hessian eigen-structure. Modified loss. As discussed in section 6, due to the anisotropy of the diffusion matrix, the loss landscape driving the dynamics at the end of training is not the original training loss L(θ), but a modified loss Ψ(θ, v) in phase space. As shown in equation (8), the modified loss decouples into a term Ψθ that only depends on the parameters θ and a term Ψv that only depends on the velocities v. Under assumption 5, the parameter dependent component is proportional to the convex quadratic, Ψθ ∝ (θ − µ)ᵀ ( H−1(H + λI) η(1 + β) ) (θ − µ) . (10) This quadratic function has the same mean µ as the training loss, but a different curvature. Using this expression, notice that when λ ≈ 0, the modified loss is isotropic in the column space of H , regardless of what the nonzero eigenspectrum of H is. This striking prediction suggests that no matter how anisotropic the original training loss – as reflected by poor conditioning of the Hessian eigenspectrum – the training trajectory of the network will behave isotropically, since it is driven not by the original anisotropic loss, but a modified isotropic loss. We test this prediction by studying the limiting dynamics of a pre-trained ResNet-18 model with batch normalization that we continue to train on ImageNet according to the last setting of its hyperparameters [49]. Let θ∗ represent the initial pre-trained parameters of the network, depicted with the white dot in figures 3 and 4. We estimate1 the top thirty eigenvectors q1, . . . , q30 of the Hessian matrix H∗ evaluated at θ∗ and project the limiting trajectory for the parameters onto the plane spanned by the top q1 and bottom q30 eigenvectors to maximize the illustrated anisotropy with our estimates. We sample the train and test loss in this subspace for a region around the projected trajectory. Additionally, using the hyperparameters of the optimization, the eigenvalues ρ1 and ρ30, and the estimate for the mean µ = θ∗−H−1∗ g∗ (g∗ is the gradient evaluated at θ∗), we also sample from the modified loss equation (10) in the same region. Figure 3 shows the projected parameter trajectory on the sampled train, test and modified losses. Contour lines of both the train and test loss exhibit anisotropic structure, with sharper curvature along eigenvector q1 compared to eigenvector q30, as expected. However, as predicted, the trajectory appears to cover both directions equally. 
This striking isotropy of the trajectory within a highly anisotropic slice of the loss landscape indicates qualitatively that the trajectory evolves in a modified isotropic loss landscape. Implicit velocity regularization. A second qualitative prediction of the theory is that the velocity is regulated by the inverse Hessian of the training loss. Of course there are no explicit terms in either the train or test losses that depend on the velocity. Yet, the modified loss contains a component, Ψv ∝ vᵀH−1v, that only depends on the velocities This additional term can be understood as a form of implicit regularization on the velocity trajectory. Indeed, when we project the velocity trajectory onto the plane spanned by the q1 and q30 eigenvectors, as shown in Fig. 4, we see that the trajectory closely resembles the curvature of the inverse Hessian H−1. The modified loss is effectively penalizing SGD for moving in eigenvectors of the Hessian with small eigenvalues. A similar qualitative effect was recently proposed by Barrett and Dherin [21] as a consequence of the discretization error due to finite learning rates. Phase space oscillations. A final implication of the theory is that at stationarity the network is in broken detailed balance leading to non-zero probability currents flowing through phase space: Jss(θ, v) = [ v − 2η(1+β) (H + λI) (θ − µ) ] pss. (11) These probability currents encourage oscillatory dynamics in the phase space planes characterized by the eigenvectors of the Hessian, at rates proportional to their eigenvalues. We consider the same projected trajectory of the ResNet-18 model visualized in figures 3 and 4, but plot the trajectory in phase space for the two eigenvectors q1 and q30 separately. Shown in Fig. 5, we see that both trajectories look like noisy clockwise rotations. Qualitatively, the trajectories for the different eigenvectors appear to be rotating at different rates. The integral curves of the stationary probability current are one-dimensional paths confined to level sets of the modified loss. These paths might cross themselves, in which case they are limit cycles, or they could cover the entire surface of the level sets, in which case they are space-filling curves. This distinction depends on the relative frequencies of the oscillations, as determined by the pairwise 1To estimate the eigenvectors of H∗ we use subspace iteration, and limit ourselves to 30 eigenvectors to constrain computation time. See appendix H for details. ratios of the eigenvalues of the Hessian. For real-world datasets, with a large spectrum of incommensurate frequencies, we expect to be in the latter setting, thus contradicting the suggestion that SGD in deep networks converges to limit cycles, as claimed in Chaudhari and Soatto [33]. 8 UNDERSTANDING THE DIFFUSIVE BEHAVIOUR OF THE LIMITING DYNAMICS Taken together the empirical results shown in section 7 indicate that many of the same qualitative behaviors of SGD identified theoretically for linear regression are evident in the limiting dynamics of neural networks. Can this theory quantitatively explain the results we identified in section 2? Constant instantaneous speed. As noted in section 2, we observed that at the end of training, across various architectures, the squared norm of the local displacement ‖δt‖22 remains essentially constant. Assuming the limiting dynamics are described by the stationary solution the expectation of the local displacement is Ess [ ‖δt‖2 ] = η2 S(1− β2)σ 2tr (H) , (12) as derived in appendix G. 
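For deep networks σ2 and tr(H) are not directly available, as discussed next, but in the linear regression setting every quantity in equation (12) is known, so the prediction can be checked by direct simulation. A minimal sketch with hypothetical data and hyperparameters; the measured and predicted values should agree up to small discretization corrections:

```python
import numpy as np

rng = np.random.default_rng(5)
d, N = 10, 2000
eta, beta, lam, S, sigma = 0.01, 0.9, 0.0, 32, 0.5

# Linear regression data generated as y_i = x_i^T theta_bar + sigma * noise
X = rng.normal(size=(N, d))
theta_bar = rng.normal(size=d)
Y = X @ theta_bar + sigma * rng.normal(size=N)
H = X.T @ X / N

# Run SGD with momentum long enough to reach the stationary distribution
theta, v = theta_bar.copy(), np.zeros(d)
burn_in, n_meas = 5000, 20000
deltas = []
for k in range(burn_in + n_meas):
    idx = rng.choice(N, size=S, replace=False)
    g = X[idx].T @ (X[idx] @ theta - Y[idx]) / S
    v = beta * v - g - lam * theta
    step = eta * v
    theta = theta + step
    if k >= burn_in:
        deltas.append(np.sum(step**2))           # ||delta_k||^2

measured = np.mean(deltas)
predicted = eta**2 / (S * (1 - beta**2)) * sigma**2 * np.trace(H)
print("measured  E||delta||^2 :", measured)
print("predicted (eq. 12)     :", predicted)
```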
We cannot test this prediction directly as we do not know σ2 and computing tr(H) is computationally prohibitive. However, we can estimate σ2tr(H) by resuming training for a model, measuring the average ‖δt‖2, and then inverting equation (12). Using this single estimate, we find that for a sweep of models with varying hyperparameters, equation (12) accurately predicts their instantaneous speed. Indeed, Fig. 6 shows an exact match between the empirics and theory, which strongly suggests that despite changing hyperparameters at the end of training, the model remains in the same quadratic basin. Exponent of anomalous diffusion. The expected value for the global displacement under the stationary solution can also be analytically expressed in terms of the optimization hyperparameters and the eigendecomposition of the Hessian as, Ess [ ‖∆t‖2 ] = η2 S(1− β2)σ 2 ( tr (H) t+ 2t t∑ k=1 ( 1− k t ) m∑ l=1 ρlCl(k) ) , (13) where Cl(k) is a trigonometric function describing the velocity of a harmonic oscillator with damping ratio ζl = (1 − β)/ √ 2η(1 + β) (pl + λ), see appendix G for details. As shown empirically in section 2, the squared norm ‖∆t‖2 monotonically increases as a power law in the number of steps, suggesting its expectation is proportional to tc for some unknown, constant c. The exponent c determines the regime of diffusion for the process. When c = 1, the process corresponds to standard Brownian diffusion. For c > 1 or c < 1 the process corresponds to anomalous super-diffusion or sub-diffusion respectively. Unfortunately, it is not immediately clear how to extract the explicit exponent c from equation (13). However, by exploring the functional form of Cl(k) and its relationship to the hyperparameters of optimization through the damping ratio ζl, we can determine overall trends in the diffusion exponent c. Akin to how the exponent c determines the regime of diffusion, the damping ratio ζl determines the regime for the harmonic oscillator describing the stationary velocity-velocity correlation in the lth eigenvector of the Hessian. When ζl = 1, the oscillator is critically damped implying the velocity correlations converge to zero as quickly as possible. In the extreme setting of Cl(k) = 0 for all l, k, then equation (13) simplifies to standard Brownian diffusion, Ess [ ‖∆t‖2 ] ∝ t. When ζl > 1, the oscillator is overdamped implying the velocity correlations dampen slowly and remain positive even over long temporal lags. Such long lasting temporal correlations in velocity lead to faster global displacement. Indeed, in the extreme setting of Cl(k) = 1 for all l, k, then equation (13) simplifies to a form of anomalous super-diffusion, Ess [ ‖∆t‖2 ] ∝ t2. When ζl < 1, the oscillator is underdamped implying the velocity correlations will oscillate quickly between positive and negative values. Indeed, the only way equation (13) could describe anomalous sub-diffusion is if Cl(k) took on negative values for certain l, k. Using the same sweep of models described previously, we can empirically confirm that the optimization hyperparameters each influence the diffusion exponent c. As shown in Fig. 6, the learning rate, batch size, and momentum can each independently drive the exponent c into different regimes of anomalous diffusion. Notice how the influence of the learning rate and momentum on the diffusion exponent c closely resembles their respective influences on the damping ratio ζl. 
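Since the damping ratio is a simple function of the hyperparameters and the Hessian eigenvalues, its qualitative dependence is easy to tabulate. A minimal sketch with hypothetical eigenvalues and learning rates:

```python
import numpy as np

def damping_ratio(eta, beta, lam, rho):
    """zeta_l = (1 - beta) / sqrt(2 * eta * (1 + beta) * (rho_l + lam))."""
    return (1 - beta) / np.sqrt(2 * eta * (1 + beta) * (rho + lam))

rhos = np.array([1e-3, 1e-1, 1.0, 10.0])        # hypothetical Hessian eigenvalues
for eta in [0.01, 0.1, 1.0]:
    z = damping_ratio(eta, beta=0.9, lam=1e-4, rho=rhos)
    regimes = ["over" if zi > 1 else "under" for zi in z]
    print(f"eta={eta:5.2f}  zeta={np.round(z, 3)}  regimes={regimes}")
```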
Interestingly, a larger learning rate leads to underdamped oscillations, and the resultant temporal velocities’ anti-correlations reduce the exponent of anomalous diffusion. Thus contrary to intuition, a larger learning rate actually leads to slower global transport in parameter space. The batch size on the other hand, has no influence on the damping ratio, but leads to an interesting, non-monotonic influence on the diffusion exponent. Overall, the hyperparameters of optimization and eigenspectrum of the Hessian all conspire to govern the degree of anomalous diffusion at the end of training. 9 DISCUSSION Through combined empirics and theory based on statistical physics, we uncovered an intricate interplay between the optimization hyperparameters, structure in the gradient noise, and the Hessian matrix at the end of training. Significance. The significance of our work lies in (1) the identification/verification of multiple empirical phenomena (constant instantaneous speed, anomalous diffusion in global displacement, isotropic parameter exploration despite anisotopic loss, velocity regularization, and slower global parameter exploration with faster learning rates) present in the limiting dynamics of deep neural networks, (2) the emphasis on studying the dynamics in velocity space in addition to parameter space, and (3) concrete quantitative as well as qualitative predictions of an SDE based theory that we empirically verified in deep networks trained on large scale datasets (indeed some of the above nontrivial phenomena were predictions of this theory). Of course, these contributions directly build upon a series of related works studying the immensely complex process of deep learning. To this end, we further clarify the originality of our contributions with respect to some relevant works. Originality. The empirical phenomena we present provide novel insight with respect to the works of Wan et al. [13], Hoffer et al. [18], and Chen et al. [15]. We observe that all parameters in the network (not just those with scale symmetry) move at a constant instantaneous speed at the end of training and diffuse anomalously at rates determined by the hyperparameters of optimization. In contrast to the work by Liu et al. [35], we modeled the entire SGD process as an OU process which allows us to provide insight into the transient dynamics and identify oscillations in parameter and velocity space. We build on the theoretical framework used by Chaudhari and Soatto [33] and provide explicit expressions for the limiting dynamics in the simplified linear regression setting and conclude that the oscillations present in the limiting dynamics are more likely to be space-filling curves (and not limit cycles) in deep learning due to many incommensurate oscillations. Overall, by identifying key phenomena, explaining them in a simpler setting, deriving predictions of new phenomena, and providing evidence for these predictions at scale, we are furthering the scientific study of deep learning. We hope our newly derived understanding of the limiting dynamics of SGD, and its dependence on various important hyperparameters like batch size, learning rate, and momentum, can serve as a basis for future work that can turn these insights into algorithmic gains. A MODELING SGD WITH AN SDE As explained in section 4, in order to understand the dynamics of stochastic gradient descent we build a continuous Langevin equation in phase space modeling the effect of discrete updates and stochastic batches simultaneously. 
A.1 MODELING DISCRETIZATION To model the discretization effect we assume that the system of update equations (2) is actually a discretization of some unknown ordinary differential equation. To uncover this ODE, we combine the two update equations in (2), by incorporating a previous time step θk−1, and rearrange into the form of a finite difference discretization, as shown in equation (??). Like all discretizations, the Euler discretizations introduce error terms proportional to the step size, which in this case is the learning rate η. Taylor expanding θk+1 and θk−1 around θk, its easy to show that both Euler discretizations introduce a second-order error term proportional to η2 θ̈. θk+1 − θk η = θ̇ + η 2 θ̈ +O(η2), θk − θk−1 η = θ̇ − η 2 θ̈ +O(η2). Notice how the momentum coefficient β ∈ [0, 1] regulates the amount of backward Euler incorporated into the discretization. When β = 0, we remove all backward Euler discretization leaving just the forward Euler discretization. When β = 1, we have equal amounts of backward Euler as forward Euler resulting in a central second-order discretization2 as noticed in [19]. A.2 MODELING STOCHASTICITY In order to model the effect of stochastic batches, we first model a batch gradient with the following assumption: Assumption 1 (CLT). We assume the batch gradient is a noisy version of the true gradient such that gB(θ)− g(θ) is a Gaussian random variable with mean 0 and covariance 1SΣ(θ). The two conditions needed for the CLT to hold are not exactly met in the setting of SGD. Independent and identically distributed. Generally we perform SGD by making a complete pass through the entire dataset before using a sample again which introduces a weak dependence between samples. While the covariance matrix without replacement more accurately models the dependence between samples within a batch, it fails to account for the dependence between batches. Finite variance. A different line of work has questioned the Gaussian assumption entirely because of the need for finite variance random variables. This work instead suggests using the generalized central limit theorem implying the noise would be a heavy-tailed α-stable random variable [29]. Thus, the previous assumption is implicitly assuming the i.i.d. and finite variance conditions apply for large enough datasets and small enough batches. Under the CLT assumption, we must also replace the Euler discretizations with Euler–Maruyama discretizations. For a general stochastic process, dXt = µdt+ σdWt, the Euler–Maruyama method extends the Euler method for ODEs to SDEs, resulting in the update equation Xk+1 = Xk + ∆tµ+√ ∆tσξ, where ξ ∼ N (0, 1). Notice, the key difference is that if the temporal step size is ∆t = η, then the noise is scaled by the square root √ η. In fact, the main argument against modeling SGD with an SDE, as nicely explained in Yaida [28], is that most SDE approximations simultaneously assume that ∆t → 0+, while maintaining that the square root of the learning rate √η is finite. However, by modeling the discretization and stochastic effect simultaneously we can avoid this argument, bringing us to our second assumption: Assumption 2 (SDE). We assume the underdamped Langevin equation (3) accurately models the trajectory of the network driven by SGD through phase space such that θ(ηk) ≈ θk and v(ηk) ≈ vk. This approach of modeling discretization and stochasticity simultaneously is called stochastic modified equations, as further explained in Li et al. [22]. 
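The covariance scaling in Assumption 1, and, anticipating appendix B, the approximation Σ(θ) ≈ σ2H near the minimum of a linear regression problem, can both be probed on synthetic data. A minimal sketch with hypothetical data, noise level, and batch sizes; the trace of the minibatch-gradient covariance should scale roughly as 1/S, and S times that covariance should approximate σ2H up to sampling noise and a small finite-population correction from sampling without replacement:

```python
import numpy as np

rng = np.random.default_rng(6)
d, N, sigma = 5, 4000, 0.3
X = rng.normal(size=(N, d))
theta_bar = rng.normal(size=d)
Y = X @ theta_bar + sigma * rng.normal(size=N)
H = X.T @ X / N

def minibatch_grad(theta, S):
    idx = rng.choice(N, size=S, replace=False)
    return X[idx].T @ (X[idx] @ theta - Y[idx]) / S

full_grad = X.T @ (X @ theta_bar - Y) / N
for S in [8, 32, 128]:
    # Sample many minibatch gradients at theta_bar and estimate their covariance
    G = np.stack([minibatch_grad(theta_bar, S) - full_grad for _ in range(4000)])
    cov = G.T @ G / len(G)
    Sigma_hat = S * cov                          # rescale by S to estimate Sigma(theta)
    rel_err = np.linalg.norm(Sigma_hat - sigma**2 * H) / np.linalg.norm(sigma**2 * H)
    print(f"S={S:4d}  tr(Cov)={np.trace(cov):.4e}  ||S*Cov - sigma^2 H|| / ||sigma^2 H|| = {rel_err:.3f}")
```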
2The difference between a forward Euler and backward Euler discretization is a second-order central discretization, ( θk+1−θk η ) − ( θk−θk−1 η ) = η ( θk+1−2θk+θk−1 η2 ) = ηθ̈ +O(η2). B STRUCTURE IN THE COVARIANCE OF THE GRADIENT NOISE As we’ve mentioned before, SGD introduces highly structured noise into an optimization process, often assumed to be an essential ingredient for its ability to avoid local minima. Assumption 5 (Covariance Structure). We assume the covariance of the gradient noise is proportional to the Hessian of the quadratic loss Σ(θ) = σ2H where σ ∈ R+ is some unknown scalar. In the setting of linear regression, this is a very natural assumption. If we assume the classic generative model for linear regression data yi = x ᵀ i θ̄+σ where, θ̄ ∈ Rd is the true model and ∼ N (0, 1), then provably Σ(θ) ≈ σ2H . Proof. We can estimate the covariance as Σ(θ) ≈ 1N ∑N i=1 gig ᵀ i − ggᵀ. Near stationarity ggᵀ 1 N ∑N i=1 gig ᵀ i , and thus, Σ(θ) ≈ 1 N N∑ i=1 gig ᵀ i . Under the generative model yi = x ᵀ i θ̄ + σ where ∼ N (0, 1) and σ ∈ R+, then the gradient gi is gi = (x ᵀ i (θ − θ̄)− σ )xi, and the matrix gig ᵀ i is gig ᵀ i = (x ᵀ i (θ − θ̄)− σ )2(xixᵀi ). Assuming θ ≈ θ̄ at stationarity, then (xᵀi (θ − θ̄)− σ )2 ≈ σ2. Thus, Σ(θ) ≈ σ 2 N N∑ i=1 xix ᵀ i = σ2 N XᵀX = σ2H Also notice that weight decay is independent of the data or batch and thus simply shifts the gradient distribution, but leaves the covariance of the gradient noise unchanged. While the above analysis is in the linear regression setting, for deep neural networks it is reasonable to make the same assumption. See the appendix of Jastrzębski et al. [12] for a discussion on this assumption in the non-linear setting. Recent work by Ali et al. [32] also studies the dynamics of SGD (without momentum) in the setting of linear regression. This work, while studying the classic first-order stochastic differential equation, made a point to not introduce an assumption on the diffusion matrix. In particular, they make the point that even in the setting of linear regression, a constant covariance matrix will fail to capture the actual dynamics. To illustrate this point they consider the univariate responseless least squares problem, minimize θ∈R 1 2n n∑ i=1 (xiθ) 2. As they explain, the SGD update for this problem would be θk+1 = θk − η S (∑ i∈B xi ) θk = k∏ i=1 (1− η( 1S ∑ i∈B xi))θ0, from which they conclude for a small enough learning rate η, then with probability one θk → 0. They contrast this with the Ornstein-Uhlenbeck process given by a constant covariance matrix where while the mean for θk converges to zero its variance converges to a positive constant. So is this discrepancy evidence that an Ornstein-Uhlenbeck process with a constant covariance matrix fails to capture the updates of SGD? In many ways this problem is not a simple example, rather a pathological edge case. Consider the generative model that would give rise to this problem, y = 0x+ 0ξ = 0. In otherwords, the true model θ̄ = 0 and the standard deviation for the noise σ = 0. This would imply by the assumption used in our paper that there would be zero diffusion and the resulting SDE would simplify to a deterministic ODE that exponentially converges to zero. C A QUADRATIC LOSS AT THE END OF TRAINING Assumption 4 (Quadratic Loss). 
We assume that at the end of training the loss for a neural network can be approximated by the quadratic loss L(θ) = (θ − µ)ᵀ ( H 2 ) (θ − µ), where H 0 is the training loss Hessian and µ is some unknown mean vector, corresponding to a local minimum. This assumption has been amply used in previous works such as Mandt et al. [31], Jastrzębski et al. [12], and Poggio et al. [50]. Particularly, Mandt et al. [31] discuss how this assumption makes sense for smooth loss functions for which the stationary solution to the stochastic process reaches a deep local minimum from which it is difficult to escape. It is a well-studied fact, both empirically and theoretically, that the Hessian is low-rank near local minima as noted by Sagun et al. [51], and Kunin et al. [20]. This degeneracy results in flat directions of equal loss. Kunin et al. [20] discuss how differentiable symmetries, architectural features that keep the loss constant under certain weight transformations, give rise to these flat directions. Importantly, the Hessian and the covariance matrix share the same null space, and thus we can always restrict ourselves to the image space of the Hessian, where the drift and diffusion matrix will be full rank. Further discussion on the relationship between the Hessian and the covariance matrix can be found in Thomas et al. [52]. It is also a well known empirical fact that even at the end of training the Hessian can have negative eigenvalues [41]. This empirical observation is at odds with our assumption that the Hessian is positive semi-definite H 0. Further analysis is needed to alleviate this inconsistency. D SOLVING AN ORNSTEIN-UHLENBECK PROCESS WITH ANISOTROPIC NOISE We will study the multivariate Ornstein-Uhlenbeck process described by the stochastic differential equation dXt = A(µ−Xt)dt+ √ 2κ−1DdWt X0 = x0, (14) whereA ∈ Sm++ is a positive definite drift matrix, µ ∈ Rm is a mean vector, κ ∈ R+ is some positive constant, and D ∈ Sm++ is a positive definite diffusion matrix. This OU process is unique in that it is one of the few SDEs we can solve explicitly. We can derive an expression for XT as, XT = e −ATx0 + ( I − e−AT ) µ+ ∫ T 0 eA(t−T ) √ 2κ−1DdWt. (15) Proof. Consider the function f(t, x) = eAtx where eA is a matrix exponential. Then by Itô’s Lemma3 we can evaluate the derivative of f(t,Xt) as df(t,Xt) = ( AeAtXt + e AtA(µ−Xt) ) dt+ eAt √ 2κ−1DdWt = AeAtµdt+ eAt √ 2κ−1DdWt Integrating this expression from t = 0 to t = T gives f(T,XT )− f(0, X0) = ∫ T 0 AeAtµdt+ ∫ T 0 eAt √ 2κ−1DdWt eATXT − x0 = ( eAT − I ) µ+ ∫ T 0 eAt √ 2κ−1DdWt which rearranged gives the expression for XT . From this expression it is clear that XT is a Gaussian process. The mean of the process is E [XT ] = e −ATx0 + ( I − e−AT ) µ, (16) and the covariance and cross-covariance of the process are Var(XT ) = κ −1 ∫ T 0 eA(t−T )2DeA ᵀ(t−T )dt, (17) Cov(XT , XS) = κ −1 ∫ min(T,S) 0 eA(t−T )2DeA ᵀ(t−S)dt. (18) These last two expressions are derived by Itô Isometry4. D.1 THE LYAPUNOV EQUATION We can explicitly solve the integral expressions for the covariance and cross-covariance exactly by solving for the unique matrix B ∈ Sm++ that solves the Lyapunov equation, AB +BAᵀ = 2D. 
(19) If B solves the Lyapunov equation, notice d dt ( eA(t−T )BeA ᵀ(t−S) ) = eA(t−T )ABeA ᵀ(t−S) + eA(t−T )BAᵀeA ᵀ(t−S) = eA(t−T )2DeA ᵀ(t−S) Using this derivative, the integral expressions for the covariance and cross-covariance simplify as, Var(XT ) = κ −1 ( B − e−ATBe−AᵀT ) , (20) Cov(XT , XS) = κ −1 ( B − e−ATBe−AᵀT ) eA ᵀ(T−S), (21) where we implicitly assume T ≤ S. 3Itô’s Lemma states that for any Itô drift-diffusion process dXt = µtdt + σtdWt and twice differentiable scalar function f(t, x), then df(t,Xt) = ( ft + µtfx + σ2t 2 fxx ) dt+ σtfxdWt. 4Itô Isometry states for any standard Itô process Xt, then E [(∫ t 0 XtdWt )2] = E [∫ t 0 X2t dt ] . D.2 DECOMPOSING THE DRIFT MATRIX While the Lyapunov equation simplifies the expressions for the covariance and cross-covariance, it does not explain how to actually solve for the unknown matrix B. Following a method proposed by Kwon et al. [48], we will show how to solve for B explicitly in terms of the drift A and diffusion D. The drift matrix A can be uniquely decomposed as, A = (D +Q)U (22) whereD is our symmetric diffusion matrix,Q is a skew-symmetric matrix (i.e. Q = −Qᵀ), and U is a positive definite matrix. Using this decomposition, then B = U−1, solves the Lyapunov equation. Proof. Plug B = U−1 into the left-hand side of equation (19), AU−1 + U−1Aᵀ = (D +Q)UU−1 + U−1U(D −Q) = (D +Q) + (D −Q) = 2D Here we used the symmetry of A,D,U and the skew-symmetry of Q. All that is left is to do is solve for the unknown matricesQ and U . First notice the following identity, AD −DA = QA+AQ (23) Proof. Multiplying A = (D +Q)U on the right by (D −Q) gives, A(D −Q) = (D +Q)U(D −Q) = (D +Q)Aᵀ, which rearranged and using A = Aᵀ gives the desired equation. Let V ΛV ᵀ be the eigendecomposition of A and define the matrices D̃ = V ᵀDV and Q̃ = V ᵀQV . These matrices observe the following relationship, Q̃ij = λi − λj ρi + λj D̃ij . (24) Proof. Replace A in the previous equality with its eigendecompsoition, V ΛV ᵀD −DV ΛV ᵀ = QV ΛV ᵀ + V ΛV ᵀQ. Multiply this equation on the right by V and on the left by V ᵀ, ΛD̃ − D̃Λ = Q̃Λ + ΛQ̃. Looking at this equality element-wise and using the fact that Λ is diagonal gives the scalar equality for any i, j, (λi − λj)D̃ij = (λi + λj)Q̃ij , which rearranged gives the desired expression. Thus, Q and U are given by, Q = V Q̃V ᵀ, U = (D +Q)−1A. (25) This decomposition always holds uniquely when A,D 0, as λi−λjλi+λj exists and (D +Q) is invertible. See [48] for a discussion on the singularities of this decomposition. D.3 STATIONARY SOLUTION Using the Lyapunov equation and the drift decomposition, then XT ∼ pT , where pT = N ( e−ATx0 + ( I − e−AT ) µ, κ−1 ( U−1 − e−ATU−1e−AᵀT )) . (26) In the limit as T →∞, then e−AT → 0 and pT → pss where pss = N ( µ, κ−1U−1 ) . (27) Similarly, the cross-covariance converges to the stationary cross-covariance, Covss(XT , XS) = κ −1BeA ᵀ(T−S). (28) E A VARIATIONAL FORMULATION OF THE OU PROCESS WITH ANISOTROPIC NOISE In this section we will describe an alternative, variational, route towards solving the dynamics of the OU process studied in appendix D. Let Φ : Rn → R be an arbitrary, non-negative potential and consider the stochastic differential equation describing the Langevin dynamics of a particle in this potential field, dXt = −∇Φ(Xt)dt+ √ 2κ−1D(Xt)dWt, X0 = x0, (29) where D(Xt) is an arbitrary, spatially-dependent, diffusion matrix, κ is a temperature constant, and x0 ∈ Rm is the particle’s initial position. 
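As an aside (not part of the paper), the following minimal numerical sketch checks the machinery of appendix D: it builds Q and U from a random symmetric positive definite drift A and diffusion D via equations (24) and (25), verifies that B = U⁻¹ solves the Lyapunov equation (19), and confirms by simulation of the OU process (14) that the stationary covariance is κ⁻¹U⁻¹ as in equation (27). All matrix sizes and constants are arbitrary illustration values.

import numpy as np
from scipy.linalg import solve_continuous_lyapunov

rng = np.random.default_rng(0)
m, kappa = 3, 5.0

# Random symmetric positive definite drift A and diffusion D, arbitrary mean mu.
A = rng.standard_normal((m, m)); A = A @ A.T + m * np.eye(m)
D = rng.standard_normal((m, m)); D = D @ D.T + m * np.eye(m)
mu = rng.standard_normal(m)

# Decomposition A = (D + Q) U via equations (24) and (25).
lam, V = np.linalg.eigh(A)
D_tilde = V.T @ D @ V
Q_tilde = (lam[:, None] - lam[None, :]) / (lam[:, None] + lam[None, :]) * D_tilde
Q = V @ Q_tilde @ V.T                                   # skew-symmetric
U = np.linalg.solve(D + Q, A)                           # symmetric positive definite
B = np.linalg.inv(U)

print(np.allclose(A, (D + Q) @ U))                      # the decomposition holds
print(np.allclose(A @ B + B @ A.T, 2 * D))              # B solves the Lyapunov equation (19)
print(np.allclose(B, solve_continuous_lyapunov(A, 2 * D)))

# Long Euler-Maruyama simulation of dX = A (mu - X) dt + sqrt(2 kappa^{-1} D) dW.
dt, n_steps, n_paths = 1e-3, 5_000, 20_000
L = np.linalg.cholesky(2 * D / kappa)
X = np.tile(mu, (n_paths, 1))                           # start at the mean, only the covariance matters
for _ in range(n_steps):
    X = X + dt * (mu - X) @ A.T + np.sqrt(dt) * rng.standard_normal((n_paths, m)) @ L.T
print(np.round(np.cov(X.T), 2), np.round(B / kappa, 2)) # empirical covariance vs kappa^{-1} U^{-1}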
The Fokker-Planck equation describes the time evolution for the probability distribution p of the particle’s position such that p(x, t) = P(Xt = x). The FP equation is the partial differential equation5, ∂tp = ∇ · ( ∇Φ(Xt)p+ κ−1∇ · (D(Xt)p) ) , p(x, 0) = δ(x0), (30) where ∇· denotes the divergence and δ(x0) is a dirac delta distribution centered at the initialization x0. To assist in the exploration of the FP equation we define the vector field, J(x, t) = −∇Φ(Xt)p−∇ · (D(Xt)p) , (31) which is commonly referred to as the probability current. Notice, that this gives an alternative expression for the FP equation, ∂tp = −∇·J , demonstrating that J(x, t) defines the flow of probability mass through space and time. This interpretation is especially useful for solving for the stationary solution pss, which is the unique distribution that satisfies, ∂tpss = −∇ · Jss = 0, (32) where Jss is the probability current for pss. The stationary condition can be obtained in two distinct ways: 1. Detailed balance. This is when Jss(x) = 0 for all x ∈ Ω. This is analogous to reversibility for discrete Markov chains, which implies that the probability mass flowing from a state i to any state j is the same as the probability mass flowing from state j to state i. 2. Broken detailed balance. This is when ∇ · Jss(x) = 0 but Jss(x) 6= 0 for all x ∈ Ω. This is analogous to irreversibility for discrete Markov chains, which only implies that the total probability mass flowing out of state i equals to the total probability mass flowing into state i. The distinction between these two cases is critical for understanding the limiting dynamics of the process. E.1 THE VARIATIONAL FORMULATION OF THE FOKKER-PLANCK EQUATION WITH ISOTROPIC DIFFUSION We will now consider the restricted setting of standard, isotropic diffusion (D = I). It is easy enough to check that in this setting the stationary solution is pss(x) = e−κΦ(x) Z , Z = ∫ Ω e−κΦ(x)dx, (33) where pss is called a Gibbs distribution and Z is the partition function. Under this distribution, the stationary probability current is zero (Jss(x) = 0) and thus the process is in detailed balance. Interestingly, the Gibbs distribution pss has another interpretation as the unique minimizer of the the Gibbs free energy functional, F (p) = E [Φ]− κ−1H(p), (34) where E [Φ] is the expectation of the potential Φ under the distribution p and H(p) = − ∫ Ω p(x)log(p(x))dx is the Shannon entropy of p. 5This PDE is also known as the Forward Kolmogorov equation. Proof. To prove that indeed pss is the unique minimizer of the Gibbs free energy functional, consider the following equivalent expression F (p) = ∫ Ω p(x)Φ(x)dx+ κ−1 ∫ Ω p(x)log(p(x))dx = κ−1 ∫ Ω p(x) (log(p(x))− log(pss(x))) dx− κ−1 ∫ Ω log(Z) = κ−1DKL(p ‖ pss)− κ−1log(Z) From this expressions, it is clear that the Kullback–Leibler divergence is uniquely minimized when p = pss. In other words, with isotropic diffusion the stationary solution pss can be thought of as the limiting distribution given by the Fokker-Planck equation or the unique minimizer of an energetic-entropic functional. Seminal work by Jordan et al. [53] deepened this connection between the Fokker-Planck equation and the Gibbs free energy functional. In particular, their work demonstrates that the solution p(x, t) to the Fokker-Planck equation is the Wasserstein gradient flow trajectory on the Gibbs free energy functional. Steepest descent is always defined with respect to a distance metric. 
For example, the update equation, xk+1 = xk − η∇Φ(xk), for classic gradient descent on a potential Φ(x), can be formulated as the solution to the minimization problem xk+1 = argminxηΦ(x) + 1 2d(x, xk) 2 where d(x, xk) = ‖x− xk‖ is the Euclidean distance metric. Gradient flow is the continuous-time limit of gradient descent where we take η → 0+. Similarly, Wasserstein gradient flow is the continuous-time limit of steepest descent optimization defined by the Wasserstein metric. The Wasserstein metric is a distance metric between probability measures defined as, W 22 (µ1, µ2) = inf p∈Π(µ1,µ2) ∫ Rn×Rn |x− y|2p(dx, dy), (35) where µ1 and µ2 are two probability measures on Rn with finite second moments and Π(µ1, µ2) defines the set of joint probability measures with marginals µ1 and µ2. Thus, given an initial distribution and learning rate η, we can use the Wasserstein metric to derive a sequence of distributions minimizing some functional in the sense of steepest descent. In the continuous-time limit as η → 0+ this sequence defines a continuous trajectory of probability distributions minimizing the functional. Jordan et al. [54] proved, through the following theorem, that this process applied to the Gibbs free energy functional converges to the solution to the Fokker-Planck equation with the same initialization: Theorem 1 (JKO). Given an initial condition p0 with finite second moment and an η > 0, define the iterative scheme pη with iterates defined by pk = argminpη ( E [Φ]− κ−1H(p) ) +W 22 (p, p k−1). As η → 0+, then pη → p weakly in L1 where p is the solution to the Fokker-Planck equation with the same initial condition. See [54] for further explanation and [53] for a complete derivation. E.2 EXTENDING THE VARIATIONAL FORMULATION TO THE SETTING OF ANISOTROPIC DIFFUSION While the JKO theorem provides a very powerful lens through which to view solutions to the FokkerPlanck equation, and thus distributions for particles governed by Langevin dynamics, it only applies in the very restricted setting of isotropic diffusion. In this section we will review work by Chaudhari and Soatto [33] extending the variational interpretation to the setting of anisotropic diffusion. Consider when D(Xt) is an anisotropic, spatially-dependent diffusion matrix. In this setting, the original Gibbs distribution given in equation (33) does not necessarily satisfy the stationarity condition equation (32). In fact, it is not immediately clear what the stationary solution is or if the dynamics even have one. Thus, Chaudhari and Soatto [33] make the following assumption: Stationary Assumption. Assume there exists a unique distribution pss that is the stationary solution to the Fokker-Planck equation irregardless of initial conditions. Under this assumption we can implicitly define the potential Ψ(x) = −κ−1log(pss(x)). Using this modified potential we can express the stationary solution as a Gibbs distribution, pss(x) ∝ e−κΨ(x). (36) Under this implicit definition we can define the stationary probability current as Jss(x) = j(x)pss(x) where j(x) = −∇Φ(x)− κ−1∇ ·D(x) +D(x)∇Ψ(x). (37) The vector field j(x) reflects the discrepancy between the original potential Φ and the modified potential Ψ according to the diffusion D(x). Notice that in the isotropic case, when D(x) = I , then Φ = Ψ and j(x) = 0. Chaudhari and Soatto [33] introduce another property of j(x) through assumption, Conservative Assumption. Assume that the force j(x) is conservative (i.e. ∇ · j(x) = 0). 
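To make the Conservative Assumption concrete, here is a small numerical illustration (not from the paper) of a force of the form j(x) = −QU(x − µ) with Q skew-symmetric and U symmetric positive definite, the family constructed explicitly in section E.3 below: such a linear field is divergence-free and everywhere orthogonal to ∇Ψ for the quadratic potential Ψ(x) = (x − µ)ᵀ(U/2)(x − µ). The matrices here are random stand-ins.

import numpy as np

rng = np.random.default_rng(0)
m = 4
U = rng.standard_normal((m, m)); U = U @ U.T + m * np.eye(m)    # symmetric positive definite
Q = rng.standard_normal((m, m)); Q = Q - Q.T                     # skew-symmetric
mu = rng.standard_normal(m)

grad_psi = lambda x: U @ (x - mu)          # gradient of Psi(x) = (x - mu)^T (U / 2) (x - mu)
j = lambda x: -Q @ U @ (x - mu)            # candidate conservative force

# For a linear field j(x) = M (x - mu) the divergence is trace(M); here trace(-Q U) = 0.
print(np.isclose(np.trace(-Q @ U), 0.0))
# j is everywhere orthogonal to grad Psi, since U Q U is skew-symmetric.
for _ in range(3):
    x = rng.standard_normal(m)
    print(np.isclose(j(x) @ grad_psi(x), 0.0))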
Using this assumption, Chaudhari and Soatto [33] extends the variational formulation provided by the JKO theorem to the anisotropic setting, Theorem 2 (CS). Given an initial condition p0 with finite second moment, then the energeticentropic functional, F (p) = Ep [Ψ(x)]− κ−1H(p) monotonically decreases throughout the trajectory given by the solution to the Fokker-Planck equation with the given initial condition. In other words, the Fokker-Plank equation (30) with anisotropic diffusion can be interpreted as minimizing the expectation of a modified loss Ψ, while being implicitly regularized towards distributions that maximize entropy. The derivation requires we assume a stationary solution pss exists and that the force j(x) implicitly defined by pss is conservative. However, rather than implicitly define Ψ(x) and j(x) through assumption, if we can explicitly construct a modified loss Ψ(x) such that the resulting j(x) satisfies certain conditions, then the stationary solution exists and the variational formulation will apply as well. We formalize this statement with the following theorem, Theorem 3 (Explicit Construction). If there exists a potential Ψ(x) such that either j(x) = 0 or ∇ · j(x) = 0 and ∇Ψ(x) ⊥ j(x), then pss is the Gibbs distribution ∝ e−κΨ(x) and the variational formulation given in Theorem 2 applies. E.3 APPLYING THE VARIATIONAL FORMULATION TO THE OU PROCESS Through explicit construction we now seek to find analytic expressions for the modified loss Ψ(x) and force j(x) hypothesised by Chaudhari and Soatto [33] in the fundamental setting of an OU process with anisotropic diffusion, as described in section D. We assume the diffusion matrix is anisotropic, but spatially independent, ∇ · D(x) = 0. For the OU process the original potential generating the drift is Φ(x) = (x− µ)ᵀA2 (x− µ). (38) Recall, that in order to extend the variational formulation we must construct some potential Ψ(x) such that∇ · j(x) = 0 and∇Ψ ⊥ j(x). It is possible to construct Ψ(x) using the unique decomposition of the drift matrix A = (D +Q)U discussed in appendix D. Define the modified potential, Ψ(x) = (x− µ)ᵀ U2 (x− µ). (39) Using this potential, the force j(x) is j(x) = −A(x− µ) +DU(x− µ) = −QU(x− µ). (40) Notice that j(x) is conservative, ∇ · j(x) = ∇ · −QU (x− µ) = 0 because Q is skew-symmetric. Additionally, j(x) is orthogonal, j(x)ᵀ∇Ψ(x) = (x− µ)ᵀ UᵀQU (x− µ) = 0, again because Q is skew-symmetric. Thus, we have determined a modified potential Ψ(x) that results in a conservative orthogonal force j(x) satisfying the conditions for Theorem 3. Indeed the stationary Gibbs distribution given by Theorem 3 agrees with equation (27) derived via the first and second moments in appendix D, e−κΨ(x) ∝ N ( µ, κ−1U−1 ) In addition to the variational formulation, this interpretation further details explicitly the stationary probability current, Jss(x) = j(x)pss, and whether or not the the stationary solution is in broken detailed balance. F EXPLICIT EXPRESSIONS FOR THE OU PROCESS GENERATED BY SGD We will now consider the specific OU process generated by SGD with linear regression. Here we repeat the setup as explained in section 5. Let X ∈ RN×d, Y ∈ RN be the input data, output labels respectively and θ ∈ Rd be our vector of regression coefficients. The least squares loss is the convex quadratic loss L(θ) = 12N ‖Y −Xθ‖2 with gradient g(θ) = Hθ − b, where H = XᵀXN and b = X ᵀY N . 
Plugging this expression for the gradient into the underdamped Langevin equation (3), and rearranging terms, results in the multivariate Ornstein-Uhlenbeck (OU) process, d [ θt vt ] = A ([ µ 0 ] − [ θt vt ]) dt+ √ 2κ−1DdWt, (41) where A and D are the drift and diffusion matrices respectively, A = [ 0 −I 2 η(1+β) (H + λI) 2(1−β) η(1+β)I ] , D = [ 0 0 0 2(1−β)η(1+β)Σ(θ) ] , (42) κ = S(1− β2) is a temperature constant, and µ = (H + λI)−1b is the ridge regression solution. F.1 SOLVING FOR THE MODIFIED LOSS AND CONSERVATIVE FORCE In order to apply the expressions derived for a general OU process in appendix D and E, we must first decompose the drift as A = (D + Q)U . Under the simplification Σ(θ) = σ2H discussed in appendix B, then the matrices Q and U , as defined below, achieve this, Q = [ 0 −σ2H σ2H 0 ] , U = [ 2 η(1+β)σ2H −1 (H + λI) 0 0 1σ2H −1 ] . (43) Using these matrices we can now derive explicit expressions for the modified loss Ψ(θ, v) and conservative force j(θ, v). First notice that the least squares loss with L2 regularization is proportional to the convex quadratic, Φ(θ) = (θ − µ)ᵀ(H + λI)(θ − µ). (44) The modified loss Ψ is composed of two terms, one that only depends on the position, Ψθ(θ) = (θ − µ)ᵀ ( H−1(H + λI) η(1 + β)σ2 ) (θ − µ) , (45) and another that only depends on the velocity, Ψv(v) = v ᵀ ( H−1 σ2 ) v. (46) The conservative force j(θ, v) is j(θ, v) = [ v − 2η(1+β) (H + λI) (θ − µ) ] , (47) and thus the stationary probability current is Jss(θ, v) = j(θ, v)pss. F.2 DECOMPOSING THE TRAJECTORY INTO THE EIGENBASIS OF THE HESSIAN As shown in appendix D, the temporal distribution for the OU process at some time T ≥ 0 is, pT ([ θ v ]) = N ( e−AT [ θ0 v0 ] + ( I − e−AT ) [µ 0 ] , κ−1 ( U−1 − e−ATU−1e−AᵀT )) . Here we will now use the eigenbasis {q1, . . . , qm} of the Hessian with eigenvalues {ρ1, . . . , ρm} to derive explicit expressions for the mean and covariance of the process through time. Deterministic component. We can rearrange the expectation as E [[ θ v ]] = [ µ 0 ] + e−AT [ θ0 − µ v0 ] . Notice that the second, time-dependent term is actually the solution to the system of ODEs ˙[θ v ] = −A [ θ v ] with initial condition [θ0 − µ v0]ᵀ. This system of ODEs can be block diagonalized by factorizing A = OSOᵀ where O is orthogonal and S is block diagonal defined as O = q1 0 . . . qm 0 . . . 0 q1 . . . 0 qm S = 0 −1 2 η(1+β) (ρ1 + λ) 2(1−β) η(1+β) . . . . . . . . . 0 −1 2 η(1+β) (ρm + λ) 2(1−β) η(1+β) In otherwords in the plane spanned by [qi 0] ᵀ and [0 qi] ᵀ the system of ODEs decouples into the 2D system ˙[ai bi ] = [ 0 1 − 2η(1+β) (ρi + λ) − 2(1−β) η(1+β) ] [ ai bi ] This system has a simple physical interpretation as a damped harmonic oscillator. If we let bi = ȧi, then we can unravel this system into the second order ODE äi + 2 1− β η(1 + β) ȧi + 2 η(1 + β) (ρi + λ)ai = 0 which is in standard form (i.e. ẍ + 2γẋ + ω2x = 0) for γ = 1−βη(1+β) and ωi = √ 2 η(1+β) (ρi + λ). 
Let ai(0) = 〈θ0 − µ, qi〉 and bi(0) = 〈v0, qi〉, then the solution in terms of γ and ωi is ai(t) = e−γt ( ai(0) cosh (√ γ2 − ω2i t ) + γai(0)+bi(0)√ γ2−ω2i sinh (√ γ2 − ω2i t )) γ > ωi e−γt(ai(0) + (γai(0) + bi(0))t) γ = ωi e−γt ( ai(0) cos (√ ω2i − γ2t ) + γai(0)+bi(0)√ ω2i−γ2 sin (√ ω2i − γ2t )) γ < ωi Differentiating these equations gives us solutions for bi(t) bi(t) = e−γt ( bi(0) cosh (√ γ2 − ω2i t ) − ω 2 i ai(0)+γbi(0)√ γ2−ω2i sinh (√ γ2 − ω2i t )) γ > ωi e−γt ( bi(0)− ( ω2i ai(0) + γbi(0) ) t ) γ = ωi e−γt ( bi(0) cos (√ ω2i − γ2t ) − ω 2 i ai(0)+γbi(0)√ ω2i−γ2 sin (√ ω2i − γ2t )) γ < ωi Combining all these results, we can now analytically decompose the expectation as the sum, E [[ θ v ]] = [ µ 0 ] + m∑ i=1 ( ai(t) [ qi 0 ] + bi(t) [ 0 qi ]) . Intuitively, this equation describes a damped rotation (spiral) around the OLS solution in the planes defined by the the eigenvectors of the Hessian at a rate proportional to the respective eigenvalue. Stochastic component. Using the previous block diagonal decomposition A = OSOᵀ we can simplify the variance as Var ([ θ v ]) = κ−1 ( U−1 − e−ATU−1e−AᵀT ) = κ−1 ( U−1 − e−OSOᵀTU−1e−OSᵀOᵀT ) = κ−1O ( OᵀU−1O − e−ST (OᵀU−1O)e−ST ᵀ ) Oᵀ Interestingly, the matrix OᵀU−1O is also block diagonal, OᵀU−1O = Oᵀ [ η(1+β)σ2 2 (H + λI) −1 H 0 0 σ2H ] O = η(1+β)σ2 2 ρ1 ρ1+λ 0 0 σ2ρ1 . . . . . . . . . η(1+β)σ2 2 ρm ρm+λ 0 0 σ2ρm Thus, similar to the mean, we can simply consider the variance in each of the planes spanned by [qi 0] ᵀ and [0 qi] ᵀ. If we define the block matrices, Di = [ ησ2 2S(1−β) ρi ρi+λ 0 0 σ 2 S(1−β2)ρi ] Si = [ 0 1 − 2η(1+β) (ρi + λ) − 2(1−β) η(1+β) ] then the projected variance matrix in this plane simplifies as Var ([ qᵀi θ qᵀi v ]) = Di − e−SiTDie−SiT ᵀ Using the solution to a damped harmonic osccilator discussed previously, we can express the matrix exponential e−SiT explicitly in terms of γ = 1−βη(1+β) and ωi = √ 2 η(1+β) (ρi + λ). If we let αi =√ |γ2 − ω2i |, then the matrix exponential is e−Sit = e−γt [ cosh (αit) + γ αi sinh (αit) 1 αi sinh (αit) −ω 2 i αi sinh (αit) cosh (αit)− γαi sinh (αit) ] γ > ωi e−γt [ 1 + γt t −ω2i t 1− γt ] γ = ωi e−γt [ cos (αit) + γ αi sin (αit) 1 αi sin (αit) −ω 2 i αi sin (αit) cos (αit)− γαi sin (αit) ] γ < ωi G ANALYZING PROPERTIES OF THE STATIONARY SOLUTION Assuming the stationary solution is given by equation (??) we can solve for the expected value of the norm of the local displacement and gain some intuition for the expected value of the norm of global displacement. G.1 INSTANTANEOUS SPEED Ess [ ‖δk‖2 ] = Ess [ ‖θk+1 − θk‖2 ] = η2Ess [ ‖vk+1‖2 ] = η2tr ( Ess [ vk+1v ᵀ k+1 ]) = η2tr (Varss (vk+1) + Ess [vk+1] Ess [vk+1] ᵀ ) = η2tr ( κ−1U−1 ) = η2 S(1− β2) tr ( σ2H ) Note that this follows directly from the definition of δk in equation (1) and the mean and variance of the stationary solution in equation ( ??), as well as the follow-up derivation in appendix F. G.2 ANOMALOUS DIFFUSION Notice, that the global movement ∆t = θt−θ0 can be broken up into the sum of the local movements ∆t = ∑t i=1 δi, where δi = θi − θi−1. Applying this decomposition, Ess [ ‖∆t‖2 ] = Ess ∣∣∣∣∣ ∣∣∣∣∣ t∑ i=1 δi ∣∣∣∣∣ ∣∣∣∣∣ 2 = t∑ i=1 Ess [ ‖δi‖2 ] + t∑ i 6=j Ess [〈δi, δj〉] As we solved for previously, Ess [ ‖δi‖2 ] = η2Ess [ ‖vi‖2 ] = η2tr (Varss(vi)) = η2 S(1− β2) tr ( σ2H ) . By a similar simplification, we can express the second term in terms of the stationary crosscovariance, Ess [〈δi, δj〉] = η2Ess [〈vi, vj〉] = η2tr (Covss(vi, vj)) . 
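As an aside, the closed-form oscillator expressions above can be checked numerically. The sketch below (not from the paper; the hyperparameters and Hessian eigenvalue are arbitrary) compares the underdamped (γ < ωi) formulas for ai(t) and bi(t) against the matrix exponential of the corresponding 2 × 2 system, written here so that d/dt [a, b] = −S [a, b] reproduces ä + 2γȧ + ω²a = 0.

import numpy as np
from scipy.linalg import expm

eta, beta, lam, rho = 0.1, 0.9, 1e-2, 2.5            # arbitrary hyperparameters and Hessian eigenvalue
gamma = (1 - beta) / (eta * (1 + beta))
omega = np.sqrt(2 / (eta * (1 + beta)) * (rho + lam))
a0, b0 = 1.0, -0.5                                    # projections <theta_0 - mu, q_i> and <v_0, q_i>

S = np.array([[0.0, -1.0],
              [omega**2, 2 * gamma]])                 # d/dt [a, b] = -S [a, b]

def closed_form(t):
    """Underdamped (gamma < omega) solution for a_i(t) and b_i(t)."""
    alpha = np.sqrt(omega**2 - gamma**2)
    a = np.exp(-gamma * t) * (a0 * np.cos(alpha * t) + (gamma * a0 + b0) / alpha * np.sin(alpha * t))
    b = np.exp(-gamma * t) * (b0 * np.cos(alpha * t) - (omega**2 * a0 + gamma * b0) / alpha * np.sin(alpha * t))
    return np.array([a, b])

for t in (0.1, 1.0, 5.0):
    print(np.allclose(expm(-S * t) @ np.array([a0, b0]), closed_form(t)))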
Thus, to simplify this expression we just need to consider the velocity-velocity covariance Covss(vi, vj). At stationarity, the cross-covariance for the system in phase space, zi = [θi vi] is Covss(zi, zj) = κ −1U−1e−A ᵀ|i−j| where κ = S(1− β2), and U = [ 2 η(1+β)σ2H −1 (H + λI) 0 0 1σ2H −1 ] A = [ 0 −I 2 η(1+β) (H + λI) 2(1−β) η(1+β)I ] As discussed when solving for the mean of the OU trajectory, the drift matrix A can be block diagonalized as A = OSOᵀ where O is orthogonal and S is block diagonal defined as O = q1 0 . . . qm 0 . . . 0 q1 . . . 0 qm , S = 0 −1 2 η(1+β) (ρ1 + λ) 2(1−β) η(1+β) . . . . . . . . . 0 −1 2 η(1+β) (ρm + λ) 2(1−β) η(1+β) . Notice also that O diagonalizes U−1 such that, Λ = OᵀU−1O = η(1+β)σ2 2 ρ1 ρ1+λ 0 0 σ2ρ1 . . . . . . . . . η(1+β)σ2 2 ρm ρm+λ 0 0 σ2ρm . Applying these decompositions, properties of matrix exponentials, and the cyclic invariance of the trace, allows us to express the trace of the cross-covariance as tr (Covss(zi, zj)) = κ −1tr ( U−1e−A ᵀ|i−j| ) = κ−1tr ( U−1Oe−S ᵀ|i−j|Oᵀ ) = κ−1tr ( Λe−S ᵀ|i−j| ) = κ−1 n∑ k=1 tr ( Λke −Sᵀk |i−j| ) where Λk and Sk are the blocks associated with each eigenvector of H . As solved for previously in the variance of the OU process, we can express the matrix exponential e−Sk|i−j| explicitly in terms of γ = 1−βη(
1. What are the main contributions and novel aspects introduced by the paper on deep neural networks' long-time dynamics?
2. What are the strengths of the proposed approach, particularly regarding the characterization of diffusion and displacement?
3. Do you have any concerns or criticisms regarding the paper, such as the boldness of the title, the assumptions made, or the lack of new insights provided?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper The paper studies the long-time dynamics of deep neural networks. The author(s) (1) show some empirical findings related to the mean square displacement, (2) model SGD as an underdamped Langevin Equation, relate it to an Ornstein Uhlenbeck process in a linear regression setting, and use it to study the limiting dynamics of SGD, (3) use the Fokker-Planck formalism to show that the steady state weight distribution obeys a modified loss which is isotropic in the absence of L2-like regularization, (4) provide empirical evidence that their findings are relevant also beyond the context of linear regression. According to the authors, these are the novelties provided by the paper: (a) "We find empirically that long after performance has converged, networks continue to move through parameter space by a process of anomalous diffusion in which distance travelled grows as a power law in the number of gradient updates with a nontrivial exponent." (b) "We reveal an intricate interaction between the hyperparameters of optimization, the structure in the gradient noise, and the Hessian matrix at the end of training that explains this anomalous diffusion." (c) "we show that the key ingredient driving these dynamics is not the original training loss, but rather the combination of a modified loss, which implicitly regularizes the velocity, and probability currents, which cause oscillations in phase space" (d) "We identify qualitative and quantitative predictions of this theory in the dynamics of a ResNet-18 model trained on ImageNet." (e) "We uncover a mechanistic origin for the anomalous limiting dynamics of deep neural networks trained with SGD" (f) "one of the most significant results of our analysis is that, depending on the relationship between the gradient noise covariance and the Hessian, the stationary distribution in phase space will generically violate detailed balance" (g) 'Natural intuitions, such as “the network converges in parameter space” or “the network stays within a local region”, are wrong' Review The paper is well written; a very pleasant reading. I find interesting the characterization of diffusion and displacement as a function of hyperparameters. Although it is not new that SGD obeys a modified loss, I find it insightful to see that this modified loss can be isotropic despite the loss being anisotropy, and to learn under which conditions this occurs. I also appreciate the clarity of the assumptions that are needed to obtain every single result, while maintaining the discussion at an intuitive level. Major Criticisms. Result (a) is not new. The anomalous diffusion at the end of training was already shown in arXiv:1803.06969, exactly in the terms defined in this manuscript ("distance travelled grows as a power law in the number of gradient updates with a nontrivial exponent"). Result (c). That SGD obeys a modified loss was also stated in previous literature, such as Ref.[21] or, in a different framework, arxiv:1803.01927. The title is very bold, inviting a rethinking of the limiting dynamics. I would say that this is because the authors assume that the deep learning community thinks that [statement (g) up here] “the network converges in parameter space”, or “the network stays within a local region”. These are however very naive intuitions that nobody with sufficient experience in deep learning theory would believe. Also, one of their main claims is [statement (f)] that detailed balance does not hold, implicitly suggesting that the results of e.g. 
Refs.[16,23] could be flawed. Again, nobody believes that detailed balance holds for SGD dynamics -not even at late stages of training-, since detailed balance is a sufficient condition (not even necessary, see e.g. https://aip.scitation.org/doi/full/10.1063/1.4863991) for equiibrium, but steady state distributions generally do not satisfy it. There is a whole section in Ref.[16] devoted to showing that SGD is out of equilibrium. Another example is arXiv:1803.06969, where they state that at the end of learning the system diffuses at the bottom of the landscape with a time-dependent diffusion constant (so this is clearly not an equilibrium process, and the network does not stay confined within a local region). In other words, I don't think that statements (f) and (g) add anything new to the current knowledge. The analysis of the limiting cycles is nice, but I do not see what we learn more than was already presented in Ref.[21]. The only novelty would be that the paths at the end of learning are not limit cycles, but rather space filling curves. However, in the way as it is presented, this looks more like a conjecture than an actual result. As for the description in terms of velocities, this is nice, but I don't find it surprising that the velocities oscillate, especially in the directions of the top 30 eigenvectors. A system confined by quadratic walls will have an oscillating velocity, as any harmonic oscillator. I would find it more surprising if the authors measured the same kind of oscillation in the direction of the bottom eigenvectors. Minor comments: The authors project the limiting trajectory for the parameters onto the plane spanned by the top q 1 and bottom q 30 eigenvectors. I assume that q1 and q30 were chosen in order to maximize the anisotropy on the related plane, in order to show that the trajectory is instead isotropic. I suggest to explain that this is the reason of the choice, since at first it wasn't clear to me why this was being done. Fig.3. It's nice to see that the trajectory is isotropic when no regularization is used. But why is the trajectory not centered around the minimum of the modified loss? Also, it looks like the minimum of test and modified losses coincide. Why is this? typo in section 8: trigonomentric Implicit regularization of the velocity trajectory. I assume that the authors mean that the velocity appears in the modified loss as an L2 regularization term. Is this right? Is there any further consequence to this? In figure 6 - bottom, the dots are measured from fits of the trajectory. The dashed line is c=1 diffusion, but this is a little confusing, because the dashed line in the top three plots is the theoretical prediction of Eq11.
ICLR
Title Rethinking the limiting dynamics of SGD: modified loss, phase space oscillations, and anomalous diffusion Abstract In this work we explore the limiting dynamics of deep neural networks trained with stochastic gradient descent (SGD). As observed previously, long after performance has converged, networks continue to move through parameter space by a process of anomalous diffusion in which distance travelled grows as a power law in the number of gradient updates with a nontrivial exponent. We reveal an intricate interaction between the hyperparameters of optimization, the structure in the gradient noise, and the Hessian matrix at the end of training that explains this anomalous diffusion. To build this understanding, we first derive a continuoustime model for SGD with finite learning rates and batch sizes as an underdamped Langevin equation. We study this equation in the setting of linear regression, where we can derive exact, analytic expressions for the phase space dynamics of the parameters and their instantaneous velocities from initialization to stationarity. Using the Fokker-Planck equation, we show that the key ingredient driving these dynamics is not the original training loss, but rather the combination of a modified loss, which implicitly regularizes the velocity, and probability currents, which cause oscillations in phase space. We identify qualitative and quantitative predictions of this theory in the dynamics of a ResNet-18 model trained on ImageNet. Through the lens of statistical physics, we uncover a mechanistic origin for the anomalous limiting dynamics of deep neural networks trained with SGD. 1 INTRODUCTION Deep neural networks have demonstrated remarkable generalization across a variety of datasets and tasks. Essential to their success has been a collection of good practices on how to train these models with stochastic gradient descent (SGD). Yet, despite their importance, these practices are mainly based on heuristic arguments and trial and error search. Without a general theory connecting the hyperparameters of optimization, the architecture of the network, and the geometry of the dataset, theory-driven design of deep learning systems is impossible. Existing theoretical works studying this interaction have leveraged the random structure of neural networks at initialization [1, 2, 3] and in their infinite width limits in order to study their dynamics [4, 5, 6, 7, 8]. Here we take a different approach and study the training dynamics of pre-trained networks that are ready to be used for inference. By leveraging the mathematical structures found at the end of training, we uncover an intricate interaction between the hyperparameters of optimization, the structure in the gradient noise, and the Hessian matrix that corroborates previously identified empirical behavior such as anomalous limiting dynamics. Not only is understanding the limiting dynamics of SGD a critical stepping stone to building a complete theory for the learning dynamics of neural networks, but recently there have been a series of works demonstrating that the performance of pre-trained networks can be improved through averaging and ensembling [9, 10, 11]. Combining empirical exploration and theoretical tools from statistical physics, we identify and uncover a mechanistic explanation for the limiting dynamics of neural networks trained with SGD. 2 DIFFUSIVE BEHAVIOR IN THE LIMITING DYNAMICS OF SGD A network that has converged in performance will continue to move through parameter space [12, 13, 14, 15]. 
To demonstrate this behavior, we resume training of pre-trained convolutional networks while tracking the network trajectory through parameter space. Let θ∗ ∈ Rm be the parameter vector for a pre-trained network and θk ∈ Rm be the parameter vector after k steps of resumed training. We track two metrics of the training trajectory, namely the local parameter displacement δk between consecutive steps, and the global displacement ∆k after k steps from the pre-trained initialization: δk = θk − θk−1, ∆k = θk − θ∗. (1) As shown in Fig. 1, neither of these differences converge to zero across a variety of architectures, indicating that despite performance convergence, the networks continue to move through parameter space, both locally and globally. The squared norm of the local displacement ‖δk‖22 remains near a constant value, indicating the network is essentially moving at a constant instantaneous speed. This observation is quite similar to the “equilibrium" phenomenon or “constant angular update" observed in Li et al. [17] and Wan et al. [13] respectively. However, these works only studied the displacement for parameters immediately preceding a normalization layer. The constant instantaneous speed behavior we observe is for all parameters in the model and is even present in models without normalization layers. While the squared norm of the local displacement is essentially constant, the squared norm of the global displacement ‖∆k‖22 is monotonically growing for all networks, implying even once trained, the network continues to diverge from where it has been. Indeed Fig. 1 indicates a power law relationship between global displacement and number of steps, given by ‖∆k‖22 ∝ kc. As we’ll see in section 8, this relationship is indicative of anomalous diffusion where c corresponds to the anomalous diffusion exponent. Standard Brownian motion corresponds to c = 1. Similar observation were made by Baity-Jesi et al. [14] who noticed distinct phases of the training trajectory evident in the dynamics of the global displacement and Chen et al. [15] who found that the exponent of diffusion changes through the course of training. A parallel observation is given by Hoffer et al. [18] for the beginning of training, where they measure the global dis- placement from the initialization of an untrained network and observe a rate ∝ log(k), a form of ultra-slow diffusion. These empirical observations raise the natural questions, where is the network moving to and why? To answer these questions we will build a diffusion based theory of SGD, study these dynamics in the setting of linear regression, and use lessons learned in this fundamental setting to understand the limiting dynamics of neural networks. 3 RELATED WORK There is a long line of literature studying both theoretically and empirically the learning dynamics of deep neural networks trained with SGD. Our analysis and experiments build upon this literature. Continuous models for SGD. Many works consider how to improve the classic gradient flow model for SGD to more realistically reflect momentum [19], discretization due to finite learning rates [20, 21], and stochasticity due to random batches [22, 23]. One line of work has studied the dynamics of networks in their infinite width limits through dynamical mean field theory [24, 25, 26, 27], while a different approach has used stochastic differential equations (SDEs) to model SGD directly, the approach we take in this work. However, recently, the validity of this approach has been questioned. 
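As an illustration of how the diffusion exponent c can be extracted in practice, the sketch below (not from the paper) fits ‖∆k‖² ∝ k^c by linear regression in log-log space; the trajectory here is a synthetic random walk standing in for a sequence of parameter snapshots, for which the fitted exponent should be close to 1.

import numpy as np

rng = np.random.default_rng(0)

def diffusion_exponent(thetas):
    """Fit ||Delta_k||^2 ~ k^c in log-log space, given an array of parameter snapshots (steps x dims)."""
    disp_sq = np.sum((thetas[1:] - thetas[0]) ** 2, axis=1)     # ||theta_k - theta_*||^2 for k >= 1
    k = np.arange(1, len(thetas))
    c, _ = np.polyfit(np.log(k), np.log(disp_sq), 1)
    return c

# Sanity check on a synthetic random walk (standard Brownian motion), where c should be close to 1.
walk = np.cumsum(rng.standard_normal((10_000, 50)), axis=0)
trajectory = np.vstack([np.zeros(50), walk])                    # theta_* = 0 at step 0
print(diffusion_exponent(trajectory))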
The main argument, as nicely explained in Yaida [28], is that most SDE approximations simultaneously assume that ∆t → 0+, while maintaining that the learning rate η = ∆t is finite. The works Simsekli et al. [29] and Li et al. [30] have questioned the correctness of the using the central limit theorem (CLT) to model the gradient noise as Gaussian, arguing respectively that the heavy-tailed structure in the gradient noise and the weak dependence between batches leads the CLT to break down. In our work, we maintain the CLT assumption holds, which we discuss fur- ther in appendix A, but importantly we avoid the pitfalls of many previous SDE approximations by simultaneously modeling the effect of finite learning rates and stochasticity. Limiting dynamics. A series of works have applied SDE models of SGD to study the limiting dynamics of neural networks. In the seminal work by Mandt et al. [31], the limiting dynamics were modeled with a multivariate Ornstein-Uhlenbeck process by combining a first-order SDE model for SGD with assumptions on the geometry of the loss and covariance matrix for the gradient noise. This analysis was extended by Jastrzębski et al. [12] through additional assumptions on the covariance matrix to gain tractable insights and applied by Ali et al. [32] to the simpler setting of linear regression, which has a quadratic loss. A different approach was taken by Chaudhari and Soatto [33], which did not formulate the dynamics as an OU process, nor assume directly a structure on the loss or gradient noise. Rather, this analysis studied the same first-order SDE via the Fokker-Planck equation to propose the existence of a modified loss and probability currents driving the limiting dynamics, but did not provide explicit expressions. Our analysis deepens and combines ideas from all these works, where our key insight is to lift the dynamics into phase space. By studying the dynamics of the parameters and their velocities, and by applying the analysis first in the setting of linear regression where assumptions are provably true, we are able to identify analytic expressions and explicit insights which lead to concrete predictions and testable hypothesis. Stationary dynamics. A different line of work avoids modeling the limiting dynamics of SGD with an SDE and instead chooses to leverage the property of stationarity. These works [28, 34, 35, 36] assume that eventually the probability distribution governing the model parameters reaches stationarity such that the discrete SGD process is simply sampling from this distribution. Yaida [28] used this approach to derive fluctuation-dissipation relations that link measurable quantities of the parameters and hyperparameters of SGD. Liu et al. [35] used this approach to derive properties for the stationary distribution of SGD with a quadratic loss. Similar to our analysis, this work identifies that the stationary distribution for the parameters reflects a modified loss function dependent on the relationship between the covariance matrix of the gradient noise and the Hessian matrix for the original loss. Empirical exploration. Another set of works analyzing the limiting dynamics of SGD has taken a purely empirical approach. Building on the intuition that flat minima generalize better than sharp minima, Keskar et al. [37] demonstrated empirically that the hyperparameters of optimization influence the eigenvalue spectrum of the Hessian matrix at the end of training. 
Many subsequent works have studied the Hessian eigenspectrum during and at the end of training. Jastrzębski et al. [38], Cohen et al. [39] studied the dynamics of the top eigenvalues during training. Sagun et al. [40], Papyan [41], Ghorbani et al. [42] demonstrated the spectrum has a bulk of values near zero plus a small number of larger outliers. Gur-Ari et al. [43] demonstrated that the learning dynamics are constrained to the subspace spanned by the top eigenvectors, but found no special properties of the dynamics within this subspace. In our work we also determine that the top eigensubspace of the Hessian plays a crucial role in the limiting dynamics and by projecting the dynamics into this subspace in phase space, we see that the motion is not random, but consists of incoherent oscillations leading to anomalous diffusion. 4 MODELING SGD AS AN UNDERDAMPED LANGEVIN EQUATION Following the route of previous works [31, 12, 33] studying the limiting dynamics of neural networks, we first seek to model SGD as a continuous stochastic process. We consider a network parameterized by θ ∈ Rm, a training dataset {x1, . . . , xN} of size N , and a training loss L(θ) = 1N ∑N i=1 `(θ, xi) with corresponding gradient g(θ) = ∂L ∂θ . The state of the network at the kth step of training is defined by the position vector θk and velocity vector vk of the same dimension. The gradient descent update with learning rate η, momentum β, and weight decayλ is given by vk+1 = βvk − g(θk)− λθk, θk+1 = θk + ηvk+1, (2) where we initialize the network such that v0 = 0 and θ0 is the parameter initialization. In order to understand the dynamics of the network through position and velocity space, which we will refer to as phase space, we express these discrete recursive equations as the discretization of some unknown ordinary differential equation (ODE), sometimes referred to as a modified equation as in [44, 20]. While this ODE models the gradient descent process even at finite learning rates, it fails to account for the stochasticity introduced by choosing a random batch B of size S drawn uniformly from the set of N training points. This sampling yields the stochastic gradient gB(θ) = 1S ∑ i∈B∇`(θ, xi). To model this effect, we make the following assumption: Assumption 1 (CLT). We assume the batch gradient is a noisy version of the true gradient such that gB(θ)− g(θ) is a Gaussian random variable with mean 0 and covariance 1SΣ(θ). Incorporating this model of stochastic gradients into the previous finite difference equation and applying the stochastic counterparts to Euler discretizations, results in the standard drift-diffusion stochastic differential equation (SDE), referred to as an underdamped Langevin equation, d [ θ v ] = [ v − 2η(1+β) (g(θ) + λθ + (1− β)v) ] dt+ [ 0 0 0 2√ ηS(1+β) √ Σ(θ) ] dWt, (3) where Wt is a standard Wiener process. This is the continuous model we will study in this work: Assumption 2 (SDE). We assume the underdamped Langevin equation (3) accurately models the trajectory of the network driven by SGD through phase space such that θ(ηk) ≈ θk and v(ηk) ≈ vk. See appendix A for further discussion on the nuances of modeling SGD with an SDE. 5 LINEAR REGRESSION WITH SGD IS AN ORNSTEIN-UHLENBECK PROCESS Equipped with a model for SGD, we seek to understand its dynamics in the fundamental setting of linear regression, one of the few cases where we have a complete model for the interaction of the dataset, architecture, and optimizer. 
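For concreteness, here is a minimal transcription of update equation (2) (not from the paper; the toy quadratic loss, the additive Gaussian stand-in for batch noise, and the hyperparameter values are arbitrary).

import numpy as np

def sgd_momentum_step(theta, v, grad_batch, eta, beta, lam):
    """One step of equation (2): v_{k+1} = beta v_k - g_B(theta_k) - lambda theta_k; theta_{k+1} = theta_k + eta v_{k+1}."""
    v_next = beta * v - grad_batch(theta) - lam * theta
    return theta + eta * v_next, v_next

# Tiny illustration on an arbitrary quadratic loss with additive Gaussian "batch" noise.
rng = np.random.default_rng(0)
H_toy = np.diag([3.0, 0.5])                              # stand-in Hessian
grad_batch = lambda th: H_toy @ th + 0.1 * rng.standard_normal(2)

theta, v = np.array([1.0, 1.0]), np.zeros(2)
for _ in range(2_000):
    theta, v = sgd_momentum_step(theta, v, grad_batch, eta=0.05, beta=0.9, lam=0.0)
print(theta)                                             # hovers near the minimum at the origin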
Let X ∈ RN×d be the input data, Y ∈ RN be the output labels, and θ ∈ Rd be our vector of regression coefficients. The least squares loss is the convex quadratic lossL(θ) = 12N ‖Y −Xθ‖2 with gradient g(θ) = Hθ−b, whereH = X ᵀX N and b = XᵀY N . Plugging this expression for the gradient into the underdamped Langevin equation (3), and rearranging terms, results in the multivariate Ornstein-Uhlenbeck (OU) process, d [ θt vt ] = − [ 0 −I 2 η(1+β) (H + λI) 2(1−β) η(1+β)I ] ︸ ︷︷ ︸ A ([ θt vt ] − [ µ 0 ]) dt+ √ 2κ−1 √√√√√ [ 0 0 0 2(1−β)η(1+β)Σ(θ) ] ︸ ︷︷ ︸ D dWt, (4) where A and D are the drift and diffusion matrices respectively, κ = S(1− β2) is an inverse temperature constant, and µ = (H + λI)−1b is the ridge regression solution. The solution to an OU process is a Gaussian process. By solving for the temporal dynamics of the first and second moments of the process, we can obtain an analytic expression for the trajectory at any time t. In particular, we can decompose the trajectory as the sum of a deterministic and stochastic component defined by the first and second moments respectively. Deterministic component. Using the form of A we can decompose the expectation as a sum of harmonic oscillators in the eigenbasis {q1, . . . , qm} of the Hessian, E [[ θt vt ]] = [ µ 0 ] + m∑ i=1 ( ai(t) [ qi 0 ] + bi(t) [ 0 qi ]) . (5) Here the coefficients ai(t) and bi(t) depend on the optimization hyperparameters η, β, λ, S and the respective eigenvalue of the Hessian ρi as further explained in appendix F. We verify this expression nearly perfectly matches empirics on complex datasets under various hyperparameter settings as shown in Fig. 2. Stochastic component. The cross-covariance of the process between two points in time t ≤ s, is Cov ([ θt vt ] , [ θs vs ]) =κ−1 ( B−e−AtBe−Aᵀt ) eA ᵀ(t−s), (6) where B solves the Lyapunov equation AB +BAᵀ = 2D. In order to gain analytic expressions for B in terms of the optimization hyperparameters, eigendecomposition of the Hessian, and covariance of the gradient noise, we must introduce the following assumption: Assumption 3 (Simultaneously Diagonalizable). We assume the covariance of the gradient noise is spatially independent Σ(θ) = Σ and commutes with the Hessian HΣ = ΣH , therefore sharing a common eigenbasis. 6 UNDERSTANDING STATIONARITY VIA THE FOKKER-PLANCK EQUATION The OU process is unique in that it is one of the few SDEs which we can solve exactly. As shown in section 5, we were able to derive exact expressions for the dynamics of linear regression trained with SGD from initialization to stationarity by simply solving for the first and second moments. While the expression for the first moment provides an understanding of the intricate oscillatory relationship in the deterministic component of the process, the second moment, driving the stochastic component, is much more opaque. An alternative route to solving the OU process that potentially provides more insight is the Fokker-Planck equation. The Fokker-Planck (FP) equation is a PDE describing the time evolution for the probability distribution of a particle governed by Langevin dynamics. For an arbitrary potential Φ and diffusion matrix D, the Fokker-Planck equation (under an Itô integration prescription) is ∂tp = ∇ · ( ∇Φp+∇ · ( κ−1Dp ))︸ ︷︷ ︸ −J , (7) where p represents the time-dependent probability distribution, and J is a vector field commonly referred to as the probability current. 
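As a quick numerical sanity check (not part of the paper), the sketch below verifies equation (7) in one dimension: for an OU process with quadratic potential Φ(x) = a(x − µ)²/2 and constant diffusion D, the known Gaussian solution should satisfy the FP equation up to discretization error. All constants are arbitrary illustration values.

import numpy as np

a, mu, Dc, kappa, x0 = 1.5, 0.0, 0.4, 2.0, 1.0
x = np.linspace(-4, 4, 4001)
dx = x[1] - x[0]

def density(t):
    """Analytic Gaussian solution of the 1-D OU process started from a point mass at x0."""
    mean = mu + (x0 - mu) * np.exp(-a * t)
    var = Dc / (a * kappa) * (1 - np.exp(-2 * a * t))
    return np.exp(-(x - mean) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)

t, dt = 0.5, 1e-5
dp_dt = (density(t + dt) - density(t - dt)) / (2 * dt)              # left-hand side of the FP equation
p = density(t)
flux = a * (x - mu) * p + np.gradient(Dc * p, dx) / kappa           # grad(Phi) p + kappa^{-1} d/dx (D p)
rhs = np.gradient(flux, dx)                                         # divergence of the flux
print(np.max(np.abs(dp_dt - rhs)) / np.max(np.abs(dp_dt)))          # small relative residual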
The FP equation is especially useful for explicitly solving for the stationary solution, assuming one exists, of the Langevin dynamics. The stationary solution pss by definition obeys ∂tpss = 0 or equivalently ∇ · Jss = 0. From this second definition we see that there are two distinct settings of stationarity: detailed balance when Jss = 0 everywhere, or broken detailed balance when ∇ · Jss = 0 and Jss 6= 0. For a general OU process, the potential is a convex quadratic function Φ(x) = xᵀAx defined by the drift matrix A. When the diffusion matrix is isotropic (D ∝ I) and spatially independent (∇ · D = 0) the resulting stationary solution is a Gibbs distribution pss(x) ∝ e−κΦ(x) determined by the original loss Φ(x) and is in detailed balance. Lesser known properties of the OU process arise when the diffusion matrix is anisotropic or spatially dependent [45, 46]. In this setting the solution is still a Gaussian process, but the stationary solution, if it exists, is no longer defined by the Gibbs distribution of the original loss Φ(x), but actually a modified loss Ψ(x). Furthermore, the stationary solution may be in broken detailed balance leading to a non-zero probability current Jss(x). Depending on the relationship between the drift matrix A and the diffusion matrix D the resulting dynamics of the OU process can have very nontrivial behavior. In the setting of linear regression, anisotropy in the data distribution will lead to anisotropy in the gradient noise and thus an anisotropic diffusion matrix. This implies that for most datasets we should expect that the SGD trajectory is not driven by the original least squares loss, but by a modified loss and converges to a stationary solution with broken detailed balance, as predicted by Chaudhari and Soatto [33]. Using the explicit expressions for the drift A and diffusion D matrices we can compute analytically the modified loss and stationary probability current, Ψ(θ, v) = ([ θ v ] − [ µ 0 ])ᵀ( U 2 )([ θ v ] − [ µ 0 ]) , Jss(θ, v) = −QU ([ θ v ] − [ µ 0 ]) pss, (8) where Q is a skew-symmetric matrix and U is a positive definite matrix defined as, Q = [ 0 −Σ(θ) Σ(θ) 0 ] , U = [ 2 η(1+β)Σ(θ) −1 (H + λI) 0 0 Σ(θ)−1 ] . (9) These new fundamental matrices, Q and U , relate to the original drift A and diffusion D matrices through the unique decomposition A = (D + Q)U , introduced by Ao [47] and Kwon et al. [48]. Using this decomposition we can easily show that B = U−1 solves the Lyapunov equation and indeed the stationary solution pss is the Gibbs distribution defined by the modified loss Ψ(θ, v) in equation (8). Further, the stationary cross-covariance solved in section 5 reflects the oscillatory dynamics introduced by the stationary probability currents Jss(θ, v) in equation (8). Taken together, we gain the intuition that the limiting dynamics of SGD in linear regression are driven by a modified loss subject to oscillatory probability currents. 7 EVIDENCE OF A MODIFIED LOSS AND OSCILLATIONS IN DEEP LEARNING Does the theory derived in the linear regression setting (sections 5, 6) help explain the empirical phenomena observed in the non-linear setting of deep neural networks (section 2)? In order for the theory built in the previous sections to apply to the limiting dynamics of neural networks, we must introduce simplifying assumptions on the loss landscape and gradient noise at the end of training: Assumption 4 (Quadratic Loss). 
We assume that at the end of training the loss for a neural network can be approximated by the quadratic loss L(θ) = (θ − µ)ᵀ ( H 2 ) (θ − µ), where H 0 is the training loss Hessian and µ is some unknown mean vector, corresponding to a local minimum. Assumption 5 (Covariance Structure). We assume the covariance of the gradient noise is proportional to the Hessian of the quadratic loss Σ(θ) = σ2H where σ ∈ R+ is some unknown scalar. Under these simplifications, then the expressions derived in the linear regression setting would apply to the limiting dynamics of deep neural networks and depend only on quantities that we can easily estimate empirically. Of course, these simplifications are quite strong, but without arguing their theoretical validity, we can empirically test their qualitative implications: (1) a modified isotropic loss driving the limiting dynamics through parameter space, (2) implicit regularization of the velocity trajectory, and (3) oscillatory phase space dynamics determined by the Hessian eigen-structure. Modified loss. As discussed in section 6, due to the anisotropy of the diffusion matrix, the loss landscape driving the dynamics at the end of training is not the original training loss L(θ), but a modified loss Ψ(θ, v) in phase space. As shown in equation (8), the modified loss decouples into a term Ψθ that only depends on the parameters θ and a term Ψv that only depends on the velocities v. Under assumption 5, the parameter dependent component is proportional to the convex quadratic, Ψθ ∝ (θ − µ)ᵀ ( H−1(H + λI) η(1 + β) ) (θ − µ) . (10) This quadratic function has the same mean µ as the training loss, but a different curvature. Using this expression, notice that when λ ≈ 0, the modified loss is isotropic in the column space of H , regardless of what the nonzero eigenspectrum of H is. This striking prediction suggests that no matter how anisotropic the original training loss – as reflected by poor conditioning of the Hessian eigenspectrum – the training trajectory of the network will behave isotropically, since it is driven not by the original anisotropic loss, but a modified isotropic loss. We test this prediction by studying the limiting dynamics of a pre-trained ResNet-18 model with batch normalization that we continue to train on ImageNet according to the last setting of its hyperparameters [49]. Let θ∗ represent the initial pre-trained parameters of the network, depicted with the white dot in figures 3 and 4. We estimate1 the top thirty eigenvectors q1, . . . , q30 of the Hessian matrix H∗ evaluated at θ∗ and project the limiting trajectory for the parameters onto the plane spanned by the top q1 and bottom q30 eigenvectors to maximize the illustrated anisotropy with our estimates. We sample the train and test loss in this subspace for a region around the projected trajectory. Additionally, using the hyperparameters of the optimization, the eigenvalues ρ1 and ρ30, and the estimate for the mean µ = θ∗−H−1∗ g∗ (g∗ is the gradient evaluated at θ∗), we also sample from the modified loss equation (10) in the same region. Figure 3 shows the projected parameter trajectory on the sampled train, test and modified losses. Contour lines of both the train and test loss exhibit anisotropic structure, with sharper curvature along eigenvector q1 compared to eigenvector q30, as expected. However, as predicted, the trajectory appears to cover both directions equally. 
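The isotropy prediction of equation (10) is easy to check numerically. In the sketch below (not from the paper; the eigenvalue range and hyperparameters are arbitrary), an ill-conditioned Hessian yields a modified-loss curvature with condition number close to one once λ ≈ 0.

import numpy as np

rng = np.random.default_rng(0)
d, eta, beta, lam = 20, 0.1, 0.9, 0.0        # no weight decay, so H^{-1}(H + lam I) should be isotropic

# An ill-conditioned Hessian with eigenvalues spanning several orders of magnitude.
evals = np.logspace(-3, 2, d)
V = np.linalg.qr(rng.standard_normal((d, d)))[0]
H = V @ np.diag(evals) @ V.T

original_curvature = H + lam * np.eye(d)
modified_curvature = np.linalg.inv(H) @ (H + lam * np.eye(d)) / (eta * (1 + beta))

print(np.linalg.cond(original_curvature))    # ~1e5: highly anisotropic training loss
print(np.linalg.cond(modified_curvature))    # ~1: the modified loss is essentially isotropic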
This striking isotropy of the trajectory within a highly anisotropic slice of the loss landscape indicates qualitatively that the trajectory evolves in a modified isotropic loss landscape. Implicit velocity regularization. A second qualitative prediction of the theory is that the velocity is regulated by the inverse Hessian of the training loss. Of course there are no explicit terms in either the train or test losses that depend on the velocity. Yet, the modified loss contains a component, Ψv ∝ vᵀH−1v, that only depends on the velocities This additional term can be understood as a form of implicit regularization on the velocity trajectory. Indeed, when we project the velocity trajectory onto the plane spanned by the q1 and q30 eigenvectors, as shown in Fig. 4, we see that the trajectory closely resembles the curvature of the inverse Hessian H−1. The modified loss is effectively penalizing SGD for moving in eigenvectors of the Hessian with small eigenvalues. A similar qualitative effect was recently proposed by Barrett and Dherin [21] as a consequence of the discretization error due to finite learning rates. Phase space oscillations. A final implication of the theory is that at stationarity the network is in broken detailed balance leading to non-zero probability currents flowing through phase space: Jss(θ, v) = [ v − 2η(1+β) (H + λI) (θ − µ) ] pss. (11) These probability currents encourage oscillatory dynamics in the phase space planes characterized by the eigenvectors of the Hessian, at rates proportional to their eigenvalues. We consider the same projected trajectory of the ResNet-18 model visualized in figures 3 and 4, but plot the trajectory in phase space for the two eigenvectors q1 and q30 separately. Shown in Fig. 5, we see that both trajectories look like noisy clockwise rotations. Qualitatively, the trajectories for the different eigenvectors appear to be rotating at different rates. The integral curves of the stationary probability current are one-dimensional paths confined to level sets of the modified loss. These paths might cross themselves, in which case they are limit cycles, or they could cover the entire surface of the level sets, in which case they are space-filling curves. This distinction depends on the relative frequencies of the oscillations, as determined by the pairwise 1To estimate the eigenvectors of H∗ we use subspace iteration, and limit ourselves to 30 eigenvectors to constrain computation time. See appendix H for details. ratios of the eigenvalues of the Hessian. For real-world datasets, with a large spectrum of incommensurate frequencies, we expect to be in the latter setting, thus contradicting the suggestion that SGD in deep networks converges to limit cycles, as claimed in Chaudhari and Soatto [33]. 8 UNDERSTANDING THE DIFFUSIVE BEHAVIOUR OF THE LIMITING DYNAMICS Taken together the empirical results shown in section 7 indicate that many of the same qualitative behaviors of SGD identified theoretically for linear regression are evident in the limiting dynamics of neural networks. Can this theory quantitatively explain the results we identified in section 2? Constant instantaneous speed. As noted in section 2, we observed that at the end of training, across various architectures, the squared norm of the local displacement ‖δt‖22 remains essentially constant. Assuming the limiting dynamics are described by the stationary solution the expectation of the local displacement is Ess [ ‖δt‖2 ] = η2 S(1− β2)σ 2tr (H) , (12) as derived in appendix G. 
We cannot test this prediction directly as we do not know σ2 and computing tr(H) is computationally prohibitive. However, we can estimate σ2tr(H) by resuming training for a model, measuring the average ‖δt‖2, and then inverting equation (12). Using this single estimate, we find that for a sweep of models with varying hyperparameters, equation (12) accurately predicts their instantaneous speed. Indeed, Fig. 6 shows an exact match between the empirics and theory, which strongly suggests that despite changing hyperparameters at the end of training, the model remains in the same quadratic basin. Exponent of anomalous diffusion. The expected value for the global displacement under the stationary solution can also be analytically expressed in terms of the optimization hyperparameters and the eigendecomposition of the Hessian as, Ess [ ‖∆t‖2 ] = η2 S(1− β2)σ 2 ( tr (H) t+ 2t t∑ k=1 ( 1− k t ) m∑ l=1 ρlCl(k) ) , (13) where Cl(k) is a trigonometric function describing the velocity of a harmonic oscillator with damping ratio ζl = (1 − β)/ √ 2η(1 + β) (pl + λ), see appendix G for details. As shown empirically in section 2, the squared norm ‖∆t‖2 monotonically increases as a power law in the number of steps, suggesting its expectation is proportional to tc for some unknown, constant c. The exponent c determines the regime of diffusion for the process. When c = 1, the process corresponds to standard Brownian diffusion. For c > 1 or c < 1 the process corresponds to anomalous super-diffusion or sub-diffusion respectively. Unfortunately, it is not immediately clear how to extract the explicit exponent c from equation (13). However, by exploring the functional form of Cl(k) and its relationship to the hyperparameters of optimization through the damping ratio ζl, we can determine overall trends in the diffusion exponent c. Akin to how the exponent c determines the regime of diffusion, the damping ratio ζl determines the regime for the harmonic oscillator describing the stationary velocity-velocity correlation in the lth eigenvector of the Hessian. When ζl = 1, the oscillator is critically damped implying the velocity correlations converge to zero as quickly as possible. In the extreme setting of Cl(k) = 0 for all l, k, then equation (13) simplifies to standard Brownian diffusion, Ess [ ‖∆t‖2 ] ∝ t. When ζl > 1, the oscillator is overdamped implying the velocity correlations dampen slowly and remain positive even over long temporal lags. Such long lasting temporal correlations in velocity lead to faster global displacement. Indeed, in the extreme setting of Cl(k) = 1 for all l, k, then equation (13) simplifies to a form of anomalous super-diffusion, Ess [ ‖∆t‖2 ] ∝ t2. When ζl < 1, the oscillator is underdamped implying the velocity correlations will oscillate quickly between positive and negative values. Indeed, the only way equation (13) could describe anomalous sub-diffusion is if Cl(k) took on negative values for certain l, k. Using the same sweep of models described previously, we can empirically confirm that the optimization hyperparameters each influence the diffusion exponent c. As shown in Fig. 6, the learning rate, batch size, and momentum can each independently drive the exponent c into different regimes of anomalous diffusion. Notice how the influence of the learning rate and momentum on the diffusion exponent c closely resembles their respective influences on the damping ratio ζl. 
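The two estimates used in this section can be sketched compactly. The snippet below (with illustrative numbers, not the actual measurements) shows how σ²tr(H) is obtained by inverting equation (12) from a single reference run and reused to predict the instantaneous speed of other hyperparameter settings, and how a diffusion exponent c can be read off a log-log fit of ‖Δt‖² against t.

```python
import numpy as np

def sigma2_trH_from_speed(mean_sq_delta, eta, S, beta):
    # invert equation (12): E||delta||^2 = eta^2 / (S (1 - beta^2)) * sigma^2 tr(H)
    return mean_sq_delta * S * (1.0 - beta**2) / eta**2

def predicted_speed(sigma2_trH, eta, S, beta):
    return eta**2 / (S * (1.0 - beta**2)) * sigma2_trH

def diffusion_exponent(sq_displacement, t):
    # slope of log ||Delta_t||^2 against log t
    return np.polyfit(np.log(t), np.log(sq_displacement), 1)[0]

# Reference run: a measured mean squared local displacement at (eta, S, beta).
sigma2_trH = sigma2_trH_from_speed(mean_sq_delta=3.2e-4, eta=0.1, S=256, beta=0.9)

# Predict the instantaneous speed for a sweep of other hyperparameters.
for eta, S, beta in [(0.05, 256, 0.9), (0.1, 128, 0.9), (0.1, 256, 0.99)]:
    print(eta, S, beta, predicted_speed(sigma2_trH, eta, S, beta))

# Sanity check of the exponent fit on synthetic super-diffusive data with c = 1.5.
t = np.arange(1, 1000)
print(diffusion_exponent(5.0 * t**1.5, t))   # ~1.5
```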
Interestingly, a larger learning rate leads to underdamped oscillations, and the resultant temporal velocities’ anti-correlations reduce the exponent of anomalous diffusion. Thus contrary to intuition, a larger learning rate actually leads to slower global transport in parameter space. The batch size on the other hand, has no influence on the damping ratio, but leads to an interesting, non-monotonic influence on the diffusion exponent. Overall, the hyperparameters of optimization and eigenspectrum of the Hessian all conspire to govern the degree of anomalous diffusion at the end of training. 9 DISCUSSION Through combined empirics and theory based on statistical physics, we uncovered an intricate interplay between the optimization hyperparameters, structure in the gradient noise, and the Hessian matrix at the end of training. Significance. The significance of our work lies in (1) the identification/verification of multiple empirical phenomena (constant instantaneous speed, anomalous diffusion in global displacement, isotropic parameter exploration despite anisotopic loss, velocity regularization, and slower global parameter exploration with faster learning rates) present in the limiting dynamics of deep neural networks, (2) the emphasis on studying the dynamics in velocity space in addition to parameter space, and (3) concrete quantitative as well as qualitative predictions of an SDE based theory that we empirically verified in deep networks trained on large scale datasets (indeed some of the above nontrivial phenomena were predictions of this theory). Of course, these contributions directly build upon a series of related works studying the immensely complex process of deep learning. To this end, we further clarify the originality of our contributions with respect to some relevant works. Originality. The empirical phenomena we present provide novel insight with respect to the works of Wan et al. [13], Hoffer et al. [18], and Chen et al. [15]. We observe that all parameters in the network (not just those with scale symmetry) move at a constant instantaneous speed at the end of training and diffuse anomalously at rates determined by the hyperparameters of optimization. In contrast to the work by Liu et al. [35], we modeled the entire SGD process as an OU process which allows us to provide insight into the transient dynamics and identify oscillations in parameter and velocity space. We build on the theoretical framework used by Chaudhari and Soatto [33] and provide explicit expressions for the limiting dynamics in the simplified linear regression setting and conclude that the oscillations present in the limiting dynamics are more likely to be space-filling curves (and not limit cycles) in deep learning due to many incommensurate oscillations. Overall, by identifying key phenomena, explaining them in a simpler setting, deriving predictions of new phenomena, and providing evidence for these predictions at scale, we are furthering the scientific study of deep learning. We hope our newly derived understanding of the limiting dynamics of SGD, and its dependence on various important hyperparameters like batch size, learning rate, and momentum, can serve as a basis for future work that can turn these insights into algorithmic gains. A MODELING SGD WITH AN SDE As explained in section 4, in order to understand the dynamics of stochastic gradient descent we build a continuous Langevin equation in phase space modeling the effect of discrete updates and stochastic batches simultaneously. 
A.1 MODELING DISCRETIZATION To model the discretization effect we assume that the system of update equations (2) is actually a discretization of some unknown ordinary differential equation. To uncover this ODE, we combine the two update equations in (2), by incorporating a previous time step θk−1, and rearrange into the form of a finite difference discretization, as shown in equation (??). Like all discretizations, the Euler discretizations introduce error terms proportional to the step size, which in this case is the learning rate η. Taylor expanding θk+1 and θk−1 around θk, its easy to show that both Euler discretizations introduce a second-order error term proportional to η2 θ̈. θk+1 − θk η = θ̇ + η 2 θ̈ +O(η2), θk − θk−1 η = θ̇ − η 2 θ̈ +O(η2). Notice how the momentum coefficient β ∈ [0, 1] regulates the amount of backward Euler incorporated into the discretization. When β = 0, we remove all backward Euler discretization leaving just the forward Euler discretization. When β = 1, we have equal amounts of backward Euler as forward Euler resulting in a central second-order discretization2 as noticed in [19]. A.2 MODELING STOCHASTICITY In order to model the effect of stochastic batches, we first model a batch gradient with the following assumption: Assumption 1 (CLT). We assume the batch gradient is a noisy version of the true gradient such that gB(θ)− g(θ) is a Gaussian random variable with mean 0 and covariance 1SΣ(θ). The two conditions needed for the CLT to hold are not exactly met in the setting of SGD. Independent and identically distributed. Generally we perform SGD by making a complete pass through the entire dataset before using a sample again which introduces a weak dependence between samples. While the covariance matrix without replacement more accurately models the dependence between samples within a batch, it fails to account for the dependence between batches. Finite variance. A different line of work has questioned the Gaussian assumption entirely because of the need for finite variance random variables. This work instead suggests using the generalized central limit theorem implying the noise would be a heavy-tailed α-stable random variable [29]. Thus, the previous assumption is implicitly assuming the i.i.d. and finite variance conditions apply for large enough datasets and small enough batches. Under the CLT assumption, we must also replace the Euler discretizations with Euler–Maruyama discretizations. For a general stochastic process, dXt = µdt+ σdWt, the Euler–Maruyama method extends the Euler method for ODEs to SDEs, resulting in the update equation Xk+1 = Xk + ∆tµ+√ ∆tσξ, where ξ ∼ N (0, 1). Notice, the key difference is that if the temporal step size is ∆t = η, then the noise is scaled by the square root √ η. In fact, the main argument against modeling SGD with an SDE, as nicely explained in Yaida [28], is that most SDE approximations simultaneously assume that ∆t → 0+, while maintaining that the square root of the learning rate √η is finite. However, by modeling the discretization and stochastic effect simultaneously we can avoid this argument, bringing us to our second assumption: Assumption 2 (SDE). We assume the underdamped Langevin equation (3) accurately models the trajectory of the network driven by SGD through phase space such that θ(ηk) ≈ θk and v(ηk) ≈ vk. This approach of modeling discretization and stochasticity simultaneously is called stochastic modified equations, as further explained in Li et al. [22]. 
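To make the square-root-of-the-step-size noise scaling explicit, here is a minimal, generic Euler-Maruyama integrator (a sketch with an illustrative Ornstein-Uhlenbeck example, not tied to any particular network).

```python
import numpy as np

def euler_maruyama(mu, sigma, x0, dt, steps, seed=0):
    """Integrate dX_t = mu(X_t) dt + sigma(X_t) dW_t with step size dt."""
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    path = [x.copy()]
    for _ in range(steps):
        xi = rng.standard_normal(x.shape)
        x = x + dt * mu(x) + np.sqrt(dt) * sigma(x) @ xi   # note sqrt(dt) on the noise term
        path.append(x.copy())
    return np.array(path)

# Example: a two-dimensional OU process dX = -A X dt + sqrt(2 D) dW.
A = np.diag([2.0, 0.5])
D = np.diag([0.3, 0.3])
path = euler_maruyama(lambda x: -A @ x, lambda x: np.linalg.cholesky(2 * D),
                      x0=[1.0, -1.0], dt=0.01, steps=5000)
print(path.mean(axis=0), path[-1])
```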
2The difference between a forward Euler and backward Euler discretization is a second-order central discretization, ( θk+1−θk η ) − ( θk−θk−1 η ) = η ( θk+1−2θk+θk−1 η2 ) = ηθ̈ +O(η2). B STRUCTURE IN THE COVARIANCE OF THE GRADIENT NOISE As we’ve mentioned before, SGD introduces highly structured noise into an optimization process, often assumed to be an essential ingredient for its ability to avoid local minima. Assumption 5 (Covariance Structure). We assume the covariance of the gradient noise is proportional to the Hessian of the quadratic loss Σ(θ) = σ2H where σ ∈ R+ is some unknown scalar. In the setting of linear regression, this is a very natural assumption. If we assume the classic generative model for linear regression data yi = x ᵀ i θ̄+σ where, θ̄ ∈ Rd is the true model and ∼ N (0, 1), then provably Σ(θ) ≈ σ2H . Proof. We can estimate the covariance as Σ(θ) ≈ 1N ∑N i=1 gig ᵀ i − ggᵀ. Near stationarity ggᵀ 1 N ∑N i=1 gig ᵀ i , and thus, Σ(θ) ≈ 1 N N∑ i=1 gig ᵀ i . Under the generative model yi = x ᵀ i θ̄ + σ where ∼ N (0, 1) and σ ∈ R+, then the gradient gi is gi = (x ᵀ i (θ − θ̄)− σ )xi, and the matrix gig ᵀ i is gig ᵀ i = (x ᵀ i (θ − θ̄)− σ )2(xixᵀi ). Assuming θ ≈ θ̄ at stationarity, then (xᵀi (θ − θ̄)− σ )2 ≈ σ2. Thus, Σ(θ) ≈ σ 2 N N∑ i=1 xix ᵀ i = σ2 N XᵀX = σ2H Also notice that weight decay is independent of the data or batch and thus simply shifts the gradient distribution, but leaves the covariance of the gradient noise unchanged. While the above analysis is in the linear regression setting, for deep neural networks it is reasonable to make the same assumption. See the appendix of Jastrzębski et al. [12] for a discussion on this assumption in the non-linear setting. Recent work by Ali et al. [32] also studies the dynamics of SGD (without momentum) in the setting of linear regression. This work, while studying the classic first-order stochastic differential equation, made a point to not introduce an assumption on the diffusion matrix. In particular, they make the point that even in the setting of linear regression, a constant covariance matrix will fail to capture the actual dynamics. To illustrate this point they consider the univariate responseless least squares problem, minimize θ∈R 1 2n n∑ i=1 (xiθ) 2. As they explain, the SGD update for this problem would be θk+1 = θk − η S (∑ i∈B xi ) θk = k∏ i=1 (1− η( 1S ∑ i∈B xi))θ0, from which they conclude for a small enough learning rate η, then with probability one θk → 0. They contrast this with the Ornstein-Uhlenbeck process given by a constant covariance matrix where while the mean for θk converges to zero its variance converges to a positive constant. So is this discrepancy evidence that an Ornstein-Uhlenbeck process with a constant covariance matrix fails to capture the updates of SGD? In many ways this problem is not a simple example, rather a pathological edge case. Consider the generative model that would give rise to this problem, y = 0x+ 0ξ = 0. In otherwords, the true model θ̄ = 0 and the standard deviation for the noise σ = 0. This would imply by the assumption used in our paper that there would be zero diffusion and the resulting SDE would simplify to a deterministic ODE that exponentially converges to zero. C A QUADRATIC LOSS AT THE END OF TRAINING Assumption 4 (Quadratic Loss). 
We assume that at the end of training the loss for a neural network can be approximated by the quadratic loss L(θ) = (θ − µ)ᵀ ( H 2 ) (θ − µ), where H 0 is the training loss Hessian and µ is some unknown mean vector, corresponding to a local minimum. This assumption has been amply used in previous works such as Mandt et al. [31], Jastrzębski et al. [12], and Poggio et al. [50]. Particularly, Mandt et al. [31] discuss how this assumption makes sense for smooth loss functions for which the stationary solution to the stochastic process reaches a deep local minimum from which it is difficult to escape. It is a well-studied fact, both empirically and theoretically, that the Hessian is low-rank near local minima as noted by Sagun et al. [51], and Kunin et al. [20]. This degeneracy results in flat directions of equal loss. Kunin et al. [20] discuss how differentiable symmetries, architectural features that keep the loss constant under certain weight transformations, give rise to these flat directions. Importantly, the Hessian and the covariance matrix share the same null space, and thus we can always restrict ourselves to the image space of the Hessian, where the drift and diffusion matrix will be full rank. Further discussion on the relationship between the Hessian and the covariance matrix can be found in Thomas et al. [52]. It is also a well known empirical fact that even at the end of training the Hessian can have negative eigenvalues [41]. This empirical observation is at odds with our assumption that the Hessian is positive semi-definite H 0. Further analysis is needed to alleviate this inconsistency. D SOLVING AN ORNSTEIN-UHLENBECK PROCESS WITH ANISOTROPIC NOISE We will study the multivariate Ornstein-Uhlenbeck process described by the stochastic differential equation dXt = A(µ−Xt)dt+ √ 2κ−1DdWt X0 = x0, (14) whereA ∈ Sm++ is a positive definite drift matrix, µ ∈ Rm is a mean vector, κ ∈ R+ is some positive constant, and D ∈ Sm++ is a positive definite diffusion matrix. This OU process is unique in that it is one of the few SDEs we can solve explicitly. We can derive an expression for XT as, XT = e −ATx0 + ( I − e−AT ) µ+ ∫ T 0 eA(t−T ) √ 2κ−1DdWt. (15) Proof. Consider the function f(t, x) = eAtx where eA is a matrix exponential. Then by Itô’s Lemma3 we can evaluate the derivative of f(t,Xt) as df(t,Xt) = ( AeAtXt + e AtA(µ−Xt) ) dt+ eAt √ 2κ−1DdWt = AeAtµdt+ eAt √ 2κ−1DdWt Integrating this expression from t = 0 to t = T gives f(T,XT )− f(0, X0) = ∫ T 0 AeAtµdt+ ∫ T 0 eAt √ 2κ−1DdWt eATXT − x0 = ( eAT − I ) µ+ ∫ T 0 eAt √ 2κ−1DdWt which rearranged gives the expression for XT . From this expression it is clear that XT is a Gaussian process. The mean of the process is E [XT ] = e −ATx0 + ( I − e−AT ) µ, (16) and the covariance and cross-covariance of the process are Var(XT ) = κ −1 ∫ T 0 eA(t−T )2DeA ᵀ(t−T )dt, (17) Cov(XT , XS) = κ −1 ∫ min(T,S) 0 eA(t−T )2DeA ᵀ(t−S)dt. (18) These last two expressions are derived by Itô Isometry4. D.1 THE LYAPUNOV EQUATION We can explicitly solve the integral expressions for the covariance and cross-covariance exactly by solving for the unique matrix B ∈ Sm++ that solves the Lyapunov equation, AB +BAᵀ = 2D. 
(19) If B solves the Lyapunov equation, notice d dt ( eA(t−T )BeA ᵀ(t−S) ) = eA(t−T )ABeA ᵀ(t−S) + eA(t−T )BAᵀeA ᵀ(t−S) = eA(t−T )2DeA ᵀ(t−S) Using this derivative, the integral expressions for the covariance and cross-covariance simplify as, Var(XT ) = κ −1 ( B − e−ATBe−AᵀT ) , (20) Cov(XT , XS) = κ −1 ( B − e−ATBe−AᵀT ) eA ᵀ(T−S), (21) where we implicitly assume T ≤ S. 3Itô’s Lemma states that for any Itô drift-diffusion process dXt = µtdt + σtdWt and twice differentiable scalar function f(t, x), then df(t,Xt) = ( ft + µtfx + σ2t 2 fxx ) dt+ σtfxdWt. 4Itô Isometry states for any standard Itô process Xt, then E [(∫ t 0 XtdWt )2] = E [∫ t 0 X2t dt ] . D.2 DECOMPOSING THE DRIFT MATRIX While the Lyapunov equation simplifies the expressions for the covariance and cross-covariance, it does not explain how to actually solve for the unknown matrix B. Following a method proposed by Kwon et al. [48], we will show how to solve for B explicitly in terms of the drift A and diffusion D. The drift matrix A can be uniquely decomposed as, A = (D +Q)U (22) whereD is our symmetric diffusion matrix,Q is a skew-symmetric matrix (i.e. Q = −Qᵀ), and U is a positive definite matrix. Using this decomposition, then B = U−1, solves the Lyapunov equation. Proof. Plug B = U−1 into the left-hand side of equation (19), AU−1 + U−1Aᵀ = (D +Q)UU−1 + U−1U(D −Q) = (D +Q) + (D −Q) = 2D Here we used the symmetry of A,D,U and the skew-symmetry of Q. All that is left is to do is solve for the unknown matricesQ and U . First notice the following identity, AD −DA = QA+AQ (23) Proof. Multiplying A = (D +Q)U on the right by (D −Q) gives, A(D −Q) = (D +Q)U(D −Q) = (D +Q)Aᵀ, which rearranged and using A = Aᵀ gives the desired equation. Let V ΛV ᵀ be the eigendecomposition of A and define the matrices D̃ = V ᵀDV and Q̃ = V ᵀQV . These matrices observe the following relationship, Q̃ij = λi − λj ρi + λj D̃ij . (24) Proof. Replace A in the previous equality with its eigendecompsoition, V ΛV ᵀD −DV ΛV ᵀ = QV ΛV ᵀ + V ΛV ᵀQ. Multiply this equation on the right by V and on the left by V ᵀ, ΛD̃ − D̃Λ = Q̃Λ + ΛQ̃. Looking at this equality element-wise and using the fact that Λ is diagonal gives the scalar equality for any i, j, (λi − λj)D̃ij = (λi + λj)Q̃ij , which rearranged gives the desired expression. Thus, Q and U are given by, Q = V Q̃V ᵀ, U = (D +Q)−1A. (25) This decomposition always holds uniquely when A,D 0, as λi−λjλi+λj exists and (D +Q) is invertible. See [48] for a discussion on the singularities of this decomposition. D.3 STATIONARY SOLUTION Using the Lyapunov equation and the drift decomposition, then XT ∼ pT , where pT = N ( e−ATx0 + ( I − e−AT ) µ, κ−1 ( U−1 − e−ATU−1e−AᵀT )) . (26) In the limit as T →∞, then e−AT → 0 and pT → pss where pss = N ( µ, κ−1U−1 ) . (27) Similarly, the cross-covariance converges to the stationary cross-covariance, Covss(XT , XS) = κ −1BeA ᵀ(T−S). (28) E A VARIATIONAL FORMULATION OF THE OU PROCESS WITH ANISOTROPIC NOISE In this section we will describe an alternative, variational, route towards solving the dynamics of the OU process studied in appendix D. Let Φ : Rn → R be an arbitrary, non-negative potential and consider the stochastic differential equation describing the Langevin dynamics of a particle in this potential field, dXt = −∇Φ(Xt)dt+ √ 2κ−1D(Xt)dWt, X0 = x0, (29) where D(Xt) is an arbitrary, spatially-dependent, diffusion matrix, κ is a temperature constant, and x0 ∈ Rm is the particle’s initial position. 
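As a numerical aside, when the potential is the quadratic Φ(x) = (x − µ)ᵀA(x − µ)/2 with constant diffusion D, this Langevin process is exactly the OU process of appendix D, and the expressions above can be checked directly. The sketch below (with generic, illustrative positive definite matrices, not from the paper) builds the decomposition A = (D + Q)U of equations (22)-(25), verifies the Lyapunov equation (19), and confirms by simulation that the empirical covariance approaches the stationary covariance κ⁻¹U⁻¹ of equation (27).

```python
import numpy as np

rng = np.random.default_rng(0)
m, kappa = 3, 5.0
M = rng.standard_normal((m, m)); A = M @ M.T + m * np.eye(m)   # symmetric PD drift
M = rng.standard_normal((m, m)); D = M @ M.T + m * np.eye(m)   # symmetric PD diffusion
mu = rng.standard_normal(m)

lam, V = np.linalg.eigh(A)                                      # A = V diag(lam) V^T
D_tilde = V.T @ D @ V
Q = V @ ((lam[:, None] - lam[None, :]) / (lam[:, None] + lam[None, :]) * D_tilde) @ V.T
U = np.linalg.solve(D + Q, A)                                   # U = (D + Q)^{-1} A
B = np.linalg.inv(U)
print(np.allclose(Q, -Q.T), np.allclose(A @ B + B @ A.T, 2 * D))   # True True

# Euler-Maruyama simulation of dX = -A (X - mu) dt + sqrt(2 kappa^{-1} D) dW.
dt, steps, burn = 1e-3, 300_000, 30_000
noise_chol = np.linalg.cholesky(2 * D / kappa)
x, samples = mu.copy(), []
for t in range(steps):
    x = x - A @ (x - mu) * dt + np.sqrt(dt) * noise_chol @ rng.standard_normal(m)
    if t >= burn:
        samples.append(x.copy())
emp_cov = np.cov(np.array(samples).T)
print(np.max(np.abs(emp_cov - B / kappa)))   # small, up to discretization/sampling error
```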
The Fokker-Planck equation describes the time evolution for the probability distribution p of the particle’s position such that p(x, t) = P(Xt = x). The FP equation is the partial differential equation5, ∂tp = ∇ · ( ∇Φ(Xt)p+ κ−1∇ · (D(Xt)p) ) , p(x, 0) = δ(x0), (30) where ∇· denotes the divergence and δ(x0) is a dirac delta distribution centered at the initialization x0. To assist in the exploration of the FP equation we define the vector field, J(x, t) = −∇Φ(Xt)p−∇ · (D(Xt)p) , (31) which is commonly referred to as the probability current. Notice, that this gives an alternative expression for the FP equation, ∂tp = −∇·J , demonstrating that J(x, t) defines the flow of probability mass through space and time. This interpretation is especially useful for solving for the stationary solution pss, which is the unique distribution that satisfies, ∂tpss = −∇ · Jss = 0, (32) where Jss is the probability current for pss. The stationary condition can be obtained in two distinct ways: 1. Detailed balance. This is when Jss(x) = 0 for all x ∈ Ω. This is analogous to reversibility for discrete Markov chains, which implies that the probability mass flowing from a state i to any state j is the same as the probability mass flowing from state j to state i. 2. Broken detailed balance. This is when ∇ · Jss(x) = 0 but Jss(x) 6= 0 for all x ∈ Ω. This is analogous to irreversibility for discrete Markov chains, which only implies that the total probability mass flowing out of state i equals to the total probability mass flowing into state i. The distinction between these two cases is critical for understanding the limiting dynamics of the process. E.1 THE VARIATIONAL FORMULATION OF THE FOKKER-PLANCK EQUATION WITH ISOTROPIC DIFFUSION We will now consider the restricted setting of standard, isotropic diffusion (D = I). It is easy enough to check that in this setting the stationary solution is pss(x) = e−κΦ(x) Z , Z = ∫ Ω e−κΦ(x)dx, (33) where pss is called a Gibbs distribution and Z is the partition function. Under this distribution, the stationary probability current is zero (Jss(x) = 0) and thus the process is in detailed balance. Interestingly, the Gibbs distribution pss has another interpretation as the unique minimizer of the the Gibbs free energy functional, F (p) = E [Φ]− κ−1H(p), (34) where E [Φ] is the expectation of the potential Φ under the distribution p and H(p) = − ∫ Ω p(x)log(p(x))dx is the Shannon entropy of p. 5This PDE is also known as the Forward Kolmogorov equation. Proof. To prove that indeed pss is the unique minimizer of the Gibbs free energy functional, consider the following equivalent expression F (p) = ∫ Ω p(x)Φ(x)dx+ κ−1 ∫ Ω p(x)log(p(x))dx = κ−1 ∫ Ω p(x) (log(p(x))− log(pss(x))) dx− κ−1 ∫ Ω log(Z) = κ−1DKL(p ‖ pss)− κ−1log(Z) From this expressions, it is clear that the Kullback–Leibler divergence is uniquely minimized when p = pss. In other words, with isotropic diffusion the stationary solution pss can be thought of as the limiting distribution given by the Fokker-Planck equation or the unique minimizer of an energetic-entropic functional. Seminal work by Jordan et al. [53] deepened this connection between the Fokker-Planck equation and the Gibbs free energy functional. In particular, their work demonstrates that the solution p(x, t) to the Fokker-Planck equation is the Wasserstein gradient flow trajectory on the Gibbs free energy functional. Steepest descent is always defined with respect to a distance metric. 
For example, the update equation, xk+1 = xk − η∇Φ(xk), for classic gradient descent on a potential Φ(x), can be formulated as the solution to the minimization problem xk+1 = argminxηΦ(x) + 1 2d(x, xk) 2 where d(x, xk) = ‖x− xk‖ is the Euclidean distance metric. Gradient flow is the continuous-time limit of gradient descent where we take η → 0+. Similarly, Wasserstein gradient flow is the continuous-time limit of steepest descent optimization defined by the Wasserstein metric. The Wasserstein metric is a distance metric between probability measures defined as, W 22 (µ1, µ2) = inf p∈Π(µ1,µ2) ∫ Rn×Rn |x− y|2p(dx, dy), (35) where µ1 and µ2 are two probability measures on Rn with finite second moments and Π(µ1, µ2) defines the set of joint probability measures with marginals µ1 and µ2. Thus, given an initial distribution and learning rate η, we can use the Wasserstein metric to derive a sequence of distributions minimizing some functional in the sense of steepest descent. In the continuous-time limit as η → 0+ this sequence defines a continuous trajectory of probability distributions minimizing the functional. Jordan et al. [54] proved, through the following theorem, that this process applied to the Gibbs free energy functional converges to the solution to the Fokker-Planck equation with the same initialization: Theorem 1 (JKO). Given an initial condition p0 with finite second moment and an η > 0, define the iterative scheme pη with iterates defined by pk = argminpη ( E [Φ]− κ−1H(p) ) +W 22 (p, p k−1). As η → 0+, then pη → p weakly in L1 where p is the solution to the Fokker-Planck equation with the same initial condition. See [54] for further explanation and [53] for a complete derivation. E.2 EXTENDING THE VARIATIONAL FORMULATION TO THE SETTING OF ANISOTROPIC DIFFUSION While the JKO theorem provides a very powerful lens through which to view solutions to the FokkerPlanck equation, and thus distributions for particles governed by Langevin dynamics, it only applies in the very restricted setting of isotropic diffusion. In this section we will review work by Chaudhari and Soatto [33] extending the variational interpretation to the setting of anisotropic diffusion. Consider when D(Xt) is an anisotropic, spatially-dependent diffusion matrix. In this setting, the original Gibbs distribution given in equation (33) does not necessarily satisfy the stationarity condition equation (32). In fact, it is not immediately clear what the stationary solution is or if the dynamics even have one. Thus, Chaudhari and Soatto [33] make the following assumption: Stationary Assumption. Assume there exists a unique distribution pss that is the stationary solution to the Fokker-Planck equation irregardless of initial conditions. Under this assumption we can implicitly define the potential Ψ(x) = −κ−1log(pss(x)). Using this modified potential we can express the stationary solution as a Gibbs distribution, pss(x) ∝ e−κΨ(x). (36) Under this implicit definition we can define the stationary probability current as Jss(x) = j(x)pss(x) where j(x) = −∇Φ(x)− κ−1∇ ·D(x) +D(x)∇Ψ(x). (37) The vector field j(x) reflects the discrepancy between the original potential Φ and the modified potential Ψ according to the diffusion D(x). Notice that in the isotropic case, when D(x) = I , then Φ = Ψ and j(x) = 0. Chaudhari and Soatto [33] introduce another property of j(x) through assumption, Conservative Assumption. Assume that the force j(x) is conservative (i.e. ∇ · j(x) = 0). 
Using this assumption, Chaudhari and Soatto [33] extends the variational formulation provided by the JKO theorem to the anisotropic setting, Theorem 2 (CS). Given an initial condition p0 with finite second moment, then the energeticentropic functional, F (p) = Ep [Ψ(x)]− κ−1H(p) monotonically decreases throughout the trajectory given by the solution to the Fokker-Planck equation with the given initial condition. In other words, the Fokker-Plank equation (30) with anisotropic diffusion can be interpreted as minimizing the expectation of a modified loss Ψ, while being implicitly regularized towards distributions that maximize entropy. The derivation requires we assume a stationary solution pss exists and that the force j(x) implicitly defined by pss is conservative. However, rather than implicitly define Ψ(x) and j(x) through assumption, if we can explicitly construct a modified loss Ψ(x) such that the resulting j(x) satisfies certain conditions, then the stationary solution exists and the variational formulation will apply as well. We formalize this statement with the following theorem, Theorem 3 (Explicit Construction). If there exists a potential Ψ(x) such that either j(x) = 0 or ∇ · j(x) = 0 and ∇Ψ(x) ⊥ j(x), then pss is the Gibbs distribution ∝ e−κΨ(x) and the variational formulation given in Theorem 2 applies. E.3 APPLYING THE VARIATIONAL FORMULATION TO THE OU PROCESS Through explicit construction we now seek to find analytic expressions for the modified loss Ψ(x) and force j(x) hypothesised by Chaudhari and Soatto [33] in the fundamental setting of an OU process with anisotropic diffusion, as described in section D. We assume the diffusion matrix is anisotropic, but spatially independent, ∇ · D(x) = 0. For the OU process the original potential generating the drift is Φ(x) = (x− µ)ᵀA2 (x− µ). (38) Recall, that in order to extend the variational formulation we must construct some potential Ψ(x) such that∇ · j(x) = 0 and∇Ψ ⊥ j(x). It is possible to construct Ψ(x) using the unique decomposition of the drift matrix A = (D +Q)U discussed in appendix D. Define the modified potential, Ψ(x) = (x− µ)ᵀ U2 (x− µ). (39) Using this potential, the force j(x) is j(x) = −A(x− µ) +DU(x− µ) = −QU(x− µ). (40) Notice that j(x) is conservative, ∇ · j(x) = ∇ · −QU (x− µ) = 0 because Q is skew-symmetric. Additionally, j(x) is orthogonal, j(x)ᵀ∇Ψ(x) = (x− µ)ᵀ UᵀQU (x− µ) = 0, again because Q is skew-symmetric. Thus, we have determined a modified potential Ψ(x) that results in a conservative orthogonal force j(x) satisfying the conditions for Theorem 3. Indeed the stationary Gibbs distribution given by Theorem 3 agrees with equation (27) derived via the first and second moments in appendix D, e−κΨ(x) ∝ N ( µ, κ−1U−1 ) In addition to the variational formulation, this interpretation further details explicitly the stationary probability current, Jss(x) = j(x)pss, and whether or not the the stationary solution is in broken detailed balance. F EXPLICIT EXPRESSIONS FOR THE OU PROCESS GENERATED BY SGD We will now consider the specific OU process generated by SGD with linear regression. Here we repeat the setup as explained in section 5. Let X ∈ RN×d, Y ∈ RN be the input data, output labels respectively and θ ∈ Rd be our vector of regression coefficients. The least squares loss is the convex quadratic loss L(θ) = 12N ‖Y −Xθ‖2 with gradient g(θ) = Hθ − b, where H = XᵀXN and b = X ᵀY N . 
Plugging this expression for the gradient into the underdamped Langevin equation (3), and rearranging terms, results in the multivariate Ornstein-Uhlenbeck (OU) process, d [ θt vt ] = A ([ µ 0 ] − [ θt vt ]) dt+ √ 2κ−1DdWt, (41) where A and D are the drift and diffusion matrices respectively, A = [ 0 −I 2 η(1+β) (H + λI) 2(1−β) η(1+β)I ] , D = [ 0 0 0 2(1−β)η(1+β)Σ(θ) ] , (42) κ = S(1− β2) is a temperature constant, and µ = (H + λI)−1b is the ridge regression solution. F.1 SOLVING FOR THE MODIFIED LOSS AND CONSERVATIVE FORCE In order to apply the expressions derived for a general OU process in appendix D and E, we must first decompose the drift as A = (D + Q)U . Under the simplification Σ(θ) = σ2H discussed in appendix B, then the matrices Q and U , as defined below, achieve this, Q = [ 0 −σ2H σ2H 0 ] , U = [ 2 η(1+β)σ2H −1 (H + λI) 0 0 1σ2H −1 ] . (43) Using these matrices we can now derive explicit expressions for the modified loss Ψ(θ, v) and conservative force j(θ, v). First notice that the least squares loss with L2 regularization is proportional to the convex quadratic, Φ(θ) = (θ − µ)ᵀ(H + λI)(θ − µ). (44) The modified loss Ψ is composed of two terms, one that only depends on the position, Ψθ(θ) = (θ − µ)ᵀ ( H−1(H + λI) η(1 + β)σ2 ) (θ − µ) , (45) and another that only depends on the velocity, Ψv(v) = v ᵀ ( H−1 σ2 ) v. (46) The conservative force j(θ, v) is j(θ, v) = [ v − 2η(1+β) (H + λI) (θ − µ) ] , (47) and thus the stationary probability current is Jss(θ, v) = j(θ, v)pss. F.2 DECOMPOSING THE TRAJECTORY INTO THE EIGENBASIS OF THE HESSIAN As shown in appendix D, the temporal distribution for the OU process at some time T ≥ 0 is, pT ([ θ v ]) = N ( e−AT [ θ0 v0 ] + ( I − e−AT ) [µ 0 ] , κ−1 ( U−1 − e−ATU−1e−AᵀT )) . Here we will now use the eigenbasis {q1, . . . , qm} of the Hessian with eigenvalues {ρ1, . . . , ρm} to derive explicit expressions for the mean and covariance of the process through time. Deterministic component. We can rearrange the expectation as E [[ θ v ]] = [ µ 0 ] + e−AT [ θ0 − µ v0 ] . Notice that the second, time-dependent term is actually the solution to the system of ODEs ˙[θ v ] = −A [ θ v ] with initial condition [θ0 − µ v0]ᵀ. This system of ODEs can be block diagonalized by factorizing A = OSOᵀ where O is orthogonal and S is block diagonal defined as O = q1 0 . . . qm 0 . . . 0 q1 . . . 0 qm S = 0 −1 2 η(1+β) (ρ1 + λ) 2(1−β) η(1+β) . . . . . . . . . 0 −1 2 η(1+β) (ρm + λ) 2(1−β) η(1+β) In otherwords in the plane spanned by [qi 0] ᵀ and [0 qi] ᵀ the system of ODEs decouples into the 2D system ˙[ai bi ] = [ 0 1 − 2η(1+β) (ρi + λ) − 2(1−β) η(1+β) ] [ ai bi ] This system has a simple physical interpretation as a damped harmonic oscillator. If we let bi = ȧi, then we can unravel this system into the second order ODE äi + 2 1− β η(1 + β) ȧi + 2 η(1 + β) (ρi + λ)ai = 0 which is in standard form (i.e. ẍ + 2γẋ + ω2x = 0) for γ = 1−βη(1+β) and ωi = √ 2 η(1+β) (ρi + λ). 
Let ai(0) = 〈θ0 − µ, qi〉 and bi(0) = 〈v0, qi〉, then the solution in terms of γ and ωi is ai(t) = e−γt ( ai(0) cosh (√ γ2 − ω2i t ) + γai(0)+bi(0)√ γ2−ω2i sinh (√ γ2 − ω2i t )) γ > ωi e−γt(ai(0) + (γai(0) + bi(0))t) γ = ωi e−γt ( ai(0) cos (√ ω2i − γ2t ) + γai(0)+bi(0)√ ω2i−γ2 sin (√ ω2i − γ2t )) γ < ωi Differentiating these equations gives us solutions for bi(t) bi(t) = e−γt ( bi(0) cosh (√ γ2 − ω2i t ) − ω 2 i ai(0)+γbi(0)√ γ2−ω2i sinh (√ γ2 − ω2i t )) γ > ωi e−γt ( bi(0)− ( ω2i ai(0) + γbi(0) ) t ) γ = ωi e−γt ( bi(0) cos (√ ω2i − γ2t ) − ω 2 i ai(0)+γbi(0)√ ω2i−γ2 sin (√ ω2i − γ2t )) γ < ωi Combining all these results, we can now analytically decompose the expectation as the sum, E [[ θ v ]] = [ µ 0 ] + m∑ i=1 ( ai(t) [ qi 0 ] + bi(t) [ 0 qi ]) . Intuitively, this equation describes a damped rotation (spiral) around the OLS solution in the planes defined by the the eigenvectors of the Hessian at a rate proportional to the respective eigenvalue. Stochastic component. Using the previous block diagonal decomposition A = OSOᵀ we can simplify the variance as Var ([ θ v ]) = κ−1 ( U−1 − e−ATU−1e−AᵀT ) = κ−1 ( U−1 − e−OSOᵀTU−1e−OSᵀOᵀT ) = κ−1O ( OᵀU−1O − e−ST (OᵀU−1O)e−ST ᵀ ) Oᵀ Interestingly, the matrix OᵀU−1O is also block diagonal, OᵀU−1O = Oᵀ [ η(1+β)σ2 2 (H + λI) −1 H 0 0 σ2H ] O = η(1+β)σ2 2 ρ1 ρ1+λ 0 0 σ2ρ1 . . . . . . . . . η(1+β)σ2 2 ρm ρm+λ 0 0 σ2ρm Thus, similar to the mean, we can simply consider the variance in each of the planes spanned by [qi 0] ᵀ and [0 qi] ᵀ. If we define the block matrices, Di = [ ησ2 2S(1−β) ρi ρi+λ 0 0 σ 2 S(1−β2)ρi ] Si = [ 0 1 − 2η(1+β) (ρi + λ) − 2(1−β) η(1+β) ] then the projected variance matrix in this plane simplifies as Var ([ qᵀi θ qᵀi v ]) = Di − e−SiTDie−SiT ᵀ Using the solution to a damped harmonic osccilator discussed previously, we can express the matrix exponential e−SiT explicitly in terms of γ = 1−βη(1+β) and ωi = √ 2 η(1+β) (ρi + λ). If we let αi =√ |γ2 − ω2i |, then the matrix exponential is e−Sit = e−γt [ cosh (αit) + γ αi sinh (αit) 1 αi sinh (αit) −ω 2 i αi sinh (αit) cosh (αit)− γαi sinh (αit) ] γ > ωi e−γt [ 1 + γt t −ω2i t 1− γt ] γ = ωi e−γt [ cos (αit) + γ αi sin (αit) 1 αi sin (αit) −ω 2 i αi sin (αit) cos (αit)− γαi sin (αit) ] γ < ωi G ANALYZING PROPERTIES OF THE STATIONARY SOLUTION Assuming the stationary solution is given by equation (??) we can solve for the expected value of the norm of the local displacement and gain some intuition for the expected value of the norm of global displacement. G.1 INSTANTANEOUS SPEED Ess [ ‖δk‖2 ] = Ess [ ‖θk+1 − θk‖2 ] = η2Ess [ ‖vk+1‖2 ] = η2tr ( Ess [ vk+1v ᵀ k+1 ]) = η2tr (Varss (vk+1) + Ess [vk+1] Ess [vk+1] ᵀ ) = η2tr ( κ−1U−1 ) = η2 S(1− β2) tr ( σ2H ) Note that this follows directly from the definition of δk in equation (1) and the mean and variance of the stationary solution in equation ( ??), as well as the follow-up derivation in appendix F. G.2 ANOMALOUS DIFFUSION Notice, that the global movement ∆t = θt−θ0 can be broken up into the sum of the local movements ∆t = ∑t i=1 δi, where δi = θi − θi−1. Applying this decomposition, Ess [ ‖∆t‖2 ] = Ess ∣∣∣∣∣ ∣∣∣∣∣ t∑ i=1 δi ∣∣∣∣∣ ∣∣∣∣∣ 2 = t∑ i=1 Ess [ ‖δi‖2 ] + t∑ i 6=j Ess [〈δi, δj〉] As we solved for previously, Ess [ ‖δi‖2 ] = η2Ess [ ‖vi‖2 ] = η2tr (Varss(vi)) = η2 S(1− β2) tr ( σ2H ) . By a similar simplification, we can express the second term in terms of the stationary crosscovariance, Ess [〈δi, δj〉] = η2Ess [〈vi, vj〉] = η2tr (Covss(vi, vj)) . 
Thus, to simplify this expression we just need to consider the velocity-velocity covariance Covss(vi, vj). At stationarity, the cross-covariance for the system in phase space, zi = [θi vi] is Covss(zi, zj) = κ −1U−1e−A ᵀ|i−j| where κ = S(1− β2), and U = [ 2 η(1+β)σ2H −1 (H + λI) 0 0 1σ2H −1 ] A = [ 0 −I 2 η(1+β) (H + λI) 2(1−β) η(1+β)I ] As discussed when solving for the mean of the OU trajectory, the drift matrix A can be block diagonalized as A = OSOᵀ where O is orthogonal and S is block diagonal defined as O = q1 0 . . . qm 0 . . . 0 q1 . . . 0 qm , S = 0 −1 2 η(1+β) (ρ1 + λ) 2(1−β) η(1+β) . . . . . . . . . 0 −1 2 η(1+β) (ρm + λ) 2(1−β) η(1+β) . Notice also that O diagonalizes U−1 such that, Λ = OᵀU−1O = η(1+β)σ2 2 ρ1 ρ1+λ 0 0 σ2ρ1 . . . . . . . . . η(1+β)σ2 2 ρm ρm+λ 0 0 σ2ρm . Applying these decompositions, properties of matrix exponentials, and the cyclic invariance of the trace, allows us to express the trace of the cross-covariance as tr (Covss(zi, zj)) = κ −1tr ( U−1e−A ᵀ|i−j| ) = κ−1tr ( U−1Oe−S ᵀ|i−j|Oᵀ ) = κ−1tr ( Λe−S ᵀ|i−j| ) = κ−1 n∑ k=1 tr ( Λke −Sᵀk |i−j| ) where Λk and Sk are the blocks associated with each eigenvector of H . As solved for previously in the variance of the OU process, we can express the matrix exponential e−Sk|i−j| explicitly in terms of γ = 1−βη(
1. What is the focus of the paper regarding neural networks and stochastic gradient descent momentum?
2. What are the strengths and weaknesses of the proposed approach in understanding the dynamics of neural networks?
3. How does the paper contribute to the existing literature on the topic, and what are some limitations in its discussion of related works?
4. Can the authors provide more empirical exploration and analysis of the practical consequences of their findings on generalization and test performance?
5. How does the paper's theory relate to the concept of detailed balance, and what are the implications of breaking it?
6. Can the authors clarify their methodology and predictions regarding the behavior of the displacement in Figure 6?
Summary Of The Paper Review
Summary Of The Paper
The paper uses a stochastic differential equation approximation of stochastic gradient descent with momentum to model the dynamics of neural networks. At the theoretical level the study focuses on linear regression. The authors show that the results in the simpler model agree with the behavior of deep neural networks. The most relevant finding is that the networks can exhibit super- or sub-diffusive behavior at the end of training. Unfortunately, their theory cannot explain this finding nor its practical consequences for the test loss.

Review
In its current format, the paper suffers from several limitations that should be addressed by the authors. I believe there are interesting results, and I encourage the authors to answer my comments, as I would be very happy to raise my recommendation score. The following comments try to follow the sections of the submission:

- The authors' analysis of related works misses a large portion of the relevant literature. On the analysis of the dynamics of neural networks: the mean-field limit was introduced by three different independent works [Rotskoff, Vanden-Eijnden 2018], [Chizat, Bach 2018], and [Song et al. 2018], of which only the latter is cited. If the mean-field limit is mentioned, then the dynamical mean field theory approach should also be cited; in the context of neural networks we have [Mignacco et al. 2020], [Mannelli et al. 2020], and [Mignacco et al. 2021]. The dynamics of stochastic gradient descent and momentum were also obtained with dynamical mean field theory in the two papers by Mignacco above and in [Mannelli, Urbani 2021]. When mentioning the modelling of SGD with an SDE, it is important to discuss the results of [Simsekli et al. 2019], which showed that the jumps are fat-tail distributed, and how that is compatible with the approximation. On the "empirical exploration", the authors refer only to [Papyan 2018], omitting the results of [Ghorbani, Krishnan, Xiao 2019], which derived the same method independently and makes a thorough analysis of the dynamics. In the same section, [Sagun et al. 2016] (ref [35]) should also appear.
- On page 2, the authors say "Surprisingly, [..] the networks continue to move through parameter space", but this is very well known. It is known that stochastic gradient descent keeps moving after training; indeed, this is at the basis of ref. [16]. What is not known is how it is moving, and that finding is indeed interesting.
- In the lower panel of figure 1, there is only one value on the x-axis. Please add at least a second one. I can guess that the next number after 10^4 would be 2*10^4, but this is not obvious a priori.
- The authors should avoid adding adjectives to embellish the paper in the technical sections. These parts of the paper should be factual and report the results in a scientific way. For instance, in section 8, describing figure 5, the authors say "As predicted, the spiral appears more evident .."; can you quantify how much of a "spiral" the plot is? It is hard to draw conclusions from that picture. Furthermore, that appears to be the result of a single run; it would be good to average over many simulations to avoid the risk of observing random fluctuations. The fact that the second figure appears "less evident" may indicate a limit of the theory. Since the authors are considering an SDE, higher-order effects should appear more frequently for eigenvectors corresponding to smaller eigenvalues. It would be interesting to see what happens for the 1000th eigenvector: since ImageNet has 1000 classes, according to [Sagun et al. 2016] and [Papyan 2019] that eigenvector should be associated with irrelevant information, and this behavior probably disappears. You could also observe this, if you have a pretrained network on CIFAR100 or CIFAR10, by looking at the 100th or 10th eigenvector respectively.
- An important aspect that has not been discussed is the effect on generalization. What are the practical consequences of these results? Although the theoretical framework cannot be applied to the test dataset, the authors can verify the practical effects. Is being super/sub-diffusive an advantage? Does the performance change? Answering these questions would bring value to the work.
- The authors affirm that the dynamics break detailed balance. Unfortunately, I must have missed this part in the paper. Could they clarify? Also, what are the practical implications of breaking detailed balance?
- In figure 6, the authors claim that their theory predicts the behavior of the displacement, but the procedure used is not clear to me. When you say "using this single estimate", what do you mean? Are you evaluating \sigma^2 tr(H) at a single point for each plot and using this information to draw the line? It is not very clear. On the second line, it is an overstatement to say that you predict the diffusion constant: your model is, by construction, a correlated Brownian motion with a drift, therefore the diffusion constant is 1. This is a pity, as there is no explanation for the very interesting behavior that you observe.
ICLR
Title Generative Paragraph Vector Abstract The recently introduced Paragraph Vector is an efficient method for learning highquality distributed representations for pieces of texts. However, an inherent limitation of Paragraph Vector is lack of ability to infer distributed representations for texts outside of the training set. To tackle this problem, we introduce a Generative Paragraph Vector, which can be viewed as a probabilistic extension of the Distributed Bag of Words version of Paragraph Vector with a complete generative process. With the ability to infer the distributed representations for unseen texts, we can further incorporate text labels into the model and turn it into a supervised version, namely Supervised Generative Paragraph Vector. In this way, we can leverage the labels paired with the texts to guide the representation learning, and employ the learned model for prediction tasks directly. Experiments on five text classification benchmark collections show that both model architectures can yield superior classification performance over the state-of-the-art counterparts. 1 INTRODUCTION A central problem in many text based applications, e.g., sentiment classification (Pang & Lee, 2008), question answering (Stefanie Tellex & Marton., 2003) and machine translation (I. Sutskever & Le, 2014), is how to capture the essential meaning of a piece of text in a fixed-length vector. Perhaps the most popular fixed-length vector representations for texts is the bag-of-words (or bag-of-ngrams) (Harris, 1954). Besides, probabilistic latent semantic indexing (PLSI) (Hofmann, 1999) and latent Dirichlet allocation (LDA) (Blei & Jordan, 2003) are two widely adopted alternatives. A recent paradigm in this direction is to use a distributed representation for texts (T. Mikolov & Dean, 2013a). In particular, Le and Mikolov (Quoc Le, 2014; Andrew M.Dai, 2014) show that their method, Paragraph Vector (PV), can capture text semantics in dense vectors and outperform many existing representation models. Although PV is an efficient method for learning high-quality distributed text representations, it suffers a similar problem as PLSI that it provides no model on text vectors: it is unclear how to infer the distributed representations for texts outside of the training set with the learned model (i.e., learned text and word vectors). Such a limitation largely restricts the usage of the PV model, especially in those prediction focused scenarios. Inspired by the completion and improvement of LDA over PLSI, we first introduce the Generative Paragraph Vector (GPV) with a complete generation process for a corpus. Specifically, GPV can be viewed as a probabilistic extension of the Distributed Bag of Words version of Paragraph Vector (PVDBOW), where the text vector is viewed as a hidden variable sampled from some prior distributions, and the words within the text are then sampled from the softmax distribution given the text and word vectors. With a complete generative process, we are able to infer the distributed representations of new texts based on the learned model. Meanwhile, the prior distribution over text vectors also acts as a regularization factor from the view of optimization, thus can lead to higher-quality text representations. 
More importantly, with the ability to infer the distributed representations for unseen texts, we now can directly incorporate labels paired with the texts into the model to guide the representation learning, and turn the model into a supervised version, namely Supervised Generative Paragraph Vector (SGPV). Note that supervision cannot be directly leveraged in the original PV model since it has no generalization ability on new texts. By learning the SGPV model, we can directly employ SGPV to predict labels for new texts. As we know, when the goal is prediction, fitting a supervised model would be a better choice than learning a general purpose representations of texts in an unsupervised way. We further show that SGPV can be easily extended to accommodate n-grams so that we can take into account word order information, which is important in learning semantics of texts. We evaluated our proposed models on five text classification benchmark datasets. For the unsupervised GPV, we show that its superiority over the existing counterparts, such as bag-of-words, LDA, PV and FastSent (Felix Hill, 2016). For the SGPV model, we take into comparison both traditional supervised representation models, e.g. MNB (S. Wang, 2012), and a variety of state-of-the-art deep neural models for text classification (Kim, 2014; N. Kalchbrenner, 2014; Socher & Potts, 2013; Irsoy & Cardie, 2014). Again we show that the proposed SGPV can outperform the baseline methods by a substantial margin, demonstrating it is a simple yet effective model. The rest of the paper is organized as follows. We first review the related work in section 2 and briefly describe PV in section 3. We then introduce the unsupervised generative model GPV and supervised generative model SGPV in section 4 and section 5 respectively. Experimental results are shown in section 6 and conclusions are made in section 7. 2 RELATED WORK Many text based applications require the text input to be represented as a fixed-length feature vector. The most common fixed-length representation is bag-of-words (BoW) (Harris, 1954). For example, in the popular TF-IDF scheme (Salton & McGill, 1983), each document is represented by tfidf values of a set of selected feature-words. However, the BoW representation often suffers from data sparsity and high dimension. Meanwhile, due to the independent assumption between words, BoW representation has very little sense about the semantics of the words. To address this shortcoming, several dimensionality reduction methods have been proposed, such as latent semantic indexing (LSI) (S. Deerwester & Harshman, 1990), Probabilistic latent semantic indexing (PLSI) (Hofmann, 1999) and latent Dirichlet allocation (LDA) (Blei & Jordan, 2003). Both PLSI and LDA have a good statistical foundation and proper generative model of the documents, as compared with LSI which relies on a singular value decomposition over the term-document cooccurrence matrix. In PLSI, each word is generated from a single topic, and different words in a document may be generated from different topics. While PLSI makes great effect on probabilistic modeling of documents, it is not clear how to assign probability to a document outside of the training set with the learned model. To address this issue, LDA is proposed by introducing a complete generative process over the documents, and demonstrated as a state-of-the-art document representation method. 
To further tackle the prediction task, Supervised LDA (David M.Blei, 2007) is developed by jointly modeling the documents and the labels. Recently, distributed models have been demonstrated as efficient methods to acquire semantic representations of texts. A representative method is Word2Vec (Tomas Mikolov & Dean, 2013b), which can learn meaningful word representations in an unsupervised way from large scale corpus. To represent sentences or documents, a simple approach is then using a weighted average of all the words. A more sophisticated approach is combing the word vectors in an order given by a parse tree (Richard Socher & Ng, 2012). Later, Paragraph Vector (PV) (Quoc Le, 2014) is introduced to directly learn the distributed representations of sentences and documents. There are two variants in PV, namely the Distributed Memory Model of Paragraph Vector (PV-DM) and the Distributed Bag of Words version of Paragraph Vector (PV-DBOW), based on two different model architectures. Although PV is a simple yet effective distributed model on sentences and documents, it suffers a similar problem as PLSI that it provides no model on text vectors: it is unclear how to infer the distributed representations for texts outside of the training set with the learned model. Besides these unsupervised representation learning methods, there have been many supervised deep models with directly learn sentence or document representations for the prediction tasks. Recursive Neural Network (RecursiveNN) (Richard Socher & Ng, 2012) has been proven to be efficient in terms of constructing sentence representations. Recurrent Neural Network (RNN) (Ilya Sutskever & Hinton, 2011) can be viewed as an extremely deep neural network with weight sharing across time. Convolution Neural Network (CNN) (Kim, 2014) can fairly determine discriminative phrases in a text with a max-pooling layer. However, these deep models are usually quite complex and thus the training would be time-consuming on large corpus. 3 PARAGRAPH VECTOR Since our model can be viewed as a probabilistic extension of the PV-DBOW model with a complete generative process, we first briefly review the PV-DBOW model for reference. In PV-DBOW, each text is mapped to a unique paragraph vector and each word is mapped to a unique word vector in a continuous space. The paragraph vector is used to predict target words randomly sampled from the paragraph as shown in Figure 1. More formally, Let D={d1, . . . ,dN} denote a corpus of N texts, where each text dn = (wn1 , w n 2 , . . . , w n ln ), n ∈ 1, 2, . . . , N is an lnlength word sequence over the word vocabulary V of size M . Each text d ∈ D and each word w ∈ V is associated with a vector ~d ∈ RK and ~w ∈ RK , respectively, where K is the embedding dimensionality. The predictive objective of the PV-DBOW for each word wnl ∈ dn is defined by the softmax function p(wni |dn) = exp(~wni · ~dn)∑ w′∈V exp(~w ′ · ~dn) (1) The PV-DBOW model can be efficiently trained using the stochastic gradient descent (Rumelhart & Williams, 1986) with negative sampling (T. Mikolov & Dean, 2013a). As compared with traditional topic models, e.g. PLSI and LDA, PV-DBOW conveys the following merits. Firstly, PV-DBOW using negative sampling can be interpretated as a matrix factorization over the words-by-texts co-occurrence matrix with shifted-PMI values (Omer Levy & Ramat-Gan, 2015). 
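For concreteness, the negative-sampling training just mentioned can be sketched in a few lines of NumPy. This is a simplified toy illustration, not the original implementation; the corpus, embedding dimensionality, and the unigram^0.75 noise distribution below are all illustrative choices.

```python
# Minimal PV-DBOW with negative sampling: each text has a paragraph vector d_n and
# each word a vector w; for a sampled (text, word) pair we push sigma(w . d) up and
# sigma(w' . d) down for k sampled negative words w'.
import numpy as np

rng = np.random.default_rng(0)
texts = [[0, 1, 2, 1], [2, 3, 3, 4], [0, 4, 1]]       # toy corpus: word ids per text
V, K, k_neg, lr = 5, 8, 2, 0.05
W = (rng.random((V, K)) - 0.5) / K                    # word vectors
Dvecs = (rng.random((len(texts), K)) - 0.5) / K       # paragraph vectors

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# unigram^(3/4) noise distribution for negative sampling
counts = np.bincount(np.concatenate([np.array(t) for t in texts]), minlength=V)
p_noise = counts**0.75 / (counts**0.75).sum()

for step in range(20000):
    n = rng.integers(len(texts))
    w = texts[n][rng.integers(len(texts[n]))]         # target word sampled from text n
    negs = rng.choice(V, size=k_neg, p=p_noise)
    for idx, label in [(w, 1.0)] + [(j, 0.0) for j in negs]:
        score = sigmoid(W[idx] @ Dvecs[n])
        grad = label - score                          # gradient of the log-likelihood wrt score
        w_old = W[idx].copy()
        W[idx] += lr * grad * Dvecs[n]
        Dvecs[n] += lr * grad * w_old

print(Dvecs.round(3))
```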
In this way, more discriminative information (i.e., PMI) can be modeled in PV as compared with the generative topic models which learn over the words-by-texts co-occurrence matrix with raw frequency values. Secondly, PV-DBOW does not have the explicit “topic” layer and allows words automatically clustered according to their co-occurrence patterns during the learning process. In this way, PV-DBOW can potentially learn much finer topics than traditional topic models given the same hidden dimensionality of texts. However, a major problem with PV-DBOW is that it provides no model on text vectors: it is unclear how to infer the distributed representations for unseen texts. 4 GENERATIVE PARAGRAPH VECTOR In this section, we introduce the GPV model in detail. Overall, GPV is a generative probabilistic model for a corpus. We assume that for each text, a latent paragraph vector is first sampled from some prior distributions, and the words within the text are then generated from the normalized exponential (i.e. softmax) distribution given the paragraph vector and word vectors. In our work, multivariate normal distribution is employed as the prior distribution for paragraph vectors. It could be replaced by other prior distributions and we will leave this as our future work. The specific generative process is as follows: For each text dn ∈D, n = 1, 2, . . . , N : (a) Draw paragraph vector ~dn ∼ N (µ,Σ) (b) For each word wni ∈ dn, i = 1, 2, . . . , ln : Draw word wni ∼ softmax(~dn ·W )i where W denotes a k ×M word embedding matrix with W∗j = ~wj , and softmax(~dn ·W )i is the softmax function defined the same as in Equation (1). Figure 2 (Left) provides the graphical model of this generative process. Note that GPV differs from PV-DBOW in that the paragraph vector is a hidden variable generated from some prior distribution, which allows us to infer the paragraph vector over future texts given the learned model. Based on the above generative process, the probability of the whole corpus can be written as follows: p(D)= N∏ n=1 ∫ p(~dn|µ,Σ) ∏ wni ∈dn p(wni |W, ~dn)d~dn To learn the model, direct maximum likelihood estimation is not tractable due to non-closed form of the integral. We approximate this learning problem by using MAP estimates for ~dn, which can be formulated as follows: (µ∗,Σ∗,W ∗) = arg max µ,Σ,W ∏ p(d̂n|µ,Σ) ∏ wni ∈dn p(wni |W, d̂n) where d̂n denotes the MAP estimate of ~dn for dn, (µ∗,Σ∗,W ∗) denotes the optimal solution. Note that for computational simplicity, in this work we fixed µ as a zero vector and Σ as a identity matrix. In this way, all the free parameters to be learned in our model are word embedding matrix W . By taking the logarithm and applying the negative sampling idea to approximate the softmax function, we obtain the final learning problem L= N∑ n=1 ( −1 2 ||d̂n||2+ ∑ wni ∈dn ( log σ(~wni ·d̂n)+k·Ew′∼Pnw log σ(− ~w′ · d̂n) )) where σ(x) = 1/(1 + exp(−x)), k is the number of “negative” samples, w′ denotes the sampled word and Pnw denotes the distribution of negative word samples. As we can see from the final objective function, the prior distribution over paragraph vectors actually act as a regularization term. From the view of optimization, such regularization term could constrain the learning space and usually produces better paragraph vectors. For optimization, we use coordinate ascent, which first optimizes the word vectors W while leaving the MAP estimates (d̂) fixed. 
To accelerate the learning, we adopt a stochastic learning framework similar to that of PV, which iteratively updates $W$ and estimates $\vec{d}_n$ by randomly sampling text and word pairs. At prediction time, given a new text, we perform an inference step to compute its paragraph vector. In this step, we freeze the vector representation of each word and apply the same MAP estimation process for $\vec{d}$ as in the learning phase. The inferred paragraph vector of the test text can then be fed to other prediction models for different applications.

5 SUPERVISED GENERATIVE PARAGRAPH VECTOR

With the ability to infer the distributed representations of unseen texts, we can now incorporate the labels paired with the texts into the model to guide representation learning, turning the model into a more powerful supervised version aimed directly at prediction tasks. Specifically, we introduce an additional label generation process into GPV to accommodate text labels, and obtain the Supervised Generative Paragraph Vector (SGPV) model. Formally, in SGPV, the $n$-th text $d_n$ and the corresponding class label $y_n \in \{1, 2, \dots, C\}$ arise from the following generative process. For each text $d_n \in D$, $n = 1, 2, \dots, N$:
(a) Draw the paragraph vector $\vec{d}_n \sim \mathcal{N}(\mu, \Sigma)$.
(b) For each word $w^n_i \in d_n$, $i = 1, 2, \dots, l_n$: draw the word $w^n_i \sim \mathrm{softmax}(\vec{d}_n \cdot W)_i$.
(c) Draw the label $y_n \mid \vec{d}_n, U, b \sim \mathrm{softmax}(U \cdot \vec{d}_n + b)$,
where $U$ is a $C \times K$ matrix for a dataset with $C$ output labels, and $b$ is a bias term. The graphical model of the above generative process is depicted in Figure 2 (Right). SGPV defines the probability of the whole corpus as

$$p(D) = \prod_{n=1}^{N} \int p(\vec{d}_n \mid \mu, \Sigma) \Big( \prod_{w^n_i \in d_n} p(w^n_i \mid W, \vec{d}_n) \Big) p(y_n \mid \vec{d}_n, U, b) \, d\vec{d}_n$$

We adopt a learning process similar to that of GPV to estimate the model parameters. Since SGPV includes the complete generative process of both paragraphs and labels, we can directly leverage it to predict the labels of new texts. Specifically, at prediction time, given all the learned model parameters, we conduct an inference step to infer the paragraph vector as well as the label using MAP estimation over the test text.

The above SGPV may have limited modeling ability on text representation since it mainly relies on uni-grams. As we know, word order information is often critical in capturing the meaning of texts. For example, "machine learning" and "learning machine" have completely different meanings despite sharing the same words. There have been a variety of deep models that use complex architectures, such as convolutional layers or recurrent structures, to capture such order information at the expense of a large computational cost. Here we propose to extend SGPV by introducing an additional generative process for n-grams, so that we can incorporate word order information into the model while keeping the learning simple. We name this extension SGPV-ngram, and take the generative process of SGPV-bigram as an example. For each text $d_n \in D$, $n = 1, 2, \dots, N$:
(a) Draw the paragraph vector $\vec{d}_n \sim \mathcal{N}(\mu, \Sigma)$.
(b) For each word $w^n_i \in d_n$, $i = 1, 2, \dots, l_n$: draw the word $w^n_i \sim \mathrm{softmax}(\vec{d}_n \cdot W)_i$.
(c) For each bigram $g^n_i \in d_n$, $i = 1, 2, \dots, s_n$: draw the bigram $g^n_i \sim \mathrm{softmax}(\vec{d}_n \cdot G)_i$.
(d) Draw the label $y_n \mid \vec{d}_n, U, b \sim \mathrm{softmax}(U \cdot \vec{d}_n + b)$,
where $G$ denotes a $K \times S$ bigram embedding matrix with $G_{*j} = \vec{g}_j$, and $S$ denotes the size of the bigram vocabulary. A minimal sketch of label prediction with the learned model is given below.
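As an illustration of prediction with the learned model, the sketch below freezes the learned parameters, infers a MAP paragraph vector for an unseen text, and predicts the label through the classifier $\mathrm{softmax}(U \cdot \hat{d} + b)$. It reuses the helpers from the earlier sketches; for simplicity it infers the paragraph vector from the word terms only, which is a simplification of the joint MAP estimate over the paragraph vector and the label, and the number of inference iterations is an assumption.

```python
def infer_paragraph_vector(word_ids, W, noise, rng, n_iters=50):
    """MAP inference of the paragraph vector of an unseen text (all parameters frozen)."""
    d_hat = rng.uniform(-0.5, 0.5, W.shape[1])
    for _ in range(n_iters):
        d_hat = gpv_map_step(d_hat, word_ids, W, noise, rng)
    return d_hat

def sgpv_predict(word_ids, W, U, b, noise, rng):
    """Predict a class label for an unseen text with a learned SGPV model."""
    d_hat = infer_paragraph_vector(word_ids, W, noise, rng)
    logits = U @ d_hat + b                    # U is C x K, b is a length-C bias
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()                      # softmax(U . d_hat + b)
    return int(np.argmax(probs)), probs

# For SGPV-bigram, the inference step would additionally include analogous
# terms for the observed bigrams and the bigram embedding matrix G.
```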
For SGPV-bigram, the joint probability over the whole corpus is then defined as

$$p(D) = \prod_{n=1}^{N} \int p(\vec{d}_n \mid \mu, \Sigma) \Big( \prod_{w^n_i \in d_n} p(w^n_i \mid W, \vec{d}_n) \Big) \Big( \prod_{g^n_i \in d_n} p(g^n_i \mid G, \vec{d}_n) \Big) p(y_n \mid \vec{d}_n, U, b) \, d\vec{d}_n$$

6 EXPERIMENTS

In this section, we introduce the experimental settings and empirical results on a set of text classification tasks.

6.1 DATASET AND EXPERIMENTAL SETUP

We make use of five publicly available benchmark datasets for comparison.
TREC: The TREC Question Classification dataset (Li & Roth, 2002) (http://cogcomp.cs.illinois.edu/Data/QA/QC/), which consists of 5,452 training questions and 500 test questions. The goal is to classify a question into one of 6 types depending on the kind of answer it seeks.
Subj: The Subjectivity dataset (Pang & Lee, 2004), which contains 5,000 subjective instances and 5,000 objective instances. The task is to classify a sentence as subjective or objective.
MR: Movie Reviews (Pang & Lee, 2005) (https://www.cs.cornell.edu/people/pabo/movie-review-data/), with one sentence per review. There are 5,331 positive sentences and 5,331 negative sentences. The objective is to classify each review as positive or negative.
SST-1: The Stanford Sentiment Treebank (Socher & Potts, 2013) (http://nlp.stanford.edu/sentiment/). SST-1 is provided with train/dev/test splits of size 8,544/1,101/2,210. It is a fine-grained classification task over five classes: very negative, negative, neutral, positive, and very positive.
SST-2: SST-2 is the same as SST-1 but with neutral reviews removed. We use the standard train/dev/test splits of size 6,920/872/1,821 for the binary classification task.
The same preprocessing steps were applied to all datasets: words are lowercased, and non-English characters and stop words occurring in the training set are removed. For fair comparison with other published results, we use the default train/test split for the TREC, SST-1 and SST-2 datasets. Since no explicit train/test split is provided for the Subj and MR datasets, we use 10-fold cross-validation instead. In our model, text and word vectors are randomly initialized with values uniformly distributed in the range [-0.5, +0.5]. Following the practice in (Tomas Mikolov & Dean, 2013b), we set the noise distribution for contexts and words as $P_{nw}(w) \propto \#(w)^{0.75}$. We adopt the same linear learning rate strategy, with the initial learning rate of our models set to 0.025. For the unsupervised methods, we use support vector machines (SVM; http://www.csie.ntu.edu.tw/~cjlin/libsvm/) as the classifier.

6.2 BASELINES

We adopted both unsupervised and supervised text representation methods as baselines.

6.2.1 UNSUPERVISED BASELINES

Bag-of-word-TFIDF and Bag-of-bigram-TFIDF. In the bag-of-word-TFIDF scheme (Salton & McGill, 1983), each text is represented by the tf-idf values of chosen feature words. The bag-of-bigram-TFIDF model is constructed by selecting the most frequent unigrams and bigrams from the training subset. We use the vanilla TFIDF in the gensim library (http://radimrehurek.com/gensim/).
LSI (S. Deerwester & Harshman, 1990) and LDA (Blei & Jordan, 2003). LSI maps both texts and words to lower-dimensional representations in a so-called latent semantic space using an SVD decomposition. In LDA, each word within a text is modeled as a finite mixture over an underlying set of topics. We use the vanilla LSI and LDA in the gensim library with the number of topics set to 100.
cBow (Tomas Mikolov & Dean, 2013b). The Continuous Bag-Of-Words model. We use average pooling as the global pooling mechanism to compose a sentence vector from a set of word vectors.
PV (Quoc Le, 2014).
Paragraph Vector is an unsupervised model for learning distributed representations of words and paragraphs.
FastSent (Felix Hill, 2016). In FastSent, given a simple representation of a sentence in context, the model attempts to predict the adjacent sentences.
Note that, unlike LDA and GPV, the PV, LSI, cBow, and FastSent models cannot infer the representations of unseen texts. Therefore, these four models need to fold in all the test data to learn representations together with the training data, which makes them inefficient in practice.

6.2.2 SUPERVISED BASELINES

NBSVM and MNB (S. Wang, 2012). Naive Bayes SVM and Multinomial Naive Bayes with uni-grams and bi-grams.
DAN (Mohit Iyyer & III, 2015). The Deep Averaging Network uses average word vectors as the input and applies multiple neural layers to learn text representations under supervision.
CNN-multichannel (Kim, 2014). CNN-multichannel employs a convolutional neural network for sentence modeling.
DCNN (N. Kalchbrenner, 2014). DCNN uses a convolutional architecture that replaces wide convolutional layers with dynamic pooling layers.
MV-RNN (Richard Socher & Ng, 2012). The Matrix-Vector RNN represents every word and longer phrase in a parse tree as both a vector and a matrix.
DRNN (Irsoy & Cardie, 2014). Deep Recursive Neural Networks are constructed by stacking multiple recursive layers.
Dependency Tree-LSTM (Kai Sheng Tai & Manning, 2015). The Dependency Tree-LSTM, based on the LSTM structure, uses dependency parses of each sentence.

6.3 PERFORMANCE OF GENERATIVE PARAGRAPH VECTOR

We first evaluate the GPV model by comparing it with the unsupervised baselines on the TREC, Subj and MR datasets. As shown in Table 1, GPV works better than PV on all three tasks. This demonstrates the benefit of introducing a prior distribution (i.e., regularization) over the paragraph vectors. Moreover, GPV also outperforms almost all the baselines on the three tasks, the exceptions being Bow-TFIDF and Bigram-TFIDF on the TREC collection. The results show that for unsupervised text representation, the bag-of-words representation is simple yet powerful and can beat many embedding models. Meanwhile, by using a complete generative process to infer the paragraph vectors, our model achieves state-of-the-art performance among the embedding-based models.

6.4 PERFORMANCE OF SUPERVISED GENERATIVE PARAGRAPH VECTOR

We compare the SGPV model to the supervised baselines on all five classification tasks. Empirical results are shown in Table 2. We can see that SGPV achieves performance comparable to the other deep learning models. Note that SGPV is much simpler than these deep models, with significantly fewer parameters and no complex structures. Moreover, deep models with convolutional layers or recurrent structures can potentially capture compositional semantics (e.g., phrases), while SGPV only relies on uni-grams. In this sense, SGPV is quite effective in learning text representations. Meanwhile, if we take Table 1 into consideration, it is not surprising that SGPV consistently outperforms GPV on all three classification tasks. This also demonstrates that it is more effective to directly fit supervised representation models than to learn general-purpose representations in prediction scenarios. By introducing bi-grams, SGPV-bigram outperforms all the other deep models on four tasks. In particular, the improvements of SGPV-bigram over the other baselines are significant on SST-1 and SST-2.
These results again demonstrate the effectiveness of the proposed SGPV model for text representation. They also show the importance of word order information in modeling text semantics.

7 CONCLUSIONS

In this paper, we introduce GPV and SGPV for learning distributed representations of pieces of text. With a complete generative process, our models are able to infer vector representations as well as labels for unseen texts. Our models remain as simple as the PV models, and thus can be efficiently learned over large-scale text corpora. Even with such simple structures, both GPV and SGPV produce state-of-the-art results compared with existing baselines, especially the complex deep models. For future work, we may consider other probabilistic distributions for both paragraph vectors and word vectors.
1. What is the basis for the reviewer's criticism of the paper's motivation?
2. Are there any concerns regarding the paper's formatting and citation style?
3. How does the reviewer assess the novelty of the proposed approach in comparison to existing works?
Review
While this paper has some decent accuracy numbers, it is hard to argue for acceptance given the following: 1) the motivation is based on the incorrect assumption that Paragraph Vector cannot handle unseen data; 2) there are numerous basic formatting and BibTeX citation issues; 3) there is a lack of novelty: this is yet another standard directed LDA-like bag-of-words/bigram model.
1. What is the focus of the paper, and how does it relate to the original paragraph vectors paper?
2. What is the alleged shortcoming of the original paper that the current paper aims to address?
3. Is the premise of the work presented in the paper valid?
4. Are there any novel aspects of the idea presented in the paper that could be explored further?
5. How might the authors revise their approach to create a different type of paper?
Review
It feels that this paper is structured around a shortcoming of the original paragraph vectors paper, namely an alleged inability to infer representations for text outside of the training data. I am reasonably sure that this is not the case. Unfortunately, on that basis, the premise for the work presented here no longer holds, which renders most of the subsequent discussion void. While I recommend this paper be rejected, I encourage the authors to revisit the novel aspects of the idea presented here and see if they can be turned into a different type of paper going forward.
ICLR
Title Generative Paragraph Vector Abstract The recently introduced Paragraph Vector is an efficient method for learning highquality distributed representations for pieces of texts. However, an inherent limitation of Paragraph Vector is lack of ability to infer distributed representations for texts outside of the training set. To tackle this problem, we introduce a Generative Paragraph Vector, which can be viewed as a probabilistic extension of the Distributed Bag of Words version of Paragraph Vector with a complete generative process. With the ability to infer the distributed representations for unseen texts, we can further incorporate text labels into the model and turn it into a supervised version, namely Supervised Generative Paragraph Vector. In this way, we can leverage the labels paired with the texts to guide the representation learning, and employ the learned model for prediction tasks directly. Experiments on five text classification benchmark collections show that both model architectures can yield superior classification performance over the state-of-the-art counterparts. 1 INTRODUCTION A central problem in many text based applications, e.g., sentiment classification (Pang & Lee, 2008), question answering (Stefanie Tellex & Marton., 2003) and machine translation (I. Sutskever & Le, 2014), is how to capture the essential meaning of a piece of text in a fixed-length vector. Perhaps the most popular fixed-length vector representations for texts is the bag-of-words (or bag-of-ngrams) (Harris, 1954). Besides, probabilistic latent semantic indexing (PLSI) (Hofmann, 1999) and latent Dirichlet allocation (LDA) (Blei & Jordan, 2003) are two widely adopted alternatives. A recent paradigm in this direction is to use a distributed representation for texts (T. Mikolov & Dean, 2013a). In particular, Le and Mikolov (Quoc Le, 2014; Andrew M.Dai, 2014) show that their method, Paragraph Vector (PV), can capture text semantics in dense vectors and outperform many existing representation models. Although PV is an efficient method for learning high-quality distributed text representations, it suffers a similar problem as PLSI that it provides no model on text vectors: it is unclear how to infer the distributed representations for texts outside of the training set with the learned model (i.e., learned text and word vectors). Such a limitation largely restricts the usage of the PV model, especially in those prediction focused scenarios. Inspired by the completion and improvement of LDA over PLSI, we first introduce the Generative Paragraph Vector (GPV) with a complete generation process for a corpus. Specifically, GPV can be viewed as a probabilistic extension of the Distributed Bag of Words version of Paragraph Vector (PVDBOW), where the text vector is viewed as a hidden variable sampled from some prior distributions, and the words within the text are then sampled from the softmax distribution given the text and word vectors. With a complete generative process, we are able to infer the distributed representations of new texts based on the learned model. Meanwhile, the prior distribution over text vectors also acts as a regularization factor from the view of optimization, thus can lead to higher-quality text representations. 
More importantly, with the ability to infer the distributed representations for unseen texts, we now can directly incorporate labels paired with the texts into the model to guide the representation learning, and turn the model into a supervised version, namely Supervised Generative Paragraph Vector (SGPV). Note that supervision cannot be directly leveraged in the original PV model since it has no generalization ability on new texts. By learning the SGPV model, we can directly employ SGPV to predict labels for new texts. As we know, when the goal is prediction, fitting a supervised model would be a better choice than learning a general purpose representations of texts in an unsupervised way. We further show that SGPV can be easily extended to accommodate n-grams so that we can take into account word order information, which is important in learning semantics of texts. We evaluated our proposed models on five text classification benchmark datasets. For the unsupervised GPV, we show that its superiority over the existing counterparts, such as bag-of-words, LDA, PV and FastSent (Felix Hill, 2016). For the SGPV model, we take into comparison both traditional supervised representation models, e.g. MNB (S. Wang, 2012), and a variety of state-of-the-art deep neural models for text classification (Kim, 2014; N. Kalchbrenner, 2014; Socher & Potts, 2013; Irsoy & Cardie, 2014). Again we show that the proposed SGPV can outperform the baseline methods by a substantial margin, demonstrating it is a simple yet effective model. The rest of the paper is organized as follows. We first review the related work in section 2 and briefly describe PV in section 3. We then introduce the unsupervised generative model GPV and supervised generative model SGPV in section 4 and section 5 respectively. Experimental results are shown in section 6 and conclusions are made in section 7. 2 RELATED WORK Many text based applications require the text input to be represented as a fixed-length feature vector. The most common fixed-length representation is bag-of-words (BoW) (Harris, 1954). For example, in the popular TF-IDF scheme (Salton & McGill, 1983), each document is represented by tfidf values of a set of selected feature-words. However, the BoW representation often suffers from data sparsity and high dimension. Meanwhile, due to the independent assumption between words, BoW representation has very little sense about the semantics of the words. To address this shortcoming, several dimensionality reduction methods have been proposed, such as latent semantic indexing (LSI) (S. Deerwester & Harshman, 1990), Probabilistic latent semantic indexing (PLSI) (Hofmann, 1999) and latent Dirichlet allocation (LDA) (Blei & Jordan, 2003). Both PLSI and LDA have a good statistical foundation and proper generative model of the documents, as compared with LSI which relies on a singular value decomposition over the term-document cooccurrence matrix. In PLSI, each word is generated from a single topic, and different words in a document may be generated from different topics. While PLSI makes great effect on probabilistic modeling of documents, it is not clear how to assign probability to a document outside of the training set with the learned model. To address this issue, LDA is proposed by introducing a complete generative process over the documents, and demonstrated as a state-of-the-art document representation method. 
To further tackle the prediction task, Supervised LDA (David M.Blei, 2007) is developed by jointly modeling the documents and the labels. Recently, distributed models have been demonstrated as efficient methods to acquire semantic representations of texts. A representative method is Word2Vec (Tomas Mikolov & Dean, 2013b), which can learn meaningful word representations in an unsupervised way from large scale corpus. To represent sentences or documents, a simple approach is then using a weighted average of all the words. A more sophisticated approach is combing the word vectors in an order given by a parse tree (Richard Socher & Ng, 2012). Later, Paragraph Vector (PV) (Quoc Le, 2014) is introduced to directly learn the distributed representations of sentences and documents. There are two variants in PV, namely the Distributed Memory Model of Paragraph Vector (PV-DM) and the Distributed Bag of Words version of Paragraph Vector (PV-DBOW), based on two different model architectures. Although PV is a simple yet effective distributed model on sentences and documents, it suffers a similar problem as PLSI that it provides no model on text vectors: it is unclear how to infer the distributed representations for texts outside of the training set with the learned model. Besides these unsupervised representation learning methods, there have been many supervised deep models with directly learn sentence or document representations for the prediction tasks. Recursive Neural Network (RecursiveNN) (Richard Socher & Ng, 2012) has been proven to be efficient in terms of constructing sentence representations. Recurrent Neural Network (RNN) (Ilya Sutskever & Hinton, 2011) can be viewed as an extremely deep neural network with weight sharing across time. Convolution Neural Network (CNN) (Kim, 2014) can fairly determine discriminative phrases in a text with a max-pooling layer. However, these deep models are usually quite complex and thus the training would be time-consuming on large corpus. 3 PARAGRAPH VECTOR Since our model can be viewed as a probabilistic extension of the PV-DBOW model with a complete generative process, we first briefly review the PV-DBOW model for reference. In PV-DBOW, each text is mapped to a unique paragraph vector and each word is mapped to a unique word vector in a continuous space. The paragraph vector is used to predict target words randomly sampled from the paragraph as shown in Figure 1. More formally, Let D={d1, . . . ,dN} denote a corpus of N texts, where each text dn = (wn1 , w n 2 , . . . , w n ln ), n ∈ 1, 2, . . . , N is an lnlength word sequence over the word vocabulary V of size M . Each text d ∈ D and each word w ∈ V is associated with a vector ~d ∈ RK and ~w ∈ RK , respectively, where K is the embedding dimensionality. The predictive objective of the PV-DBOW for each word wnl ∈ dn is defined by the softmax function p(wni |dn) = exp(~wni · ~dn)∑ w′∈V exp(~w ′ · ~dn) (1) The PV-DBOW model can be efficiently trained using the stochastic gradient descent (Rumelhart & Williams, 1986) with negative sampling (T. Mikolov & Dean, 2013a). As compared with traditional topic models, e.g. PLSI and LDA, PV-DBOW conveys the following merits. Firstly, PV-DBOW using negative sampling can be interpretated as a matrix factorization over the words-by-texts co-occurrence matrix with shifted-PMI values (Omer Levy & Ramat-Gan, 2015). 
In this way, more discriminative information (i.e., PMI) can be modeled in PV as compared with the generative topic models which learn over the words-by-texts co-occurrence matrix with raw frequency values. Secondly, PV-DBOW does not have the explicit “topic” layer and allows words automatically clustered according to their co-occurrence patterns during the learning process. In this way, PV-DBOW can potentially learn much finer topics than traditional topic models given the same hidden dimensionality of texts. However, a major problem with PV-DBOW is that it provides no model on text vectors: it is unclear how to infer the distributed representations for unseen texts. 4 GENERATIVE PARAGRAPH VECTOR In this section, we introduce the GPV model in detail. Overall, GPV is a generative probabilistic model for a corpus. We assume that for each text, a latent paragraph vector is first sampled from some prior distributions, and the words within the text are then generated from the normalized exponential (i.e. softmax) distribution given the paragraph vector and word vectors. In our work, multivariate normal distribution is employed as the prior distribution for paragraph vectors. It could be replaced by other prior distributions and we will leave this as our future work. The specific generative process is as follows: For each text dn ∈D, n = 1, 2, . . . , N : (a) Draw paragraph vector ~dn ∼ N (µ,Σ) (b) For each word wni ∈ dn, i = 1, 2, . . . , ln : Draw word wni ∼ softmax(~dn ·W )i where W denotes a k ×M word embedding matrix with W∗j = ~wj , and softmax(~dn ·W )i is the softmax function defined the same as in Equation (1). Figure 2 (Left) provides the graphical model of this generative process. Note that GPV differs from PV-DBOW in that the paragraph vector is a hidden variable generated from some prior distribution, which allows us to infer the paragraph vector over future texts given the learned model. Based on the above generative process, the probability of the whole corpus can be written as follows: p(D)= N∏ n=1 ∫ p(~dn|µ,Σ) ∏ wni ∈dn p(wni |W, ~dn)d~dn To learn the model, direct maximum likelihood estimation is not tractable due to non-closed form of the integral. We approximate this learning problem by using MAP estimates for ~dn, which can be formulated as follows: (µ∗,Σ∗,W ∗) = arg max µ,Σ,W ∏ p(d̂n|µ,Σ) ∏ wni ∈dn p(wni |W, d̂n) where d̂n denotes the MAP estimate of ~dn for dn, (µ∗,Σ∗,W ∗) denotes the optimal solution. Note that for computational simplicity, in this work we fixed µ as a zero vector and Σ as a identity matrix. In this way, all the free parameters to be learned in our model are word embedding matrix W . By taking the logarithm and applying the negative sampling idea to approximate the softmax function, we obtain the final learning problem L= N∑ n=1 ( −1 2 ||d̂n||2+ ∑ wni ∈dn ( log σ(~wni ·d̂n)+k·Ew′∼Pnw log σ(− ~w′ · d̂n) )) where σ(x) = 1/(1 + exp(−x)), k is the number of “negative” samples, w′ denotes the sampled word and Pnw denotes the distribution of negative word samples. As we can see from the final objective function, the prior distribution over paragraph vectors actually act as a regularization term. From the view of optimization, such regularization term could constrain the learning space and usually produces better paragraph vectors. For optimization, we use coordinate ascent, which first optimizes the word vectors W while leaving the MAP estimates (d̂) fixed. 
Then we find the new MAP estimate for each document while leaving the word vectors fixed, and continue this process until convergence. To accelerate the learning, we adopt a similar stochastic learning framework as in PV which iteratively updates W and estimates ~d by randomly sampling text and word pairs. At prediction time, given a new text, we perform an inference step to compute the paragraph vector for the input text. In this step, we freeze the vector representations of each word, and apply the same MAP estimation process of ~d as in the learning phase. With the inferred paragraph vector of the test text, we can feed it to other prediction models for different applications. 5 SUPERVISED GENERATIVE PARAGRAPH VECTOR With the ability to infer the distributed representations for unseen texts, we now can incorporate the labels paired with the texts into the model to guide the representation learning, and turn the model into a more powerful supervised version directly towards prediction tasks. Specifically, we introduce an additional label generation process into GPV to accommodate text labels, and obtain the Supervised Generative Paragraph Vector (SGPV) model. Formally, in SGPV, the n-th text dn and the corresponding class label yn ∈ {1, 2, . . . , C} arise from the following generative process: For each text dn ∈D, n = 1, 2, . . . , N : (a) Draw paragraph vector ~dn ∼ N (µ,Σ) (b) For each word wni ∈ dn, i = 1, 2, . . . , ln : Draw word wni ∼ softmax(~dn ·W )i (c) Draw label yn|~dn, U, b ∼ softmax(U · ~dn+b) where U is a C ×K matrix for a dataset with C output labels, and b is a bias term. The graphical model of the above generative process is depicted in Figure 2 (Right). SGPV defines the probability of the whole corpus as follows p(D)= N∏ n=1 ∫ p(~dn|µ,Σ) ( ∏ wni ∈dn p(wni |W, ~dn) ) p(yn|~dn, U, b)d~dn We adopt a similar learning process as GPV to estimate the model parameters. Since the SGPV includes the complete generative process of both paragraphs and labels, we can directly leverage it to predict the labels of new texts. Specifically, at prediction time, given all the learned model parameters, we conduct an inference step to infer the paragraph vector as well as the label using MAP estimate over the test text. The above SGPV may have limited modeling ability on text representation since it mainly relies on uni-grams. As we know, word order information is often critical in capturing the meaning of texts. For example, “machine learning” and “learning machine” are totally different in meaning with the same words. There has been a variety of deep models using complex architectures such as convolution layers or recurrent structures to help capture such order information at the expense of large computational cost. Here we propose to extend SGPV by introducing an additional generative process for n-grams, so that we can incorporate the word order information into the model and meanwhile keep its simplicity in learning. We name this extension as SGPV-ngram. Here we take the generative process of SGPVbigram as an example. For each text dn ∈D, n = 1, 2, . . . , N : (a) Draw paragraph vector ~dn ∼ N (µ,Σ) (b) For each word wni ∈ dn, i = 1, 2, . . . , ln : Draw word wni ∼ softmax(~dn ·W )i (c) For each bigram gni ∈ dn, i = 1, 2, . . . , sn : Draw bigram gni ∼ softmax(~dn ·G)i (d) Draw label yn|~dn, U, b ∼ softmax(U · ~dn+b) where G denotes a K × S bigram embedding matrix with G∗j = ~gj , and S denotes the size of bigram vocabulary. 
The joint probability over the whole corpus is then defined as p(D)= N∏ n=1 ∫ p(~dn|µ,Σ) ( ∏ wni ∈dn p(wni |W, ~dn) )( ∏ gni ∈dn p(gni |G, ~dn) ) p(yn|~dn, U, b)d~dn 6 EXPERIMENTS In this section, we introduce the experimental settings and empirical results on a set of text classification tasks. 6.1 DATASET AND EXPERIMENTAL SETUP We made use of five publicly available benchmark datasets in comparison. TREC: The TREC Question Classification dataset (Li & Roth, 2002)1 which consists of 5, 452 train questions and 500 test questions. The goal is to classify a question into 6 different types depending on the answer they seek for. Subj: Subjectivity dataset (Pang & Lee, 2004) which contains 5, 000 subjective instances and 5, 000 objective instances. The task is to classify a sentence as being subjective or objective. MR: Movie reviews (Pang & Lee, 2005) 2 with one sentence per review. There are 5, 331 positive sentences and 5, 331 negative sentences. The objective is to classify each review into positive or negative category. SST-1: Stanford Sentiment Treebank (Socher & Potts, 2013) 3. SST-1 is provided with train/dev/test splits of size 8, 544/1, 101/2, 210. It is a fine-grained classification over five classes: very negative, negative, neutral, positive, and very positive. SST-2: SST-2 is the same as SST-1 but with neutral reviews removed. We use the standard train/dev/test splits of size 6, 920/872/1, 821 for the binary classification task. Preprocessing steps were applied to all datasets: words were lowercased, non-English characters and stop words occurrence in the training set are removed. For fair comparison with other published results, we use the default train/test split for TREC, SST-1 and SST-2 datasets. Since explicit split of train/test is not provided by subj and MR datasets, we use 10-fold cross-validation instead. In our model, text and word vectors are randomly initialized with values uniformly distributed in the range of [-0.5, +0.5]. Following the practice in (Tomas Mikolov & Dean, 2013b) , we set the noise distributions for context and words as pnw(w) ∝ #(w)0.75. We adopt the same linear learning rate strategy where the initial learning rate of our models is 0.025. For unsupervised methods, we use support vector machines (SVM) 4 as the classifier. 6.2 BASELINES We adopted both unsupervised and supervised methods on text representation as baselines. 6.2.1 UNSUPERVISED BASELINES Bag-of-word-TFIDF and Bag-of-bigram-TFIDF. In the bag-of-word-TFIDF scheme (Salton & McGill, 1983) , each text is represented as the tf-idf value of chosen feature-words. The bag-of- 1http://cogcomp.cs.illinois.edu/Data/QA/QC/ 2https://www.cs.cornell.edu/people/pabo/movie-review-data/ 3http://nlp.stanford.edu/sentiment/ 4http://www.csie.ntu.edu.tw/˜cjlin/libsvm/ bigram-TFIDF model is constructed by selecting the most frequent unigrams and bigrams from the training subset. We use the vanilla TFIDF in the gensim library5. LSI (S. Deerwester & Harshman, 1990) and LDA (Blei & Jordan, 2003). LSI maps both texts and words to lower-dimensional representations in a so-called latent semantic space using SVD decomposition. In LDA, each word within a text is modeled as a finite mixture over an underlying set of topics. We use the vanilla LSI and LDA in the gensim library with topic number set as 100. cBow (Tomas Mikolov & Dean, 2013b). Continuous Bag-Of-Words model. We use average pooling as the global pooling mechanism to compose a sentence vector from a set of word vectors. PV (Quoc Le, 2014). 
Paragraph Vector is an unsupervised model to learn distributed representations of words and paragraphs. FastSent (Felix Hill, 2016). In FastSent, given a simple representation of some sentence in context, the model attempts to predict adjacent sentences. Note that unlike LDA and GPV, LSI, cBow, and FastSent cannot infer the representations of unseen texts. Therefore, these four models need to fold-in all the test data to learn representations together with training data, which makes it not efficient in practice. 6.2.2 SUPERVISED BASELINES NBSVM and MNB (S. Wang, 2012). Naive Bayes SVM and Multinomial Naive Bayes with unigrams and bi-grams. DAN (Mohit Iyyer & III, 2015). Deep averaging network uses average word vectors as the input and applies multiple neural layers to learn text representation under supervision. CNN-multichannel (Kim, 2014). CNN-multichannel employs convolutional neural network for sentence modeling. DCNN (N. Kalchbrenner, 2014). DCNN uses a convolutional architecture that replaces wide convolutional layers with dynamic pooling layers. MV-RNN (Richard Socher & Ng, 2012). Matrix-Vector RNN represents every word and longer phrase in a parse tree as both a vector and a matrix. DRNN (Irsoy & Cardie, 2014). Deep Recursive Neural Networks is constructed by stacking multiple recursive layers. Dependency Tree-LSTM (Kai Sheng Tai & Manning, 2015). The Dependency Tree-LSTM based on LSTM structure uses dependency parses of each sentence. 6.3 PERFORMANCE OF GENERATIVE PARAGRAPH VECTOR We first evaluate the GPV model by comparing with the unsupervised baselines on the TREC, Subj and MR datasets. As shown in table 1, GPV works better than PV over the three tasks. It demonstrates the benefits of introducing a prior distribution (i.e., regularization) over the paragraph vectors. Moreover, GPV can also outperform almost all the baselines on three tasks except Bow-TFIDF and Bigram-TFIDF on the TREC collection. The results show that for unsupervised text representation, bag-of-words representation is quite simple yet powerful which can beat many embedding models. Meanwhile, by using a complete generative process to infer the paragraph vectors, our model can achieve the state-of-the-art performance among the embedding based models. 6.4 PERFORMANCE OF SUPERVISED GENERATIVE PARAGRAPH VECTOR We compare SGPV model to supervised baselines on all the five classification tasks. Empirical results are shown in Table 2. We can see that SGPV achieves comparable performance against other deep learning models. Note that SGPV is much simpler than these deep models with significantly less parameters and no complex structures. Moreover, deep models with convolutional layers or recurrent structures can potentially capture compositional semantics (e.g., phrases), while SGPV only 5http://radimrehurek.com/gensim/ relies on uni-gram. In this sense, SGPV is quite effective in learning text representation. Meanwhile, if we take Table 1 into consideration, it is not surprising to see that SGPV can consistently outperform GPV on all the three classification tasks. This also demonstrates that it is more effective to directly fit supervised representation models than to learn a general purpose representation in prediction scenarios. By introducing bi-grams, SGPV-bigram can outperform all the other deep models on four tasks. In particular, the improvements of SGPV-bigram over other baselines are significant on SST-1 and SST-2. 
These results again demonstrate the effectiveness of our proposed SGPV model on text representation. They also show the importance of word order information in modeling text semantics. 7 CONCLUSIONS In this paper, we introduce GPV and SGPV for learning distributed representations of pieces of text. With a complete generative process, our models are able to infer vector representations as well as labels for unseen texts. Our models remain as simple as the PV model and thus can be learned efficiently over large-scale text corpora. Even with such simple structures, both GPV and SGPV produce state-of-the-art results compared with existing baselines, especially the complex deep models. For future work, we may consider other probabilistic distributions for both paragraph vectors and word vectors.
1. What is the main contribution of the paper regarding paragraph vectors? 2. What are the strengths and weaknesses of the proposed approach compared to prior works? 3. Do you have any concerns about the novelty of the work and its relation to previous research? 4. How does the reviewer assess the clarity and quality of the paper's content, including citations and formatting? 5. What are the limitations of the experimental results and their comparisons to other works?
Review
Review This work reframes paragraph vectors from a generative point of view and in so doing, motivates the existing method of inferring paragraph vectors as well as applying a L2 regularizer on the paragraph embeddings. The work also motivates joint learning of a classifier on the paragraph vectors to perform text classification. The paper has numerous citation issues both in formatting within the text and the formatting of the bibliography, e.g. on some occasions including first names, on others not. I suggest the authors use a software package like BibTex to have a more consistent bibliography. There seems to be little novelty in this work. The authors claim that there is no proposed method for inferring unseen documents for paragraph vectors. This is untrue. In the original paragraph vector paper, the authors show that to get a new vector, the rest of the model parameters are held fixed and gradient descent is performed on the new paragraph vector. This means the original dataset is not needed when inferring a paragraph vector for new text. This work seems to be essentially doing the same thing when finding the MAP estimate for a new vector. Thus the only contribution from the generative paragraph vector framing is the regularization on the embedding matrix. The supervised generative paragraph vector amounts to jointly training a linear classifier on the paragraph vectors, while inference for the paragraph vector is unchanged. For the n-gram based approach, the authors should cite Li et al., 2015. In the experiments, table 1 and 2 are badly formatted with .0 being truncated. The authors also do not state the size of the paragraph vector. Finally the SGPV results are actually worse than that reported in the original paragraph vector paper where SST-1 got 48.7 and SST-2 got 86.3. Bofang Li, Tao Liu, Xiaoyong Du, Deyuan Zhang, Zhe Zhao, Learning Document Embeddings by Predicting N-grams for Sentiment Classification of Long Movie Reviews, 2015.
ICLR
Title Reparameterized Variational Divergence Minimization for Stable Imitation Abstract State-of-the-art results in imitation learning are currently held by adversarial methods that iteratively estimate the divergence between student and expert policies and then minimize this divergence to bring the imitation policy closer to expert behavior. Analogous techniques for imitation learning from observations alone (without expert action labels), however, have not enjoyed the same ubiquitous successes. Recent work in adversarial methods for generative models has shown that the measure used to judge the discrepancy between real and synthetic samples is an algorithmic design choice, and that different choices can result in significant differences in model performance. Choices including Wasserstein distance and various f -divergences have already been explored in the adversarial networks literature, while more recently the latter class has been investigated for imitation learning (Ke et al., 2019). Unfortunately, we find that in practice this existing imitation-learning framework for using f -divergences suffers from numerical instabilities stemming from the combination of function approximation and policygradient reinforcement learning. In this work, we alleviate these challenges and offer a reparameterization of adversarial imitation learning as f -divergence minimization before further extending the framework to handle the problem of imitation from observations only. Empirically, we demonstrate that our design choices for coupling imitation learning and f -divergences are critical to recovering successful imitation policies. Moreover, we find that with the appropriate choice of f divergence, we can obtain imitation-from-observation algorithms that outperform baseline approaches and more closely match expert performance in continouscontrol tasks with low-dimensional observation spaces. With high-dimensional observations, we still observe a significant gap with and without action labels, offering an interesting avenue for future work. 1 INTRODUCTION Imitation Learning (IL) (Osa et al., 2018) refers to a paradigm of reinforcement learning in which the learning agent has access to an optimal, reward-maximizing expert for the underlying environment. In most work, this access is provided through a dataset of trajectories where each observed state is annotated with the action prescribed by the expert policy. This is often an extremely powerful learning paradigm in contrast to standard reinforcement learning, since not all tasks of interest admit easily-specified reward functions. Additionally, not all environments are amenable to the prolonged and potentially unsafe exploration needed for reward-maximizing agents to arrive at satisfactory policies (Achiam et al., 2017; Chow et al., 2019). While the traditional formulation of the IL problem assumes access to optimal expert action labels, the provision of such information can often be laborious (in the case of a real, human expert) or incur significant financial cost (such as using elaborate instrumentation to record expert actions). Additionally, this restrictive assumption removes a vast number of rich, observation-only data sources from consideration (Zhou et al., 2018). 
To bypass these challenges, recent work (Liu et al., 2018; Torabi et al., 2018a;b; Edwards et al., 2019; Sun et al., 2019) has explored what is perhaps a more natural problem formulation in which an agent must recover an imitation policy from a dataset containing only expert observation sequences. While this Imitation Learning from Observations (ILfO) setting carries tremendous potential, such as enabling an agent to learn complex tasks from watching freely available videos on the Internet, it also is fraught with significant additional challenges. In this paper, we show how to incorporate recent advances in generative-adversarial training of deep neural networks to tackle imitation-learning problems and advance the state-of-the-art in ILfO. With these considerations in mind, the overarching goal of this work is to enable sample-efficient imitation from expert demonstrations, both with and without the provision of expert action labels. The rich literature on Generative Adversarial Networks (Goodfellow et al., 2014) has expanded in recent years to include alternative formulations of the underlying objective that yield qualitatively different solutions to the saddle-point optimization problem (Li et al., 2015; Dziugaite et al., 2015; Zhao et al., 2016; Nowozin et al., 2016; Arjovsky et al., 2017; Gulrajani et al., 2017). Of notable interest are the findings of Nowozin et al. (2016) who present Variational Divergence Minimization (VDM), a generalization of the generative-adversarial approach to arbitrary choices of distance measures between probability distributions drawn from the class of f -divergences (Ali & Silvey, 1966; Csiszár et al., 2004). Applying VDM with varying choices of f - divergence, Nowozin et al. (2016) encounter learned synthetic distribu- tions that can exhibit differences from one another while producing equally realistic samples. Translating this idea for imitation is complicated by the fact that the optimization of the generator occurs via policy-gradient reinforcement learning (Sutton et al., 2000). Existing work in combining adversarial IL and f -divergences (Ke et al., 2019), despite being well-motivated, fails to account for this difference; the end results (shown partially in Figure 1, where TV-VIM is the method of Ke et al. (2019), and discussed further in later sections) are imitation-learning algorithms that scale poorly to environments with higher-dimensional observations. In this work, we assess the effect of the VDM principle and consideration of alternative f - divergences in the contexts of IL and ILfO. We begin by reparameterizing the framework of Ke et al. (2019) for the standard IL problem. Our version transparently exposes the choices practitioners must make when designing adversarial imitation algorithms for arbitrary choices of f -divergence. We then offer a single instantiation of our framework that, in practice, allows stable training of good policies across multiple choices of f -divergence. An example is illustrated in Figure 1 where our methods (TV-VIM-sigmoid and TV-VIMO-sigmoid) result in significantly superior policies. We go on to extend our framework to encapsulate the ILfO setting and examine the efficacy of the resulting new algorithms across a range of continuous-control tasks in the MuJoCo (Todorov et al., 2012) domain. Our empirical results validate our framework as a viable unification of adversarial imitation methods under the VDM principle. 
With the assistance of recent advances in stabilizing regularization for adversarial training (Mescheder et al., 2018), improvements in performance can be attained under an appropriate choice of f -divergence. However, there is still a significant performance gap between the recovered imitation policies and expert behavior for tasks with high dimensional observations, leaving open directions for future work in developing improved ILfO algorithms. 2 RELATED WORK The algorithms presented in this work fall in with inverse reinforcement learning (IRL) (Ng et al.; Abbeel & Ng, 2004; Syed & Schapire, 2007; Ziebart et al., 2008; Finn et al., 2016; Ho & Ermon, 2016) approaches to IL. Early successes in this regime tend to rely on hand-engineered feature rep- resentations for success (Abbeel & Ng, 2004; Ziebart et al., 2008; Levine et al., 2011). Only in recent years, with the aid of deep neural networks, has there been a surge in the number of approaches that are capable of scaling to the raw, high-dimensional observations found in real-world control problems (Finn et al., 2016; Ho & Ermon, 2016; Duan et al., 2017; Li et al., 2017; Fu et al., 2017; Kim & Park, 2018). Our work focuses attention exclusively on adversarial methods for their widespread effectiveness across a range of imitation tasks without requiring interactive experts (Ho & Ermon, 2016; Li et al., 2017; Fu et al., 2017; Kostrikov et al., 2018); at the heart of these methods is the Generative Adversarial Imitation Learning (GAIL) (Ho & Ermon, 2016) approach which produces high-fidelity imitation policies and achieves state-of-the-art results across numerous continuous-control benchmarks by leveraging the expressive power of Generative Adversarial Networks (GANs) (Goodfellow et al., 2014) for modeling complex distributions over a high-dimensional support. From an IRL perspective, GAIL can be viewed as iteratively optimizing a parameterized reward function (discriminator) that, when used to optimize an imitation policy (generator) via policy-gradient reinforcement learning (Sutton et al., 2000), allows the agent to shift its own behavior closer to that of the expert. From the perspective of GANs, this is achieved by discriminating between the respective distributions over state-action pairs visited by the imitation and expert policies before training a generator to fool the discriminator and induce a state-action visitation distribution similar to that of the expert. While a large body of prior work exists for IL, recent work has drawn attention to the more challenging problem of imitation learning from observation (Sermanet et al., 2017; Liu et al., 2018; Goo & Niekum, 2018; Kimura et al., 2018; Torabi et al., 2018a;b; Edwards et al., 2019; Sun et al., 2019). To more closely resemble observational learning in humans and leverage the wealth of publiclyavailable, observation-only data sources, the ILfO problem considers learning from expert demonstration data where no expert action labels are provided. Many early approaches to ILfO use expert observation sequences to learn a semantic embedding space so that distances between observation sequences of the imitation and expert policies can serve as a cost signal to be minimized via reinforcement learning (Gupta et al., 2017; Sermanet et al., 2017; Dwibedi et al., 2018; Liu et al., 2018). In contrast, Torabi et al. 
(2018a) introduce Behavioral Cloning from Observation (BCO) which leverages state-action trajectories collected under a random policy to train an inverse dynamics model for inferring the action responsible for a transition between two input states (assuming the two represent a state and next-state pair). With this inverse model in hand, the observation-only demonstration data can be converted into the more traditional dataset of state-action pairs over which standard BC can be applied. Recognizing the previously discussed limitations of BC approaches, Torabi et al. (2018b) introduce the natural GAIL counterpart for ILfO, Generative Adversarial Imitation from Observation (GAIFO); GAIFO is identical to GAIL except the distributions under consideration in the adversarial game are over state transitions (state and next-state pairs), as opposed to stateaction pairs requiring expert action labels. While Torabi et al. (2018b) offer empirical results for continuous-control tasks with low-dimensional features as well as raw image observations, GAIFO falls short of expert performance in both settings leaving an open challenge for scalable ILfO algorithms that achieve expert performance across a wide spectrum of tasks. A central question of this work is to explore how alternative formulations of the GAN objective that underlies GAIFO might yield superior ILfO algorithms. For a more in-depth survey of ILfO approaches, we refer readers to Torabi et al. (2019). We refer readers to the Appendix for a broader overview of prior work. 3 BACKGROUND We begin by formulating the problems of imitation learning and imitation learning from observation respectively before taking a closer look at f -divergences and connecting them to imitation learning. 3.1 IMITATION LEARNING & IMITATION FROM OBSERVATION We operate within the Markov Decision Process (MDP) formalism (Bellman, 1957; Puterman, 2014) defined as a five-tupleM = 〈S,A,R, T , γ〉 where S denotes a (potentially infinite) set of states, A denotes a (potentially infinite) set of actions, R : S × A × S → R is a reward function, T : S × A → ∆(S) is a transition function, and γ ∈ [0, 1) is a discount factor. At each timestep, the agent observes the current state of the world, st ∈ S, and randomly samples an action according to its stochastic policy π : S → ∆(A). The environment then transitions to a new state according to the transition function T and produces a reward signal according to the reward function R that is communicative of the agent’s progress through the overall task. Unlike, the traditional reinforcement learning paradigm, the decision-making problem presented in IL lacks a concrete reward function; in lieu of R, a learner is provided with a dataset of expert demonstrationsD = {τ1, τ2, . . . τN}where each τi = (si1, ai1, si2, ai2, . . .) represents the sequence of states and corresponding actions taken by an expert policy, π∗. Naturally, the goal of an IL algorithm is to synthesize a policy π using D, along with access to the MDPM, whose behavior matches that of π∗. While the previous section outlines several possible avenues for using D to arrive at a satisfactory imitation policy, our work focuses on adversarial methods that build around GAIL (Ho & Ermon, 2016). 
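Before turning to the adversarial objectives, it may help to fix the data structures involved. The sketch below (plain Python/NumPy, with illustrative names such as `Trajectory` and `state_transition_pairs` that do not come from the paper) shows the two views of the same demonstrations: state-action pairs, available in the standard IL setting, and state-transition pairs, which are all that remain in the observation-only setting.

```python
from dataclasses import dataclass
from typing import List
import numpy as np

@dataclass
class Trajectory:
    """One expert demonstration tau_i = (s_1, a_1, s_2, a_2, ...).
    In the observation-only setting the `actions` field is simply unavailable."""
    states: np.ndarray   # shape (T, state_dim)
    actions: np.ndarray  # shape (T, action_dim)

def state_action_pairs(demos: List[Trajectory]) -> np.ndarray:
    """Samples used by action-labelled adversarial IL (GAIL-style discriminators)."""
    return np.concatenate([np.hstack([t.states, t.actions]) for t in demos])

def state_transition_pairs(demos: List[Trajectory]) -> np.ndarray:
    """(s, s') samples used when expert action labels are unavailable."""
    return np.concatenate(
        [np.hstack([t.states[:-1], t.states[1:]]) for t in demos])
```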
Following from the widespread success of GANs (Goodfellow et al., 2014), GAIL offers a highly-performant approach to IL wherein, at each iteration of the algorithm, transitions sampled from the current imitation policy are first used to update a discriminator, Dω(s, a), that acts a binary classifier to distinguish between state-action pairs sampled according to the distributions induced by the expert and student. Subsequently, treating the imitation policy as a generator, policy-gradient reinforcement learning is used to shift the current policy towards expert behavior, issuing higher rewards for those generated state-action pairs that are regarded as belonging to the expert according to Dω(s, a). More formally, this minimax optimization follows as min π max ω E(s,a)∼ρπ∗ [log(Dω(s, a))] + E(s,a)∼ρπ [log(1−Dω(s, a))] (1) where ρπ ∗ (s, a) and ρπ(s, a) denote the undiscounted stationary distributions over state-action pairs for the expert and imitation policies respectively. Here Dω(s, a) = σ(Vω(s, a)) where Vω(s, a) represents the unconstrained output of a discriminator neural network with parameters ω and σ(v) = (1 + e−x)−1 denotes the sigmoid activation function. Since the imitation policy only exerts control over the latter term in the above objective, the per-timestep reward function maximized by reinforcement learning is given as r(s, a, s′) = − log(1 − Dω(s, a)). In practice, an entropy regularization term is often added to the objective when optimizing the imitation policy so as to avoid premature convergence to a suboptimal solution (Mnih et al., 2016; Ho & Ermon, 2016; Neu et al., 2017). In order to accommodate various observation-only data sources (Zhou et al., 2018) and remove the burden of requiring expert action labels, the imitation from observation setting adjusts the expert demonstration dataset D such that each trajectory τi = (si1, si2, . . .) consists only of expert observation sequences. Retaining the goal of recovering an imitation policy that closely resembles expert behavior, Torabi et al. (2018b) introduce GAIFO as the natural extension of GAIL for matching the state transition distribution of the expert policy. Note that an objective for matching the stationary distribution over expert state transitions enables the provision of per-timestep feedback while simultaneously avoid the issues of temporal alignment that arise when trying to match trajectories directly. The resulting algorithm iteratively finds a solution to the following minimax optimization: min π max ω E(s,s′)∼ρπ∗ [log(Dω(s, s′))] + E(s,s′)∼ρπ [log(1−Dω(s, s′))] (2) where ρπ ∗ (s, s′) and ρπ(s, s′) now denote the analogous stationary distributions over successive state pairs while Dω(s, s′) = σ(Vω(s, s′)) represents binary classifier over state pairs. Similar to GAIL, the imitation policy is optimized via policy-gradient reinforcement learning with per-timestep rewards computed according to r(s, a, s′) = − log(1−Dω(s, s′)) and using entropy regularization as needed. 4 APPROACH In this section, we begin with an overview of f -divergences, their connection to GANs, and their impact on IL through the f -VIM framework (Ke et al., 2019) (Section 4.1). We then present an alternative view of the framework that transparently exposes the fundamental choice practictioners must make in order to circumvent practical issues that arise when applying f -VIM to high-dimensional tasks (Section 4.2). 
We conclude by presenting our approach for ILfO as f -divergence minimization (Section 4.3) followed by a brief discussion of a regularization technique used to stabilize discriminator training in our experiments (Section 4.4). 4.1 f -DIVERGENCES AND IMITATION LEARNING The GAIL and GAIFO approaches engage in an adversarial game where the discriminator estimates the divergence between state-action or state transition distributions according to the JensenShannon divergence (Goodfellow et al., 2014). In this work, our focus is on a more general class of divergences, that includes the Jensen-Shannon divergence, known as Ali-Silvey distances or f - divergences (Ali & Silvey, 1966; Csiszár et al., 2004). For two distributions P and Q with support over a domainX and corresponding continuous densities p and q, we have the f -divergence between them according to: Df (P ||Q) = ∫ X q(x)f( p(x) q(x) )dx (3) where f : R+ → R is a convex, lower-semicontinuous function such that f(1) = 0. As illustrated in Table 1, different choices of function f yield well-known divergences between probability distributions. In order to accommodate the tractable estimation of f -divergences when only provided samples from P and Q, Nguyen et al. (2010) offer an approach for variational estimation of f -divergences. Central to their procedure is the use of the convex conjugate function or Fenchel conjugate (Hiriart-Urruty & Lemaréchal, 2004), f∗, which exists for all convex, lower-semicontinuous functions f and is defined as the following supremum: f∗(t) = sup u∈domf {ut− f(u)} (4) Using the duality of the convex conjugate (f∗∗ = f ), Nguyen et al. (2010) represent f(u) = sup t∈domf∗ {tu− f∗(t)} enabling a variational bound: Df (P ||Q) = ∫ X q(x) sup t∈domf∗ { t p(x) q(x) − f∗(t) } dx ≥ sup T∈T ( ∫ X p(x)T (x)dx− ∫ X q(x)f∗(T (x))dx) = sup T∈T (Ex∼P [T (x)]− Ex∼Q[f∗(T (x))]) (5) where T is an arbitrary class of functions T : X → domf∗ . Nowozin et al. (2016) extend the use of this variational lower bound for GANs that utilize arbitrary f -divergences, or f -GANs. Specifically, the two distributions of interest are the real data distribution P and a synthetic distribution represented by a generative model Qθ with parameters θ. The variational function is also parameterized as Tω acting as the discriminator. This gives rise to the VDM principle which defines the f -GAN objective min θ max ω Ex∼P [Tω(x)]− Ex∼Qθ [f∗(Tω(x))] (6) Nowozin et al. (2016) represent the variational function as Tω(x) = gf (Vω(x)) such that Vω(x) : X → R represents the unconstrained discriminator network while gf : R→ domf∗ is an activation function chosen in accordance with the f -divergence being optimized. Table 1 includes the “somewhat arbitrary” but effective choices for gf suggested by Nowozin et al. (2016) and we refer readers to their excellent work for more details and properties of f -divergences and f -GANs. Recently, Ke et al. (2019) have formalized the generalization from GAN to f -GAN for the traditional IL problem. They offer the f -Variational Imitation (f -VIM) framework for the specific case of estimating and then minimizing the divergence between state-action distributions induced by expert and imitation policies: min θ max ω E(s,a)∼ρπ∗ [gf (Vω(s, a))]− E(s,a)∼ρπθ [f∗(gf (Vω(s, a)))] (7) where Vω : S × A → R denotes the discriminator network that will supply per-timestep rewards during the outer policy optimization which itself is carried out over policy parameters θ via policygradient reinforcement learning (Sutton et al., 2000). 
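As a concrete reference for the f-VIM objective in Equation 7, the sketch below evaluates the variational objective and the induced per-timestep reward for two example divergences. The activation g_f and conjugate f* entries are the standard f-GAN choices of Nowozin et al. (2016) reproduced from memory, so they should be checked against Table 1 before being relied upon; the function names are illustrative, not taken from the paper.

```python
import numpy as np

# f-GAN output activations g_f and convex conjugates f* for two example
# divergences, following the choices suggested by Nowozin et al. (2016).
F_GAN = {
    "kl": {
        "g": lambda v: v,                       # g_f(v) = v
        "f_star": lambda t: np.exp(t - 1.0),    # f*(t) = exp(t - 1)
    },
    "total_variation": {
        "g": lambda v: 0.5 * np.tanh(v),        # g_f(v) = tanh(v) / 2
        "f_star": lambda t: t,                  # f*(t) = t on [-1/2, 1/2]
    },
}

def f_vim_discriminator_objective(v_expert, v_policy, divergence="kl"):
    """Variational objective of Eq. (7), maximized over the discriminator:
    E_expert[g_f(V)] - E_policy[f*(g_f(V))].  `v_expert` / `v_policy` are raw
    discriminator outputs V_w on expert and imitation-policy samples."""
    g, f_star = F_GAN[divergence]["g"], F_GAN[divergence]["f_star"]
    return g(v_expert).mean() - f_star(g(v_policy)).mean()

def f_vim_reward(v_policy, divergence="kl"):
    """Per-timestep reward f*(g_f(V_w)) handed to the policy-gradient learner
    in the original f-VIM formulation."""
    g, f_star = F_GAN[divergence]["g"], F_GAN[divergence]["f_star"]
    return f_star(g(v_policy))
```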
In particular, the per-timestep rewards provided to the agent are given according to r(s, a, s′) = f∗(gf (Vω(s, a))). While Ke et al. (2019) do an excellent job of motivating the use of f -divergences for IL (by formalizing the relationship between divergences over trajectory distributions vs. state-action distributions) and connecting f -VIM to existing imitation-learning algorithms, their experiments focus on smaller problems to study the mode-seeking/mode-covering aspects of different f -divergences and the implications of such behavior depending on the multimodality of the expert trajectory distribution. Meanwhile, in the course of attempting to apply f -VIM to large-scale imitation problems, we empirically observe numerical instabilities stemming from function approximation, demanding a reformulation of the framework. 4.2 REPARAMETERIZING f -VIM In their presentation of the f -VIM framework, Ke et al. (2019) retain the choices for activation function gf introduced by Nowozin et al. (2016) for f -GANs. Recall that these choices of gf play a critical role in defining the reward function optimized by the imitation policy on each iteration of f -VIM, r(s, a, s′) = f∗(gf (Vω(s, a))). It is well known in the reinforcement-learning literature that the nature of the rewards provided to an agent have strong implications on learning success and efficiency (Ng et al., 1999; Singh et al., 2010). While the activation choices made for f -GANs are suitable given that both optimization problems are carried out by backpropagation, we assert that special care must be taken when specifying these activations (and implicitly, the reward function) for imitation-learning algorithms. A combination of convex conjugate and activation function could induce a reward function that engenders numerical instability or a simply challenging reward landscape, depending on the underlying policy-gradient algorithm utilized (Henderson et al., 2018). Empirically, we found that the particular activation choices for the KL and reverse KL divergences shown in Table 1 (linear and exponential, respectively) produced imitation-learning algorithms that, in all of our evaluation environments, failed to complete execution due to numerical instabilities caused by exploding policy gradients. In the case of the Total Variation distance, the corresponding f -GAN activation for the variational function is a tanh, requiring a learning agent to traverse a reward interval of [−1, 1] by crossing an intermediate region with reward signals centered around 0. To refactor the f -VIM framework so that it more clearly exposes the choice of reward function to practictioners and shifts the issues of reward scale away from the imitation policy, we propose uniformly applying an activation function gf (v) = f∗−1(r(v)) where f∗−1(t) denotes the inverse of the convex conjugate (see Table 1). Here r is effectively a free parameter that can be set according to one of the many heuristics used throughout the field of deep reinforcement learning for maintaining a reasonable reward scale (Mnih et al., 2015; 2016; Henderson et al., 2018) so long as it obeys the domain of the inverse conjugate domf∗−1 . In selecting gf accordingly, the reparameterized saddlepoint optimization for f -VIM becomes min θ max ω E(s,a)∼ρπ∗ [f∗−1(r(Vω(s, a)))]− E(s,a)∼ρπθ [r(Vω(s, a))] (8) where the per-timestep rewards used during policy optimization are given by r(s, a, s′) = r(Vω(s, a)). 
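The following sketch instantiates Equation 8 with the sigmoid choice of r discussed above. The inverse conjugates are hand-derived from the standard f-GAN conjugates and should be double-checked against Table 1; only the KL and Total Variation entries are shown, and all function names are illustrative.

```python
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

# Inverse convex conjugates f*^{-1}, derived by hand from the standard
# f-GAN conjugates; treat these as assumptions to verify against Table 1.
F_STAR_INV = {
    "kl": lambda u: 1.0 + np.log(u),   # inverse of f*(t) = exp(t - 1)
    "total_variation": lambda u: u,    # inverse of f*(t) = t
}

def reward(v, divergence="kl"):
    """Bounded reward r(V_w) used in the reparameterized framework: sigma(v),
    halved for Total Variation so the rewards stay within dom f*^{-1}."""
    r = sigmoid(v)
    return 0.5 * r if divergence == "total_variation" else r

def reparameterized_objective(v_expert, v_policy, divergence="kl"):
    """Discriminator objective of Eq. (8), maximized over omega:
    E_expert[f*^{-1}(r(V))] - E_policy[r(V)].  The imitation policy is then
    trained by policy-gradient RL on the per-timestep reward r(V_w)."""
    f_star_inv = F_STAR_INV[divergence]
    return (f_star_inv(reward(v_expert, divergence)).mean()
            - reward(v_policy, divergence).mean())
```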
In applying this choice, we shift the undesirable scale of the latter term in VDM towards the discriminator, expecting it to be indifferent since training is done by backpropagation. As one potential instantiation, we consider r(u) = σ(u) where σ(·) denotes the sigmoid function leading to bounded rewards in the interval [0, 1] that conveniently adhere to domf∗−1 for almost all of the f -divergences examined in this work1. In Section 5, we evaluate imitation-learning algorithms with this choice against those using f -VIM with the original f -GAN activations; we find that, without regard for the scale of rewards and the underlying reinforcement-learning problem being solved, the f -GAN activation choices either produce degenerate solutions or completely fail to produce an imitation policy altogether. 4.3 f -DIVERGENCES AND IMITATION FROM OBSERVATION Applying the variational lower bound of Nguyen et al. (2010) and the corresponding f -GAN extension, we can now present our Variational Imitation from Observation (f -VIMO) extension for a general family of ILfO algorithms that leverage the VDM principle in the underlying saddle-point optimization. Since optimization of the generator will continue to be carried out by policy-gradient reinforcement learning, we adhere to our reparameterization of the f -VIM framework and present the f -VIMO objective as: min θ max ω E(s,s′)∼ρπ∗ [f∗−1(r(Vω(s, s′)))]− E(s,s′)∼ρπθ [r(Vω(s, s′))] (9) with the per-timestep rewards given according to r(s, a, s′) = r(Vω(s, s′)). We present the full approach as Algorithm 1. Just as in Section 4.2, we again call attention to Line 5 where the discriminator outputs (acting as individual reward signals) scale the policy gradient, unlike the more conventional discriminator optimization of Line 4 by backpropagation; this key difference is the primary motivator for our specific reparameterization of the f -VIM framework. Just as in the previous section, we take r(u) = σ(u) as a particularly convenient choice of activation given its agreement to the inverse conjugate domains domf∗−1 for many choices of f -divergence and we employ this instantiation throughout all of our experiments. We leave the examination of alternative choices for r to future work. Algorithm 1 f -VIMO 1: INPUT: Dataset of expert trajectories D, initial policy and discriminator parameters θ0 and ω0, number of iterations N , discount factor γ 2: for i = 0, 1, . . . , N do 3: Sample trajectories from current imitation policy τi ∼ πθi 4: ωi+1 = ωi +∇ω ( E(s,s′)∼D[f∗−1(r(Vω(s, s′)))]− E(s,s′)∼τi [r(Vω(s, s′))] ) 5: Update θi to θi+1 via a policy-gradient update with rewards given by r(Vω(s, s′)): θi+1 = θi + E(s,a,s′)∼τi [ ∇θ log(πθi(a|s))Eτi [ ∞∑ t=1 γt−1r(Vω(st−1, st))|s0 = s, s1 = s′] ] 6: end for 4.4 DISCRIMINATOR REGULARIZATION The refactored version of f -VIM presented in Section 4.2 is fundamentally addressing instability issues that may occur on the generator side of adversarial training; in our experiments, we also examine the utility of regularizing the discriminator side of the optimization for improved stability. Following from a line of work examining the underlying mathematical properties of GAN optimization (Roth et al., 2017; 2018; Mescheder et al., 2018), we opt for the simple gradient-based regularization of Mescheder et al. 
(2018) which (for f -VIMO) augments the discriminator loss with the following regularization term: R(ω) = ψ 2 E(s,s′)∼ρπ∗ [||∇ωf∗−1(r(Vω(s, s′)))||2] (10) where ψ is a hyperparameter controlling the strength of the regularization. The form of this specific penalty follows from the analysis of Roth et al. (2017); intuitively, its purpose is to disincentivize the 1For Total Variation distance, we use r(u) = 1 2 σ(u) to remain within domf∗−1 . discriminator from producing a non-zero gradient that shifts away from the Nash equilibrium of the minimax optimization when presented with a generator that perfectly matches the true data distribution. While originally developed for traditional GANs and shown to empirically exhibit stronger convergence properties over Wasserstein GANs (Gulrajani et al., 2017), this effect is still desirable for the adversarial IL setting where the reward function (discriminator) used for optimizing the imitation policy should stop changing once the expert state-transition distribution has been matched. In practice, we compare f -VIM and f -VIMO both with and without the use of this regularization term and find thatR(ω) can improve the stability and convergence of f -VIMO across almost all domains. 5 EXPERIMENTS We examine four instantiations of the f -VIM and f -VIMO frameworks (as presented in Sections 4.2 and 4.3) corresponding to imitation algorithms with the following choices of f -divergence: GAN, Kullback-Leibler, reverse KL, and Total Variation. We conduct our evaluation across four MuJoCo environments (Todorov et al., 2012) of varying difficulty: Ant, Hopper, HalfCheetah, and Walker (see the Appendix for more details on individual environments). The core questions we seek to answer through our empirical results are as follows: 1. What are the implications of the choice of activation for the variational function in f -VIM on imitation policy performance? 2. Do f -divergences act as a meaningful axis of variation for IL and ILfO algorithms? 3. What is the impact of discriminator regularization on the stability and convergence proper- ties of f -VIM/f -VIMO? 4. How does the impact of different f -divergences vary with the amount of expert demonstra- tion data provided? To answer the first three questions above, we report the average total reward achieved by the imitation policy throughout the course of learning with rewards as defined by the corresponding OpenAI Gym environment (Brockman et al., 2016). Shading in all plots denote 95% confidence intervals computed over 10 random trials with 10 random seeds. Expert demonstration datasets of 50 trajectories were collected from agents trained via Proximal Policy Optimization (PPO) (Schulman et al., 2017); 20 expert demonstrations were randomly subsampled at the start of learning and held fixed for the duration of the algorithm. We also utilize PPO as the underlying reinforcement-learning algorithm for training the imitation policy with a clipping parameter of 0.2, advantage normalization, entropy regularization coefficient 1e−3, and the Adam optimizer (Kingma & Ba, 2014). Just as in Ho & Ermon (2016) we use a discount factor of γ = 0.995 and apply Generalized Advantage Estimation (Schulman et al., 2015) with parameter λ = 0.97. We run both f -VIM and f -VIMO for a total of 500 iterations, collecting 50000 environment samples per iteration. The policy and discriminator architectures are identically two separate multi-layer perceptrons each with two hidden layers of 100 units separated by tanh nonlinearities. 
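For reference, a minimal PyTorch sketch of the network architectures just described (two hidden layers of 100 tanh units for both the policy and the discriminator) is given below. The Hopper dimensions and the helper name `mlp` are illustrative, and a complete Gaussian policy would typically carry an additional state-independent log-standard-deviation parameter not shown here.

```python
import torch
import torch.nn as nn

def mlp(in_dim, out_dim, hidden=100):
    """Two hidden layers of 100 tanh units, as described for both the policy
    and the discriminator networks in the experimental setup."""
    return nn.Sequential(
        nn.Linear(in_dim, hidden), nn.Tanh(),
        nn.Linear(hidden, hidden), nn.Tanh(),
        nn.Linear(hidden, out_dim),
    )

obs_dim, act_dim = 11, 3                          # e.g. Hopper-v2
policy_mean = mlp(obs_dim, act_dim)               # mean of a Gaussian policy
discriminator = mlp(obs_dim + obs_dim, 1)         # V_w(s, s') for the ILfO variant
v = discriminator(torch.randn(32, 2 * obs_dim))   # batch of raw discriminator outputs
```

The regularization term R(omega) of Equation 10 can be added to the discriminator loss by differentiating the expert term with torch.autograd.grad(..., create_graph=True) and penalizing the squared norm of the resulting gradients, scaled by psi/2.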
A grid search was used for determining the initial learning rate, number of PPO epochs, and number of epochs used for discriminator training (please see the Appendix for more details) and we report results for the best hyperparameter settings. To address our final question, we take the best hyperparameter settings recovered when given 20 expert demonstrations and re-run all algorithms with {1, 5, 10, 15} expert demonstrations that are randomly sampled at the start of each random trial and held fixed for the duration of the algorithm. We then record the average return of the final imitation policy for each level of expert demonstration. 6 RESULTS & DISCUSSION To highlight the importance of carefully selecting the variational function activation gf and validate our modifications to the f -VIM framework, we present results in Figure 2 comparing to the original f -VIM framework of Ke et al. (2019) and its natural ILfO counterpart. Activation functions for the original methods are chosen according to the choices outlined in Ke et al. (2019); Nowozin et al. (2016). In our experiments using the KL and reverse KL divergences, we found that none of the trials reached completion due to numerical instabilities caused by exploding policy gradients. Consequently, we only present results for the Total Variation distance. We observe that under the original f -GAN activation selection, we fail to produce meaningful imitation policies with learning stagnating after 100 iterations or less. As previously mentioned, we suspect that this stems from the use of tanh with TV leading to a dissipating reward signal. We present results in Figure 3 to assess the utility of varying the choice of divergence in f -VIM and f -VIMO across each domain. In considering the impact of f -divergence choice, we find that most of the domains must be examined in isolation to observe a particular subset of f -divergences that stand out. In the IL setting, we find that varying the choice of f -divergence can yield different learning curves but, ultimately, produce near-optimal (if not optimal) imitation policies across all domains. In contrast, we find meaningful choices of f -divergence in the ILfO setting including {KL, TV} for Hopper, RKL for HalfCheetah, and {GAN, TV} for Walker. We note that the use of discriminator regularization per Mescheder et al. (2018) is crucial to achieving these performance gains, whereas the regularization generally fails to help performance in the IL setting. This finding is supportive of the logical intuition that ILfO poses a fundamentally more-challenging problem than standard IL. As a negative result, we find that the Ant domain (the most difficult environment with S ⊂ R111 and A ⊂ R8) still poses a challenge for ILfO algorithms across the board. More specifically, we observe that discriminator regularization hurts learning in both the IL and ILfO settings. While the choice of RKL does manage to produce a marginal improvement over GAIFO, the gap between existing stateof-the-art and expert performance remains unchanged. It is an open challenge for future work to either identify the techniques needed to achieve optimal imitation policies from observations only or characterize a fundamental performance gap when faced with sufficiently large observation spaces. In Figure 4, we vary the total number of expert demonstrations available during learning and observe that certain choices of f -divergences can be more robust in the face of less expert data, both in the IL and ILfO settings. 
We find that KL-VIM and TV-VIM are slightly more performant than GAIL when only provided with a single expert demonstration. Notably, in each domain we see that certain choices of divergence for f -VIMO do a better job of residing close to their f -VIM counterparts suggesting that future improvements may come from examining f -divergences in the small-data regime. This idea is further exemplified when accounting for results collected while using discriminator regularization (Mescheder et al., 2018). We refer readers to the Appendix for the associated learning curves. Our work leaves many open directions for future work to close the performance gap between student and expert policies in the ILfO setting. While we found the sigmoid function to be a suitable instantiation of our framework, exploring alternative choices of variational function activations could prove useful in synthesizing performant ILfO algorithms. Alternative choices of f -divergences could lead to more substantial improvements than the choices we examine in this paper. Moreover, while this work has a direct focus on f -divergences, Integral Probability Metrics (IPMs) (Müller, 1997; Gretton et al., 2012) represent a distinct but well-established family of divergences between probability distributions. The success of Total Variation distance in our experiments, which doubles as both a f - divergence and IPM (Sriperumbudur et al., 2009), is suggestive of future work building IPM-based ILfO algorithms (Sun et al., 2019). 7 CONCLUSION In this work, we present a general framework for imitation learning and imitation learning from observations under arbitrary choices of f -divergence. We empirically validate a single instantiation of our framework across multiple f -divergences, demonstrating that we overcome the shortcomings of prior work and offer a wide class of IL and ILfO algorithms capable of scaling to larger problems. A RELATED WORK A.1 LEARNING FROM DEMONSTRATION Our work broadly falls within the category of Learning from Demonstration (LfD) (Schaal, 1997; Atkeson & Schaal, 1997; Argall et al., 2009), where an agent must leverage demonstration data (typically provided as trajectories, each consisting of expert state-action pairs) to produce an imitation policy that correctly captures the demonstrated behavior. Within the context of LfD, a finer distinction can be made between behavioral cloning (BC) (Bain & Sommut, 1999; Pomerleau, 1989) and inverse reinforcement learning (IRL) (Ng et al.; Abbeel & Ng, 2004; Syed & Schapire, 2007; Ziebart et al., 2008; Finn et al., 2016; Ho & Ermon, 2016) approaches; BC approaches view the demonstration data as a standard dataset of input-output pairs and apply traditional supervisedlearning techniques to recover an imitation policy. Alternatively, IRL-based methods synthesize an estimate of the reward function used to train the expert policy before subsequently applying a reinforcement-learning algorithm (Sutton & Barto, 1998; Abbeel & Ng, 2004) to recover the corresponding imitation policy. Although not a focus of this work, we also acknowledge the myriad of approaches that operate at the intersection of IL and reinforcement learning or augment reinforcement learning with IL (Rajeswaran et al., 2017; Hester et al., 2018; Salimans & Chen, 2018; Sun et al., 2018; Borsa et al., 2019; Tirumala et al., 2019). 
While BC approaches have been successful in some settings (Niekum et al., 2015; Giusti et al., 2016; Bojarski et al., 2016), they are also susceptible to failures stemming from covariate shift where minute errors in the actions of the imitation policy compound and force the agent into regions of the state space not captured in the original demonstration data. While some preventative measures for covariate shift do exist (Laskey et al., 2017b), a more principled solution can be found in methods like DAgger (Ross et al., 2011) and its descendants (Ross & Bagnell, 2014; Sun et al., 2017; Le et al., 2018) that remedy covariate shift by querying an expert to provide on-policy action labels. It is worth noting, however, that these approaches are only feasible in settings that admit such online interaction with an expert (Laskey et al., 2016) and, even then, failure modes leading to poor imitation policies do exist (Laskey et al., 2017a). The algorithms presented in this work fall in with IRL-based approaches to IL. Early successes in this regime tend to rely on hand-engineered feature representations for success (Abbeel & Ng, 2004; Ziebart et al., 2008; Levine et al., 2011). Only in recent years, with the aid of deep neural networks, has there been a surge in the number of approaches that are capable of scaling to the raw, highdimensional observations found in real-world control problems (Finn et al., 2016; Ho & Ermon, 2016; Duan et al., 2017; Li et al., 2017; Fu et al., 2017; Kim & Park, 2018). Our work focuses attention exclusively on adversarial methods for their widespread effectiveness across a range of imitation tasks without requiring interactive experts (Ho & Ermon, 2016; Li et al., 2017; Fu et al., 2017; Kostrikov et al., 2018); at the heart of these methods is the Generative Adversarial Imitation Learning (GAIL) (Ho & Ermon, 2016) approach which produces high-fidelity imitation policies and achieves state-of-the-art results across numerous continuous-control benchmarks by leveraging the expressive power of Generative Adversarial Networks (GANs) (Goodfellow et al., 2014) for modeling complex distributions over a high-dimensional support. From an IRL perspective, GAIL can be viewed as iteratively optimizing a parameterized reward function (discriminator) that, when used to optimize an imitation policy (generator) via policy-gradient reinforcement learning (Sutton et al., 2000), allows the agent to shift its own behavior closer to that of the expert. From the perspective of GANs, this is achieved by discriminating between the respective distributions over state-action pairs visited by the imitation and expert policies before training a generator to fool the discriminator and induce a state-action visitation distribution similar to that of the expert. While a large body of prior work exists for IL, numerous recent works have drawn attention to the more challenging problem of imitation learning from observation (Sermanet et al., 2017; Liu et al., 2018; Goo & Niekum, 2018; Kimura et al., 2018; Torabi et al., 2018a;b; Edwards et al., 2019; Sun et al., 2019). In an effort to more closely resemble observational learning in humans and leverage the wealth of publicly-available, observation-only data sources, the ILfO problem considers learning from expert demonstration data where no expert action labels are provided. 
Many early approaches to ILfO use expert observation sequences to learn a semantic embedding space so that distances between observation sequences of the imitation and expert policies can serve as a cost signal to be minimized via reinforcement learning (Gupta et al., 2017; Sermanet et al., 2017; Dwibedi et al., 2018; Liu et al., 2018). In contrast, Torabi et al. (2018a) introduce Behavioral Cloning from Observation (BCO) which leverages state-action trajectories collected under a random policy to train an inverse dynamics model for inferring the action responsible for a transition between two input states (assuming the two represent a state and next-state pair). With this inverse model in hand, the observation-only demonstration data can be converted into the more traditional dataset of stateaction pairs over which standard BC can be applied. Recognizing the previously discussed limitations of BC approaches, Torabi et al. (2018b) introduce the natural GAIL counterpart for ILfO, Generative Adversarial Imitation from Observation (GAIFO); GAIFO is identical to GAIL except the distributions under consideration in the adversarial game are over state transitions (state and next-state pairs), as opposed to state-action pairs requiring expert action labels. While Torabi et al. (2018b) offer empirical results for continuous-control tasks with low-dimensional features as well as raw image observations, GAIFO falls short of expert performance in both settings leaving an open challenge for scalable ILfO algorithms that achieve expert performance across a wide spectrum of tasks. A central question of this work is to explore how alternative formulations of the GAN objective that underlies GAIFO might yield superior ILfO algorithms. For a more in-depth survey of ILfO approaches, we refer readers to Torabi et al. (2019). A.2 GENERATIVE ADVERSARIAL NETWORKS With a focus on generative-adversarial methods for IL, this work leverages several related ideas in the GAN literature for offering alternative formulations as well as improving understanding of their underlying mathematical foundations (Li et al., 2015; Dziugaite et al., 2015; Zhao et al., 2016; Nowozin et al., 2016; Roth et al., 2017; Arjovsky et al., 2017; Gulrajani et al., 2017; Roth et al., 2018; Mescheder et al., 2018). Critical to the ideas presented in many of these previous works is an understanding that discriminator networks are estimating a divergence between two probability distributions of interest, usually taken to be the real data distribution and the fake or synthetic distribution represented by the generator. Formal characterizations of this divergence, either by Integral Probability Metrics (IPMs) (Müller, 1997; Gretton et al., 2012) or f -divergences (Ali & Silvey, 1966; Csiszár et al., 2004; Liese & Vajda, 2006), yield different variations on the classic GAN formulation which is itself a slight variation on the Jensen-Shannon (JS) divergence (Li et al., 2015; Dziugaite et al., 2015; Zhao et al., 2016; Nowozin et al., 2016; Arjovsky et al., 2017; Gulrajani et al., 2017). Following from work by Nowozin et al. (2016) to generalize the GAN objective to arbitrary f -divergences, Ke et al. (2019) offer a generalization of GAIL to an arbitrary choice of f -divergence for quantifying the gap between the state-action visitation distributions of the imitation and expert policies; moreover, Ke et al. 
(2019) propose a unifying framework for IL, f -Variational IMitation (f -VIM), in which they highlight a correspondence between particular choices of f -divergences and existing IL algorithms (specifically BC⇐⇒ Kullback-Leibler (KL) divergence, DAgger⇐⇒ TotalVariation distance, and GAIL ⇐⇒ JS-divergence 2). While Ke et al. (2019) focus on providing empirical results in smaller toy problems to better understand the interplay between f -divergence 2The discriminator loss optimized in the original GAN formulation is 2 ·DJS − log(4) whereDJS denotes the Jensen-Shannon divergence (Goodfellow et al., 2014; Nowozin et al., 2016). choice and the multimodality of the expert trajectory distribution, we provide an empirical evaluation of their f -VIM framework across a range of continous control tasks in the Mujoco domain (Todorov et al., 2012). Empirically, we find that some of the design choices f -VIM inherits from the original f -GAN work (Nowozin et al., 2016) are problematic when coupled with adversarial IL and training of the generator by policy-gradient reinforcement learning, instead of via direct backpropagation as in traditional GANs. Consequently, we refactor their framework to expose this point and provide one practical instantiation that works well empirically. We then go on to extend the f -VIM framework to the IFO problem (f -VIMO) and evaluate the resulting algorithms empirically against the state-of-the-art, GAIFO. B EXPERIMENT DETAILS Here we provide details of the MuJoCo environments (Todorov et al., 2012) used in our experiments as well as the details of the hyperparameter search conducted for all algorithms (IL and ILfO) presented. B.1 MUJOCO ENVIRONMENTS All environments have continuous observation and action spaces of varying dimensionality (as shown below). All algorithms evaluated in each environment were trained for a total of 500 iterations, collecting 50, 000 environment transitions per iteration. Task Observation Space Action Space Ant-v2 R111 R8 Hopper-v2 R11 R3 HalfCheetah-v2 R17 R6 Walker2d-v2 R17 R6 B.2 HYPERPARAMETERS Below we outline the full set of hyperparameters examined for all experiments presented in this work. We conducted a full grid search over 10 random trials with 10 random seeds and report results for the best hyperparameter setting. Hyperparameter Values Discriminator learning rate {1e−4, 1e−3} PPO epochs {5, 10} Discriminator epochs {1, 5, 10} Preliminary experiments were conducted to test smaller values for PPO epochs and policy learning rates before settling on the grid shown above. C ADDITIONAL RESULTS C.1 UNREGULARIZED f -VIM/VIMO C.2 SAMPLE COMPLEXITY LEARNING CURVES C.3 f -DIVERGENCE VARIATIONAL BOUND SWAP Throughout this paper, we advocate for the use of the following variational lower bound to the f -divergence for both f -VIM and f -VIMO: Df (ρ π∗ ||ρπθ ) ≥ min θ max ω E(s,s′)∼ρπ∗ [f∗−1(r(Vω(s, s′)))]− E(s,s′)∼ρπθ [r(Vω(s, s′))] (11) In particular, we value the above form as it clearly exposes the choice of reward function for the imitation policy as a free parameter that, in practice, has strong implications for the stability and convergence of adversarial IL/ILfO algorithms. Alternatively, one may consider appealing to the original lower bound of Nguyen et al. 
(2010), used in f-GANs (Nowozin et al., 2016) unmodified, but with the positions of the two distributions swapped: D_f(ρ^{π_θ} || ρ^{π*}) ≥ min_θ max_ω E_{(s,s')∼ρ^{π_θ}}[g_f(V_ω(s, s'))] − E_{(s,s')∼ρ^{π*}}[f*(g_f(V_ω(s, s')))] (12) Consequently, the term in this lower bound pertaining to the imitation policy is now similar to that of the bound in Equation 11; namely, an almost arbitrary activation function, g_f, is applied to the output of the variational function (discriminator) V_ω. The difference is that the codomain of g_f must obey the domain of the convex conjugate, f*, while the codomain of r must respect the domain of the inverse convex conjugate, f*^{-1}. We evaluate these two choices empirically below for the specific choice of the KL-divergence in the Ant and Hopper domains (the two most difficult domains of our evaluation). We find that the original, unswapped bound in Equation 11 used throughout this paper outperforms the variant with the distributions swapped, for both the IL and ILfO settings. Crucially, we find that KL-VIM in the Ant domain no longer achieves expert performance when optimizing the swapped bound.
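To make the comparison above concrete, the sketch below evaluates both bounds for the KL divergence on raw discriminator outputs. The KL-specific forms of g_f, f*, and f*^{-1} are taken from the standard f-GAN table and a hand inversion, so they are assumptions to verify against Table 1; the function names are illustrative.

```python
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

# KL divergence: f*(t) = exp(t - 1), f*^{-1}(u) = 1 + log(u), g_f(v) = v.
f_star = lambda t: np.exp(t - 1.0)
f_star_inv = lambda u: 1.0 + np.log(u)
g_f = lambda v: v

def unswapped_bound(v_expert, v_policy):
    """Eq. (11): bound on D_f(rho_expert || rho_policy); the policy term
    uses the bounded reward r(V) = sigmoid(V)."""
    return f_star_inv(sigmoid(v_expert)).mean() - sigmoid(v_policy).mean()

def swapped_bound(v_expert, v_policy):
    """Eq. (12): distributions swapped, D_f(rho_policy || rho_expert); the
    policy term carries g_f(V) while the conjugate lands on the expert term."""
    return g_f(v_policy).mean() - f_star(g_f(v_expert)).mean()

v_exp, v_pol = np.random.randn(1000), np.random.randn(1000)
print(unswapped_bound(v_exp, v_pol), swapped_bound(v_exp, v_pol))
```

The structural difference is visible directly in the code: under the swapped bound the policy's cost is g_f(V) itself, which for KL is unbounded, whereas the unswapped form keeps the policy's reward within (0, 1).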
1. What is the main contribution of the paper on IL? 2. What are the strengths and weaknesses of the proposed approach, particularly regarding its independence from the convex conjugate of the function f? 3. How does the reviewer assess the significance of the proposed method compared to prior works, such as forward adversarial IL? 4. What are the concerns regarding the motivation and support of the claim, specifically regarding the connection between f-divergence and ILfO? 5. Are there any issues with the experimental design and results, including choices of divergence, incorporation of priors about survival bonuses, and unfair comparisons?
Review
Review * Summary: The paper proposes an IL method based on the f-divergence. Specifically, the paper extends f-VIM (Ke et al., 2019), which uses the f-divergence for IL, by using a sigmoid function as the activation on the discriminator output. This choice of activation function yields an alternative objective function in which the reward function for an RL agent does not directly depend on the convex conjugate of the function f; the paper claims that this independence improves the stability of policy learning. The proposed method is named f-VIM-sigmoid. The paper also extends f-VIM-sigmoid to the setting of IL from observation and proposes f-VIMO-sigmoid. Experiments on MuJoCo locomotion tasks show that f-VIM-sigmoid and f-VIMO-sigmoid perform better than existing methods. * Rating: The paper proposes a simple but interesting approach to improve the stability of adversarial IL. However, the paper has issues regarding baseline methods, motivation, support for its claims, and experiments (see below). These issues should be addressed. At present, I vote for rejection. * Major comments: - Discuss and compare against a simple baseline method based on swapping distributions: To make the reward function independent of the convex conjugate f*, it is possible to simply swap the distributions P and Q in the definition of the f-divergence. More specifically, instead of minimizing D_f(P||Q), we can minimize D_f(Q||P), where P is the data distribution and Q is the generator. In this case, pi* and pi_theta in Eq. (7) swap, and the RL agent minimizes the cost function g_f(V_w(s,a)). This cost function does not directly depend on f*, similarly to the reward function r(V_w(s,a)) in Eq. (8). This swapping is simpler and more flexible than re-parameterizing, while achieving the same goal as f-VIM-sigmoid. This swapping method should be discussed and compared against the proposed methods. - Need stronger baseline methods for ILfO: The paper should evaluate f-VIMO-sigmoid against stronger baselines, e.g., forward adversarial IL (Sun et al., 2019), which outperforms GAIL-based methods in the ILfO setting. [1] Wen Sun, Anirudh Vemula, Byron Boots, and J. Andrew Bagnell. Provably efficient imitation learning from observation alone. ICML, 2019. - Using the f-divergence for ILfO is not well motivated: The paper does not provide good motivation for using the f-divergence in ILfO. This makes the paper quite difficult to follow, since no connection between the f-divergence and ILfO is established. - The experiments focus on evaluating existing methods rather than the proposed methods: Specifically, the proposed methods are evaluated with only one choice of divergence (TV) in Figure 2. Meanwhile, most of Section 6 and the results (Figures 3 and 4, and additional results in the appendix) focus on evaluating the existing methods (f-VIM and f-VIMO) with different choices of divergence. - The experiments in Figure 2 do not support the claim regarding stability: The paper claims to improve the stability of IL by using the proposed re-parameterization. However, the experimental results do not support this claim, and the questions asked in Section 5 are not related to this claim. Instead, it seems that re-parameterization helps avoid local optima (possibly due to a biased reward function, see below), while stability is improved by regularizing the discriminator. I could not see how the re-parameterization improves policy stability as claimed.
- The experiments in Figure 2 seem unfair, since TV-VIM-sigmoid incorporates priors about survival bonuses: Specifically, TV-VIM-sigmoid uses a sigmoid, which yields strictly positive rewards, while TV-VIM uses tanh, which yields both positive and negative rewards. As discussed by Kostrikov et al. (2019), using strictly positive rewards incorporates a strong prior about survival bonuses, which exist in the locomotion tasks used in the experiments. Therefore, TV-VIM-sigmoid benefits from this strong prior while TV-VIM does not. To make the comparison fairer, I suggest that the authors evaluate TV-VIM with a sigmoid reward output, or include environments that do not have survival bonuses. [2] Ilya Kostrikov, Kumar Krishna Agrawal, Debidatta Dwibedi, Sergey Levine, and Jonathan Tompson. Discriminator-actor-critic: Addressing sample inefficiency and reward bias in adversarial imitation learning. ICLR, 2019.
* Minor comments:
- The abstract is long and could be shortened.
- Figures are too small and difficult to see, especially the legends.
- Table 1 should describe the form of f in addition to its conjugate.
- The title of Algorithm 1 should be f-VIMO-sigmoid instead of f-VIMO.
** Update after response. I read the response. I thank the authors for clarifying the claims as well as for the new experiments with the swap formulation. However, improving the clarity of the claims would constitute a major revision. I therefore maintain my vote for rejection. Regarding reward bias: as the authors acknowledge, the improvement achieved by using reparameterization+sigmoid can be explained by two equally plausible reasons: 1) reparameterization+sigmoid improves stability (as claimed), and 2) the sigmoid gives biased rewards. The issue is that, given the current experiments in the paper, we do not know which is the actual reason. As I commented, evaluating TV-VIM with a sigmoid but without reparameterization would help address this issue.
ICLR
Title Reparameterized Variational Divergence Minimization for Stable Imitation Abstract State-of-the-art results in imitation learning are currently held by adversarial methods that iteratively estimate the divergence between student and expert policies and then minimize this divergence to bring the imitation policy closer to expert behavior. Analogous techniques for imitation learning from observations alone (without expert action labels), however, have not enjoyed the same ubiquitous successes. Recent work in adversarial methods for generative models has shown that the measure used to judge the discrepancy between real and synthetic samples is an algorithmic design choice, and that different choices can result in significant differences in model performance. Choices including Wasserstein distance and various f -divergences have already been explored in the adversarial networks literature, while more recently the latter class has been investigated for imitation learning (Ke et al., 2019). Unfortunately, we find that in practice this existing imitation-learning framework for using f -divergences suffers from numerical instabilities stemming from the combination of function approximation and policygradient reinforcement learning. In this work, we alleviate these challenges and offer a reparameterization of adversarial imitation learning as f -divergence minimization before further extending the framework to handle the problem of imitation from observations only. Empirically, we demonstrate that our design choices for coupling imitation learning and f -divergences are critical to recovering successful imitation policies. Moreover, we find that with the appropriate choice of f divergence, we can obtain imitation-from-observation algorithms that outperform baseline approaches and more closely match expert performance in continouscontrol tasks with low-dimensional observation spaces. With high-dimensional observations, we still observe a significant gap with and without action labels, offering an interesting avenue for future work. 1 INTRODUCTION Imitation Learning (IL) (Osa et al., 2018) refers to a paradigm of reinforcement learning in which the learning agent has access to an optimal, reward-maximizing expert for the underlying environment. In most work, this access is provided through a dataset of trajectories where each observed state is annotated with the action prescribed by the expert policy. This is often an extremely powerful learning paradigm in contrast to standard reinforcement learning, since not all tasks of interest admit easily-specified reward functions. Additionally, not all environments are amenable to the prolonged and potentially unsafe exploration needed for reward-maximizing agents to arrive at satisfactory policies (Achiam et al., 2017; Chow et al., 2019). While the traditional formulation of the IL problem assumes access to optimal expert action labels, the provision of such information can often be laborious (in the case of a real, human expert) or incur significant financial cost (such as using elaborate instrumentation to record expert actions). Additionally, this restrictive assumption removes a vast number of rich, observation-only data sources from consideration (Zhou et al., 2018). 
To bypass these challenges, recent work (Liu et al., 2018; Torabi et al., 2018a;b; Edwards et al., 2019; Sun et al., 2019) has explored what is perhaps a more natural problem formulation in which an agent must recover an imitation policy from a dataset containing only expert observation sequences. While this Imitation Learning from Observations (ILfO) setting carries tremendous potential, such as enabling an agent to learn complex tasks from watching freely available videos on the Internet, it also is fraught with significant additional challenges. In this paper, we show how to incorporate recent advances in generative-adversarial training of deep neural networks to tackle imitation-learning problems and advance the state-of-the-art in ILfO. With these considerations in mind, the overarching goal of this work is to enable sample-efficient imitation from expert demonstrations, both with and without the provision of expert action labels. The rich literature on Generative Adversarial Networks (Goodfellow et al., 2014) has expanded in recent years to include alternative formulations of the underlying objective that yield qualitatively different solutions to the saddle-point optimization problem (Li et al., 2015; Dziugaite et al., 2015; Zhao et al., 2016; Nowozin et al., 2016; Arjovsky et al., 2017; Gulrajani et al., 2017). Of notable interest are the findings of Nowozin et al. (2016) who present Variational Divergence Minimization (VDM), a generalization of the generative-adversarial approach to arbitrary choices of distance measures between probability distributions drawn from the class of f -divergences (Ali & Silvey, 1966; Csiszár et al., 2004). Applying VDM with varying choices of f - divergence, Nowozin et al. (2016) encounter learned synthetic distribu- tions that can exhibit differences from one another while producing equally realistic samples. Translating this idea for imitation is complicated by the fact that the optimization of the generator occurs via policy-gradient reinforcement learning (Sutton et al., 2000). Existing work in combining adversarial IL and f -divergences (Ke et al., 2019), despite being well-motivated, fails to account for this difference; the end results (shown partially in Figure 1, where TV-VIM is the method of Ke et al. (2019), and discussed further in later sections) are imitation-learning algorithms that scale poorly to environments with higher-dimensional observations. In this work, we assess the effect of the VDM principle and consideration of alternative f - divergences in the contexts of IL and ILfO. We begin by reparameterizing the framework of Ke et al. (2019) for the standard IL problem. Our version transparently exposes the choices practitioners must make when designing adversarial imitation algorithms for arbitrary choices of f -divergence. We then offer a single instantiation of our framework that, in practice, allows stable training of good policies across multiple choices of f -divergence. An example is illustrated in Figure 1 where our methods (TV-VIM-sigmoid and TV-VIMO-sigmoid) result in significantly superior policies. We go on to extend our framework to encapsulate the ILfO setting and examine the efficacy of the resulting new algorithms across a range of continuous-control tasks in the MuJoCo (Todorov et al., 2012) domain. Our empirical results validate our framework as a viable unification of adversarial imitation methods under the VDM principle. 
With the assistance of recent advances in stabilizing regularization for adversarial training (Mescheder et al., 2018), improvements in performance can be attained under an appropriate choice of f -divergence. However, there is still a significant performance gap between the recovered imitation policies and expert behavior for tasks with high dimensional observations, leaving open directions for future work in developing improved ILfO algorithms. 2 RELATED WORK The algorithms presented in this work fall in with inverse reinforcement learning (IRL) (Ng et al.; Abbeel & Ng, 2004; Syed & Schapire, 2007; Ziebart et al., 2008; Finn et al., 2016; Ho & Ermon, 2016) approaches to IL. Early successes in this regime tend to rely on hand-engineered feature rep- resentations for success (Abbeel & Ng, 2004; Ziebart et al., 2008; Levine et al., 2011). Only in recent years, with the aid of deep neural networks, has there been a surge in the number of approaches that are capable of scaling to the raw, high-dimensional observations found in real-world control problems (Finn et al., 2016; Ho & Ermon, 2016; Duan et al., 2017; Li et al., 2017; Fu et al., 2017; Kim & Park, 2018). Our work focuses attention exclusively on adversarial methods for their widespread effectiveness across a range of imitation tasks without requiring interactive experts (Ho & Ermon, 2016; Li et al., 2017; Fu et al., 2017; Kostrikov et al., 2018); at the heart of these methods is the Generative Adversarial Imitation Learning (GAIL) (Ho & Ermon, 2016) approach which produces high-fidelity imitation policies and achieves state-of-the-art results across numerous continuous-control benchmarks by leveraging the expressive power of Generative Adversarial Networks (GANs) (Goodfellow et al., 2014) for modeling complex distributions over a high-dimensional support. From an IRL perspective, GAIL can be viewed as iteratively optimizing a parameterized reward function (discriminator) that, when used to optimize an imitation policy (generator) via policy-gradient reinforcement learning (Sutton et al., 2000), allows the agent to shift its own behavior closer to that of the expert. From the perspective of GANs, this is achieved by discriminating between the respective distributions over state-action pairs visited by the imitation and expert policies before training a generator to fool the discriminator and induce a state-action visitation distribution similar to that of the expert. While a large body of prior work exists for IL, recent work has drawn attention to the more challenging problem of imitation learning from observation (Sermanet et al., 2017; Liu et al., 2018; Goo & Niekum, 2018; Kimura et al., 2018; Torabi et al., 2018a;b; Edwards et al., 2019; Sun et al., 2019). To more closely resemble observational learning in humans and leverage the wealth of publiclyavailable, observation-only data sources, the ILfO problem considers learning from expert demonstration data where no expert action labels are provided. Many early approaches to ILfO use expert observation sequences to learn a semantic embedding space so that distances between observation sequences of the imitation and expert policies can serve as a cost signal to be minimized via reinforcement learning (Gupta et al., 2017; Sermanet et al., 2017; Dwibedi et al., 2018; Liu et al., 2018). In contrast, Torabi et al. 
(2018a) introduce Behavioral Cloning from Observation (BCO) which leverages state-action trajectories collected under a random policy to train an inverse dynamics model for inferring the action responsible for a transition between two input states (assuming the two represent a state and next-state pair). With this inverse model in hand, the observation-only demonstration data can be converted into the more traditional dataset of state-action pairs over which standard BC can be applied. Recognizing the previously discussed limitations of BC approaches, Torabi et al. (2018b) introduce the natural GAIL counterpart for ILfO, Generative Adversarial Imitation from Observation (GAIFO); GAIFO is identical to GAIL except the distributions under consideration in the adversarial game are over state transitions (state and next-state pairs), as opposed to stateaction pairs requiring expert action labels. While Torabi et al. (2018b) offer empirical results for continuous-control tasks with low-dimensional features as well as raw image observations, GAIFO falls short of expert performance in both settings leaving an open challenge for scalable ILfO algorithms that achieve expert performance across a wide spectrum of tasks. A central question of this work is to explore how alternative formulations of the GAN objective that underlies GAIFO might yield superior ILfO algorithms. For a more in-depth survey of ILfO approaches, we refer readers to Torabi et al. (2019). We refer readers to the Appendix for a broader overview of prior work. 3 BACKGROUND We begin by formulating the problems of imitation learning and imitation learning from observation respectively before taking a closer look at f -divergences and connecting them to imitation learning. 3.1 IMITATION LEARNING & IMITATION FROM OBSERVATION We operate within the Markov Decision Process (MDP) formalism (Bellman, 1957; Puterman, 2014) defined as a five-tupleM = 〈S,A,R, T , γ〉 where S denotes a (potentially infinite) set of states, A denotes a (potentially infinite) set of actions, R : S × A × S → R is a reward function, T : S × A → ∆(S) is a transition function, and γ ∈ [0, 1) is a discount factor. At each timestep, the agent observes the current state of the world, st ∈ S, and randomly samples an action according to its stochastic policy π : S → ∆(A). The environment then transitions to a new state according to the transition function T and produces a reward signal according to the reward function R that is communicative of the agent’s progress through the overall task. Unlike, the traditional reinforcement learning paradigm, the decision-making problem presented in IL lacks a concrete reward function; in lieu of R, a learner is provided with a dataset of expert demonstrationsD = {τ1, τ2, . . . τN}where each τi = (si1, ai1, si2, ai2, . . .) represents the sequence of states and corresponding actions taken by an expert policy, π∗. Naturally, the goal of an IL algorithm is to synthesize a policy π using D, along with access to the MDPM, whose behavior matches that of π∗. While the previous section outlines several possible avenues for using D to arrive at a satisfactory imitation policy, our work focuses on adversarial methods that build around GAIL (Ho & Ermon, 2016). 
Following from the widespread success of GANs (Goodfellow et al., 2014), GAIL offers a highly-performant approach to IL wherein, at each iteration of the algorithm, transitions sampled from the current imitation policy are first used to update a discriminator, Dω(s, a), that acts a binary classifier to distinguish between state-action pairs sampled according to the distributions induced by the expert and student. Subsequently, treating the imitation policy as a generator, policy-gradient reinforcement learning is used to shift the current policy towards expert behavior, issuing higher rewards for those generated state-action pairs that are regarded as belonging to the expert according to Dω(s, a). More formally, this minimax optimization follows as min π max ω E(s,a)∼ρπ∗ [log(Dω(s, a))] + E(s,a)∼ρπ [log(1−Dω(s, a))] (1) where ρπ ∗ (s, a) and ρπ(s, a) denote the undiscounted stationary distributions over state-action pairs for the expert and imitation policies respectively. Here Dω(s, a) = σ(Vω(s, a)) where Vω(s, a) represents the unconstrained output of a discriminator neural network with parameters ω and σ(v) = (1 + e−x)−1 denotes the sigmoid activation function. Since the imitation policy only exerts control over the latter term in the above objective, the per-timestep reward function maximized by reinforcement learning is given as r(s, a, s′) = − log(1 − Dω(s, a)). In practice, an entropy regularization term is often added to the objective when optimizing the imitation policy so as to avoid premature convergence to a suboptimal solution (Mnih et al., 2016; Ho & Ermon, 2016; Neu et al., 2017). In order to accommodate various observation-only data sources (Zhou et al., 2018) and remove the burden of requiring expert action labels, the imitation from observation setting adjusts the expert demonstration dataset D such that each trajectory τi = (si1, si2, . . .) consists only of expert observation sequences. Retaining the goal of recovering an imitation policy that closely resembles expert behavior, Torabi et al. (2018b) introduce GAIFO as the natural extension of GAIL for matching the state transition distribution of the expert policy. Note that an objective for matching the stationary distribution over expert state transitions enables the provision of per-timestep feedback while simultaneously avoid the issues of temporal alignment that arise when trying to match trajectories directly. The resulting algorithm iteratively finds a solution to the following minimax optimization: min π max ω E(s,s′)∼ρπ∗ [log(Dω(s, s′))] + E(s,s′)∼ρπ [log(1−Dω(s, s′))] (2) where ρπ ∗ (s, s′) and ρπ(s, s′) now denote the analogous stationary distributions over successive state pairs while Dω(s, s′) = σ(Vω(s, s′)) represents binary classifier over state pairs. Similar to GAIL, the imitation policy is optimized via policy-gradient reinforcement learning with per-timestep rewards computed according to r(s, a, s′) = − log(1−Dω(s, s′)) and using entropy regularization as needed. 4 APPROACH In this section, we begin with an overview of f -divergences, their connection to GANs, and their impact on IL through the f -VIM framework (Ke et al., 2019) (Section 4.1). We then present an alternative view of the framework that transparently exposes the fundamental choice practictioners must make in order to circumvent practical issues that arise when applying f -VIM to high-dimensional tasks (Section 4.2). 
We conclude by presenting our approach for ILfO as f-divergence minimization (Section 4.3), followed by a brief discussion of a regularization technique used to stabilize discriminator training in our experiments (Section 4.4).

4.1 f-DIVERGENCES AND IMITATION LEARNING

The GAIL and GAIFO approaches engage in an adversarial game where the discriminator estimates the divergence between state-action or state-transition distributions according to the Jensen-Shannon divergence (Goodfellow et al., 2014). In this work, our focus is on a more general class of divergences that includes the Jensen-Shannon divergence, known as Ali-Silvey distances or f-divergences (Ali & Silvey, 1966; Csiszár et al., 2004). For two distributions P and Q with support over a domain X and corresponding continuous densities p and q, the f-divergence between them is given by

$$D_f(P \| Q) = \int_X q(x)\, f\!\left(\frac{p(x)}{q(x)}\right) dx \qquad (3)$$

where $f : \mathbb{R}_+ \to \mathbb{R}$ is a convex, lower-semicontinuous function such that $f(1) = 0$. As illustrated in Table 1, different choices of the function f yield well-known divergences between probability distributions. In order to accommodate the tractable estimation of f-divergences when only provided samples from P and Q, Nguyen et al. (2010) offer an approach for variational estimation of f-divergences. Central to their procedure is the use of the convex conjugate function or Fenchel conjugate (Hiriart-Urruty & Lemaréchal, 2004), $f^*$, which exists for all convex, lower-semicontinuous functions f and is defined as the following supremum:

$$f^*(t) = \sup_{u \in \mathrm{dom}_f} \{ut - f(u)\} \qquad (4)$$

Using the duality of the convex conjugate ($f^{**} = f$), Nguyen et al. (2010) represent $f(u) = \sup_{t \in \mathrm{dom}_{f^*}} \{tu - f^*(t)\}$, enabling a variational bound:

$$D_f(P \| Q) = \int_X q(x) \sup_{t \in \mathrm{dom}_{f^*}} \left\{ t\,\frac{p(x)}{q(x)} - f^*(t) \right\} dx \;\geq\; \sup_{T \in \mathcal{T}} \left( \int_X p(x) T(x)\, dx - \int_X q(x) f^*(T(x))\, dx \right) = \sup_{T \in \mathcal{T}} \left( \mathbb{E}_{x \sim P}[T(x)] - \mathbb{E}_{x \sim Q}[f^*(T(x))] \right) \qquad (5)$$

where $\mathcal{T}$ is an arbitrary class of functions $T : X \to \mathrm{dom}_{f^*}$. Nowozin et al. (2016) extend the use of this variational lower bound to GANs that utilize arbitrary f-divergences, or f-GANs. Specifically, the two distributions of interest are the real data distribution P and a synthetic distribution represented by a generative model $Q_\theta$ with parameters $\theta$. The variational function is also parameterized as $T_\omega$, acting as the discriminator. This gives rise to the VDM principle, which defines the f-GAN objective

$$\min_\theta \max_\omega \; \mathbb{E}_{x \sim P}[T_\omega(x)] - \mathbb{E}_{x \sim Q_\theta}[f^*(T_\omega(x))] \qquad (6)$$

Nowozin et al. (2016) represent the variational function as $T_\omega(x) = g_f(V_\omega(x))$, such that $V_\omega : X \to \mathbb{R}$ represents the unconstrained discriminator network while $g_f : \mathbb{R} \to \mathrm{dom}_{f^*}$ is an activation function chosen in accordance with the f-divergence being optimized. Table 1 includes the “somewhat arbitrary” but effective choices for $g_f$ suggested by Nowozin et al. (2016), and we refer readers to their excellent work for more details and properties of f-divergences and f-GANs. Recently, Ke et al. (2019) have formalized the generalization from GAN to f-GAN for the traditional IL problem. They offer the f-Variational Imitation (f-VIM) framework for the specific case of estimating and then minimizing the divergence between state-action distributions induced by expert and imitation policies:

$$\min_\theta \max_\omega \; \mathbb{E}_{(s,a) \sim \rho^{\pi^*}}[g_f(V_\omega(s,a))] - \mathbb{E}_{(s,a) \sim \rho^{\pi_\theta}}[f^*(g_f(V_\omega(s,a)))] \qquad (7)$$

where $V_\omega : S \times A \to \mathbb{R}$ denotes the discriminator network that will supply per-timestep rewards during the outer policy optimization, which itself is carried out over policy parameters $\theta$ via policy-gradient reinforcement learning (Sutton et al., 2000).
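To make the variational estimator in Eq. (5) concrete, the short sketch below (our illustration, not part of the original paper; all names are ours) checks the bound numerically for the KL divergence between two Gaussians. For $f(u) = u \log u$ the conjugate is $f^*(t) = \exp(t - 1)$ and the bound is tight at $T^*(x) = 1 + \log(p(x)/q(x))$, so the Monte Carlo estimate should recover the closed-form KL value.

```python
import numpy as np

# Toy check of Eq. (5): D_f(P||Q) >= E_P[T(x)] - E_Q[f*(T(x))], tight at T = T*.
# Here f(u) = u*log(u) (KL), so f*(t) = exp(t - 1) and T*(x) = 1 + log(p(x)/q(x)).
rng = np.random.default_rng(0)
mu_p, mu_q, sigma = 0.0, 1.0, 1.0            # P = N(0, 1), Q = N(1, 1)

def log_ratio(x):
    # log p(x)/q(x) for two Gaussians with equal variance
    return ((x - mu_q) ** 2 - (x - mu_p) ** 2) / (2 * sigma ** 2)

def f_star(t):
    # convex conjugate of u*log(u)
    return np.exp(t - 1.0)

x_p = rng.normal(mu_p, sigma, 200_000)       # samples from P
x_q = rng.normal(mu_q, sigma, 200_000)       # samples from Q

T_p = 1.0 + log_ratio(x_p)                   # optimal variational function on P-samples
T_q = 1.0 + log_ratio(x_q)                   # ... and on Q-samples
bound = T_p.mean() - f_star(T_q).mean()      # Monte Carlo estimate of the lower bound

closed_form = (mu_p - mu_q) ** 2 / (2 * sigma ** 2)
print(f"variational estimate = {bound:.3f}, closed-form KL = {closed_form:.3f}")
# Both are close to 0.5; any suboptimal choice of T can only decrease the estimate.
```

With a neural network in place of the closed-form $T^*$, this estimate is exactly what the discriminator side of Eq. (6) maximizes.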
For f-VIM in particular, the per-timestep rewards provided to the agent are given according to $r(s, a, s') = f^*(g_f(V_\omega(s, a)))$. While Ke et al. (2019) do an excellent job of motivating the use of f-divergences for IL (by formalizing the relationship between divergences over trajectory distributions vs. state-action distributions) and connecting f-VIM to existing imitation-learning algorithms, their experiments focus on smaller problems to study the mode-seeking/mode-covering aspects of different f-divergences and the implications of such behavior depending on the multimodality of the expert trajectory distribution. Meanwhile, in the course of attempting to apply f-VIM to large-scale imitation problems, we empirically observe numerical instabilities stemming from function approximation, demanding a reformulation of the framework.

4.2 REPARAMETERIZING f-VIM

In their presentation of the f-VIM framework, Ke et al. (2019) retain the choices for the activation function $g_f$ introduced by Nowozin et al. (2016) for f-GANs. Recall that these choices of $g_f$ play a critical role in defining the reward function optimized by the imitation policy on each iteration of f-VIM, $r(s, a, s') = f^*(g_f(V_\omega(s, a)))$. It is well known in the reinforcement-learning literature that the nature of the rewards provided to an agent has strong implications for learning success and efficiency (Ng et al., 1999; Singh et al., 2010). While the activation choices made for f-GANs are suitable given that both optimization problems are carried out by backpropagation, we assert that special care must be taken when specifying these activations (and, implicitly, the reward function) for imitation-learning algorithms. A combination of convex conjugate and activation function could induce a reward function that engenders numerical instability or a simply challenging reward landscape, depending on the underlying policy-gradient algorithm utilized (Henderson et al., 2018). Empirically, we found that the particular activation choices for the KL and reverse KL divergences shown in Table 1 (linear and exponential, respectively) produced imitation-learning algorithms that, in all of our evaluation environments, failed to complete execution due to numerical instabilities caused by exploding policy gradients. In the case of the Total Variation distance, the corresponding f-GAN activation for the variational function is a tanh, requiring a learning agent to traverse a reward interval of [−1, 1] by crossing an intermediate region with reward signals centered around 0. To refactor the f-VIM framework so that it more clearly exposes the choice of reward function to practitioners and shifts the issues of reward scale away from the imitation policy, we propose uniformly applying an activation function $g_f(v) = f^{*-1}(r(v))$, where $f^{*-1}(t)$ denotes the inverse of the convex conjugate (see Table 1). Here r is effectively a free parameter that can be set according to one of the many heuristics used throughout the field of deep reinforcement learning for maintaining a reasonable reward scale (Mnih et al., 2015; 2016; Henderson et al., 2018), so long as it obeys the domain of the inverse conjugate $\mathrm{dom}_{f^{*-1}}$. In selecting $g_f$ accordingly, the reparameterized saddle-point optimization for f-VIM becomes

$$\min_\theta \max_\omega \; \mathbb{E}_{(s,a) \sim \rho^{\pi^*}}[f^{*-1}(r(V_\omega(s,a)))] - \mathbb{E}_{(s,a) \sim \rho^{\pi_\theta}}[r(V_\omega(s,a))] \qquad (8)$$

where the per-timestep rewards used during policy optimization are given by $r(s, a, s') = r(V_\omega(s, a))$.
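As a concrete illustration of the difference between the two parameterizations, the sketch below (ours, not from the paper) contrasts the original f-GAN reward for the Total Variation distance with the reparameterized sigmoid reward. It assumes the standard Table 1 entries from Nowozin et al. (2016), namely $g_f(v) = \tfrac{1}{2}\tanh(v)$ and $f^*(t) = t$ on $[-\tfrac{1}{2}, \tfrac{1}{2}]$, which are not reproduced in this text.

```python
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def tv_reward_original(v):
    # Original f-VIM reward for TV: r = f*(g_f(V)) = 0.5*tanh(V),
    # which forces the agent across a flat region of near-zero rewards.
    return 0.5 * np.tanh(v)

def tv_reward_reparam(v):
    # Reparameterized reward: r(V) = 0.5*sigmoid(V), strictly positive and bounded;
    # since f*^{-1} is the identity for TV, the expert term of Eq. (8) uses the same quantity.
    return 0.5 * sigmoid(v)

v = np.linspace(-5, 5, 5)                    # unconstrained discriminator outputs
print("V        :", np.round(v, 2))
print("original :", np.round(tv_reward_original(v), 3))   # values in [-0.5, 0.5]
print("reparam. :", np.round(tv_reward_reparam(v), 3))    # values in (0, 0.5)
```

The reward landscape seen by the policy-gradient learner is the only thing that changes; the discriminator still receives gradients through both terms by backpropagation.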
In applying this choice, we shift the undesirable scale of the latter term in VDM towards the discriminator, expecting it to be indifferent since training is done by backpropagation. As one potential instantiation, we consider $r(u) = \sigma(u)$, where $\sigma(\cdot)$ denotes the sigmoid function, leading to bounded rewards in the interval [0, 1] that conveniently adhere to $\mathrm{dom}_{f^{*-1}}$ for almost all of the f-divergences examined in this work (for the Total Variation distance, we use $r(u) = \tfrac{1}{2}\sigma(u)$ to remain within $\mathrm{dom}_{f^{*-1}}$). In Section 5, we evaluate imitation-learning algorithms with this choice against those using f-VIM with the original f-GAN activations; we find that, without regard for the scale of rewards and the underlying reinforcement-learning problem being solved, the f-GAN activation choices either produce degenerate solutions or completely fail to produce an imitation policy altogether.

4.3 f-DIVERGENCES AND IMITATION FROM OBSERVATION

Applying the variational lower bound of Nguyen et al. (2010) and the corresponding f-GAN extension, we can now present our Variational Imitation from Observation (f-VIMO) extension for a general family of ILfO algorithms that leverage the VDM principle in the underlying saddle-point optimization. Since optimization of the generator will continue to be carried out by policy-gradient reinforcement learning, we adhere to our reparameterization of the f-VIM framework and present the f-VIMO objective as:

$$\min_\theta \max_\omega \; \mathbb{E}_{(s,s') \sim \rho^{\pi^*}}[f^{*-1}(r(V_\omega(s,s')))] - \mathbb{E}_{(s,s') \sim \rho^{\pi_\theta}}[r(V_\omega(s,s'))] \qquad (9)$$

with the per-timestep rewards given according to $r(s, a, s') = r(V_\omega(s, s'))$. We present the full approach as Algorithm 1. Just as in Section 4.2, we again call attention to Line 5, where the discriminator outputs (acting as individual reward signals) scale the policy gradient, unlike the more conventional discriminator optimization of Line 4 by backpropagation; this key difference is the primary motivator for our specific reparameterization of the f-VIM framework. Just as in the previous section, we take $r(u) = \sigma(u)$ as a particularly convenient choice of activation given its agreement to the inverse conjugate domains $\mathrm{dom}_{f^{*-1}}$ for many choices of f-divergence, and we employ this instantiation throughout all of our experiments. We leave the examination of alternative choices for r to future work.

Algorithm 1 f-VIMO
1: INPUT: Dataset of expert trajectories D, initial policy and discriminator parameters $\theta_0$ and $\omega_0$, number of iterations N, discount factor $\gamma$
2: for i = 0, 1, ..., N do
3:   Sample trajectories from the current imitation policy $\tau_i \sim \pi_{\theta_i}$
4:   $\omega_{i+1} = \omega_i + \nabla_\omega \left( \mathbb{E}_{(s,s') \sim D}[f^{*-1}(r(V_\omega(s,s')))] - \mathbb{E}_{(s,s') \sim \tau_i}[r(V_\omega(s,s'))] \right)$
5:   Update $\theta_i$ to $\theta_{i+1}$ via a policy-gradient update with rewards given by $r(V_\omega(s,s'))$:
     $\theta_{i+1} = \theta_i + \mathbb{E}_{(s,a,s') \sim \tau_i}\left[ \nabla_\theta \log(\pi_{\theta_i}(a|s)) \, \mathbb{E}_{\tau_i}\left[ \sum_{t=1}^{\infty} \gamma^{t-1} r(V_\omega(s_{t-1}, s_t)) \,\middle|\, s_0 = s, s_1 = s' \right] \right]$
6: end for

4.4 DISCRIMINATOR REGULARIZATION

The refactored version of f-VIM presented in Section 4.2 is fundamentally addressing instability issues that may occur on the generator side of adversarial training; in our experiments, we also examine the utility of regularizing the discriminator side of the optimization for improved stability. Following from a line of work examining the underlying mathematical properties of GAN optimization (Roth et al., 2017; 2018; Mescheder et al., 2018), we opt for the simple gradient-based regularization of Mescheder et al.
(2018) which (for f-VIMO) augments the discriminator loss with the following regularization term:

$$R(\omega) = \frac{\psi}{2} \, \mathbb{E}_{(s,s') \sim \rho^{\pi^*}}\!\left[ \left\| \nabla_\omega f^{*-1}(r(V_\omega(s,s'))) \right\|^2 \right] \qquad (10)$$

where $\psi$ is a hyperparameter controlling the strength of the regularization. The form of this specific penalty follows from the analysis of Roth et al. (2017); intuitively, its purpose is to disincentivize the discriminator from producing a non-zero gradient that shifts away from the Nash equilibrium of the minimax optimization when presented with a generator that perfectly matches the true data distribution. While originally developed for traditional GANs and shown to empirically exhibit stronger convergence properties over Wasserstein GANs (Gulrajani et al., 2017), this effect is still desirable for the adversarial IL setting where the reward function (discriminator) used for optimizing the imitation policy should stop changing once the expert state-transition distribution has been matched. In practice, we compare f-VIM and f-VIMO both with and without the use of this regularization term and find that $R(\omega)$ can improve the stability and convergence of f-VIMO across almost all domains.

5 EXPERIMENTS

We examine four instantiations of the f-VIM and f-VIMO frameworks (as presented in Sections 4.2 and 4.3) corresponding to imitation algorithms with the following choices of f-divergence: GAN, Kullback-Leibler, reverse KL, and Total Variation. We conduct our evaluation across four MuJoCo environments (Todorov et al., 2012) of varying difficulty: Ant, Hopper, HalfCheetah, and Walker (see the Appendix for more details on individual environments). The core questions we seek to answer through our empirical results are as follows:
1. What are the implications of the choice of activation for the variational function in f-VIM on imitation policy performance?
2. Do f-divergences act as a meaningful axis of variation for IL and ILfO algorithms?
3. What is the impact of discriminator regularization on the stability and convergence properties of f-VIM/f-VIMO?
4. How does the impact of different f-divergences vary with the amount of expert demonstration data provided?
To answer the first three questions above, we report the average total reward achieved by the imitation policy throughout the course of learning, with rewards as defined by the corresponding OpenAI Gym environment (Brockman et al., 2016). Shading in all plots denotes 95% confidence intervals computed over 10 random trials with 10 random seeds. Expert demonstration datasets of 50 trajectories were collected from agents trained via Proximal Policy Optimization (PPO) (Schulman et al., 2017); 20 expert demonstrations were randomly subsampled at the start of learning and held fixed for the duration of the algorithm. We also utilize PPO as the underlying reinforcement-learning algorithm for training the imitation policy, with a clipping parameter of 0.2, advantage normalization, an entropy regularization coefficient of 1e−3, and the Adam optimizer (Kingma & Ba, 2014). Just as in Ho & Ermon (2016), we use a discount factor of γ = 0.995 and apply Generalized Advantage Estimation (Schulman et al., 2015) with parameter λ = 0.97. We run both f-VIM and f-VIMO for a total of 500 iterations, collecting 50,000 environment samples per iteration. The policy and discriminator architectures are identical: two separate multi-layer perceptrons, each with two hidden layers of 100 units separated by tanh nonlinearities.
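For concreteness, the following minimal PyTorch sketch shows one discriminator update of the kind described above (Line 4 of Algorithm 1 with the sigmoid reward and a gradient penalty in the spirit of Eq. (10)). It is our own illustration rather than the authors' code, all names are ours, and the Total Variation instantiation is assumed; where Eq. (10) is written with a gradient with respect to ω, the sketch follows the usual convention of the R1 penalty of Mescheder et al. (2018) and penalizes the gradient with respect to the expert inputs.

```python
import torch
import torch.nn as nn

def make_discriminator(in_dim):
    # Two hidden layers of 100 tanh units, matching the architecture described above.
    return nn.Sequential(nn.Linear(in_dim, 100), nn.Tanh(),
                         nn.Linear(100, 100), nn.Tanh(),
                         nn.Linear(100, 1))

def discriminator_step(V, opt, expert_ss, policy_ss, psi=10.0):
    """One discriminator update for the TV instantiation, where f*^{-1} is the
    identity and the per-timestep reward is r(u) = 0.5*sigmoid(u)."""
    expert_ss = expert_ss.clone().requires_grad_(True)   # (s, s') pairs from expert data
    r_expert = 0.5 * torch.sigmoid(V(expert_ss))
    r_policy = 0.5 * torch.sigmoid(V(policy_ss))          # (s, s') pairs from the imitation policy

    objective = r_expert.mean() - r_policy.mean()          # quantity the discriminator maximizes

    # Gradient penalty on expert samples (R1-style regularization).
    grads = torch.autograd.grad(r_expert.sum(), expert_ss, create_graph=True)[0]
    penalty = 0.5 * psi * grads.pow(2).sum(dim=1).mean()

    loss = -objective + penalty
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```

For the IL (state-action) variants, the same update applies with (s, a) pairs in place of (s, s') pairs; the imitation policy is then trained by PPO on the per-timestep rewards 0.5*sigmoid(V(s, s')).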
A grid search was used for determining the initial learning rate, number of PPO epochs, and number of epochs used for discriminator training (please see the Appendix for more details) and we report results for the best hyperparameter settings. To address our final question, we take the best hyperparameter settings recovered when given 20 expert demonstrations and re-run all algorithms with {1, 5, 10, 15} expert demonstrations that are randomly sampled at the start of each random trial and held fixed for the duration of the algorithm. We then record the average return of the final imitation policy for each level of expert demonstration. 6 RESULTS & DISCUSSION To highlight the importance of carefully selecting the variational function activation gf and validate our modifications to the f -VIM framework, we present results in Figure 2 comparing to the original f -VIM framework of Ke et al. (2019) and its natural ILfO counterpart. Activation functions for the original methods are chosen according to the choices outlined in Ke et al. (2019); Nowozin et al. (2016). In our experiments using the KL and reverse KL divergences, we found that none of the trials reached completion due to numerical instabilities caused by exploding policy gradients. Consequently, we only present results for the Total Variation distance. We observe that under the original f -GAN activation selection, we fail to produce meaningful imitation policies with learning stagnating after 100 iterations or less. As previously mentioned, we suspect that this stems from the use of tanh with TV leading to a dissipating reward signal. We present results in Figure 3 to assess the utility of varying the choice of divergence in f -VIM and f -VIMO across each domain. In considering the impact of f -divergence choice, we find that most of the domains must be examined in isolation to observe a particular subset of f -divergences that stand out. In the IL setting, we find that varying the choice of f -divergence can yield different learning curves but, ultimately, produce near-optimal (if not optimal) imitation policies across all domains. In contrast, we find meaningful choices of f -divergence in the ILfO setting including {KL, TV} for Hopper, RKL for HalfCheetah, and {GAN, TV} for Walker. We note that the use of discriminator regularization per Mescheder et al. (2018) is crucial to achieving these performance gains, whereas the regularization generally fails to help performance in the IL setting. This finding is supportive of the logical intuition that ILfO poses a fundamentally more-challenging problem than standard IL. As a negative result, we find that the Ant domain (the most difficult environment with S ⊂ R111 and A ⊂ R8) still poses a challenge for ILfO algorithms across the board. More specifically, we observe that discriminator regularization hurts learning in both the IL and ILfO settings. While the choice of RKL does manage to produce a marginal improvement over GAIFO, the gap between existing stateof-the-art and expert performance remains unchanged. It is an open challenge for future work to either identify the techniques needed to achieve optimal imitation policies from observations only or characterize a fundamental performance gap when faced with sufficiently large observation spaces. In Figure 4, we vary the total number of expert demonstrations available during learning and observe that certain choices of f -divergences can be more robust in the face of less expert data, both in the IL and ILfO settings. 
We find that KL-VIM and TV-VIM are slightly more performant than GAIL when only provided with a single expert demonstration. Notably, in each domain we see that certain choices of divergence for f -VIMO do a better job of residing close to their f -VIM counterparts suggesting that future improvements may come from examining f -divergences in the small-data regime. This idea is further exemplified when accounting for results collected while using discriminator regularization (Mescheder et al., 2018). We refer readers to the Appendix for the associated learning curves. Our work leaves many open directions for future work to close the performance gap between student and expert policies in the ILfO setting. While we found the sigmoid function to be a suitable instantiation of our framework, exploring alternative choices of variational function activations could prove useful in synthesizing performant ILfO algorithms. Alternative choices of f -divergences could lead to more substantial improvements than the choices we examine in this paper. Moreover, while this work has a direct focus on f -divergences, Integral Probability Metrics (IPMs) (Müller, 1997; Gretton et al., 2012) represent a distinct but well-established family of divergences between probability distributions. The success of Total Variation distance in our experiments, which doubles as both a f - divergence and IPM (Sriperumbudur et al., 2009), is suggestive of future work building IPM-based ILfO algorithms (Sun et al., 2019). 7 CONCLUSION In this work, we present a general framework for imitation learning and imitation learning from observations under arbitrary choices of f -divergence. We empirically validate a single instantiation of our framework across multiple f -divergences, demonstrating that we overcome the shortcomings of prior work and offer a wide class of IL and ILfO algorithms capable of scaling to larger problems. A RELATED WORK A.1 LEARNING FROM DEMONSTRATION Our work broadly falls within the category of Learning from Demonstration (LfD) (Schaal, 1997; Atkeson & Schaal, 1997; Argall et al., 2009), where an agent must leverage demonstration data (typically provided as trajectories, each consisting of expert state-action pairs) to produce an imitation policy that correctly captures the demonstrated behavior. Within the context of LfD, a finer distinction can be made between behavioral cloning (BC) (Bain & Sommut, 1999; Pomerleau, 1989) and inverse reinforcement learning (IRL) (Ng et al.; Abbeel & Ng, 2004; Syed & Schapire, 2007; Ziebart et al., 2008; Finn et al., 2016; Ho & Ermon, 2016) approaches; BC approaches view the demonstration data as a standard dataset of input-output pairs and apply traditional supervisedlearning techniques to recover an imitation policy. Alternatively, IRL-based methods synthesize an estimate of the reward function used to train the expert policy before subsequently applying a reinforcement-learning algorithm (Sutton & Barto, 1998; Abbeel & Ng, 2004) to recover the corresponding imitation policy. Although not a focus of this work, we also acknowledge the myriad of approaches that operate at the intersection of IL and reinforcement learning or augment reinforcement learning with IL (Rajeswaran et al., 2017; Hester et al., 2018; Salimans & Chen, 2018; Sun et al., 2018; Borsa et al., 2019; Tirumala et al., 2019). 
While BC approaches have been successful in some settings (Niekum et al., 2015; Giusti et al., 2016; Bojarski et al., 2016), they are also susceptible to failures stemming from covariate shift where minute errors in the actions of the imitation policy compound and force the agent into regions of the state space not captured in the original demonstration data. While some preventative measures for covariate shift do exist (Laskey et al., 2017b), a more principled solution can be found in methods like DAgger (Ross et al., 2011) and its descendants (Ross & Bagnell, 2014; Sun et al., 2017; Le et al., 2018) that remedy covariate shift by querying an expert to provide on-policy action labels. It is worth noting, however, that these approaches are only feasible in settings that admit such online interaction with an expert (Laskey et al., 2016) and, even then, failure modes leading to poor imitation policies do exist (Laskey et al., 2017a). The algorithms presented in this work fall in with IRL-based approaches to IL. Early successes in this regime tend to rely on hand-engineered feature representations for success (Abbeel & Ng, 2004; Ziebart et al., 2008; Levine et al., 2011). Only in recent years, with the aid of deep neural networks, has there been a surge in the number of approaches that are capable of scaling to the raw, highdimensional observations found in real-world control problems (Finn et al., 2016; Ho & Ermon, 2016; Duan et al., 2017; Li et al., 2017; Fu et al., 2017; Kim & Park, 2018). Our work focuses attention exclusively on adversarial methods for their widespread effectiveness across a range of imitation tasks without requiring interactive experts (Ho & Ermon, 2016; Li et al., 2017; Fu et al., 2017; Kostrikov et al., 2018); at the heart of these methods is the Generative Adversarial Imitation Learning (GAIL) (Ho & Ermon, 2016) approach which produces high-fidelity imitation policies and achieves state-of-the-art results across numerous continuous-control benchmarks by leveraging the expressive power of Generative Adversarial Networks (GANs) (Goodfellow et al., 2014) for modeling complex distributions over a high-dimensional support. From an IRL perspective, GAIL can be viewed as iteratively optimizing a parameterized reward function (discriminator) that, when used to optimize an imitation policy (generator) via policy-gradient reinforcement learning (Sutton et al., 2000), allows the agent to shift its own behavior closer to that of the expert. From the perspective of GANs, this is achieved by discriminating between the respective distributions over state-action pairs visited by the imitation and expert policies before training a generator to fool the discriminator and induce a state-action visitation distribution similar to that of the expert. While a large body of prior work exists for IL, numerous recent works have drawn attention to the more challenging problem of imitation learning from observation (Sermanet et al., 2017; Liu et al., 2018; Goo & Niekum, 2018; Kimura et al., 2018; Torabi et al., 2018a;b; Edwards et al., 2019; Sun et al., 2019). In an effort to more closely resemble observational learning in humans and leverage the wealth of publicly-available, observation-only data sources, the ILfO problem considers learning from expert demonstration data where no expert action labels are provided. 
Many early approaches to ILfO use expert observation sequences to learn a semantic embedding space so that distances between observation sequences of the imitation and expert policies can serve as a cost signal to be minimized via reinforcement learning (Gupta et al., 2017; Sermanet et al., 2017; Dwibedi et al., 2018; Liu et al., 2018). In contrast, Torabi et al. (2018a) introduce Behavioral Cloning from Observation (BCO) which leverages state-action trajectories collected under a random policy to train an inverse dynamics model for inferring the action responsible for a transition between two input states (assuming the two represent a state and next-state pair). With this inverse model in hand, the observation-only demonstration data can be converted into the more traditional dataset of stateaction pairs over which standard BC can be applied. Recognizing the previously discussed limitations of BC approaches, Torabi et al. (2018b) introduce the natural GAIL counterpart for ILfO, Generative Adversarial Imitation from Observation (GAIFO); GAIFO is identical to GAIL except the distributions under consideration in the adversarial game are over state transitions (state and next-state pairs), as opposed to state-action pairs requiring expert action labels. While Torabi et al. (2018b) offer empirical results for continuous-control tasks with low-dimensional features as well as raw image observations, GAIFO falls short of expert performance in both settings leaving an open challenge for scalable ILfO algorithms that achieve expert performance across a wide spectrum of tasks. A central question of this work is to explore how alternative formulations of the GAN objective that underlies GAIFO might yield superior ILfO algorithms. For a more in-depth survey of ILfO approaches, we refer readers to Torabi et al. (2019). A.2 GENERATIVE ADVERSARIAL NETWORKS With a focus on generative-adversarial methods for IL, this work leverages several related ideas in the GAN literature for offering alternative formulations as well as improving understanding of their underlying mathematical foundations (Li et al., 2015; Dziugaite et al., 2015; Zhao et al., 2016; Nowozin et al., 2016; Roth et al., 2017; Arjovsky et al., 2017; Gulrajani et al., 2017; Roth et al., 2018; Mescheder et al., 2018). Critical to the ideas presented in many of these previous works is an understanding that discriminator networks are estimating a divergence between two probability distributions of interest, usually taken to be the real data distribution and the fake or synthetic distribution represented by the generator. Formal characterizations of this divergence, either by Integral Probability Metrics (IPMs) (Müller, 1997; Gretton et al., 2012) or f -divergences (Ali & Silvey, 1966; Csiszár et al., 2004; Liese & Vajda, 2006), yield different variations on the classic GAN formulation which is itself a slight variation on the Jensen-Shannon (JS) divergence (Li et al., 2015; Dziugaite et al., 2015; Zhao et al., 2016; Nowozin et al., 2016; Arjovsky et al., 2017; Gulrajani et al., 2017). Following from work by Nowozin et al. (2016) to generalize the GAN objective to arbitrary f -divergences, Ke et al. (2019) offer a generalization of GAIL to an arbitrary choice of f -divergence for quantifying the gap between the state-action visitation distributions of the imitation and expert policies; moreover, Ke et al. 
(2019) propose a unifying framework for IL, f-Variational IMitation (f-VIM), in which they highlight a correspondence between particular choices of f-divergences and existing IL algorithms (specifically BC ⇔ Kullback-Leibler (KL) divergence, DAgger ⇔ Total Variation distance, and GAIL ⇔ JS-divergence; the discriminator loss optimized in the original GAN formulation is $2 \cdot D_{JS} - \log(4)$, where $D_{JS}$ denotes the Jensen-Shannon divergence (Goodfellow et al., 2014; Nowozin et al., 2016)). While Ke et al. (2019) focus on providing empirical results in smaller toy problems to better understand the interplay between f-divergence choice and the multimodality of the expert trajectory distribution, we provide an empirical evaluation of their f-VIM framework across a range of continuous-control tasks in the MuJoCo domain (Todorov et al., 2012). Empirically, we find that some of the design choices f-VIM inherits from the original f-GAN work (Nowozin et al., 2016) are problematic when coupled with adversarial IL and training of the generator by policy-gradient reinforcement learning, instead of via direct backpropagation as in traditional GANs. Consequently, we refactor their framework to expose this point and provide one practical instantiation that works well empirically. We then go on to extend the f-VIM framework to the ILfO problem (f-VIMO) and evaluate the resulting algorithms empirically against the state-of-the-art, GAIFO.

B EXPERIMENT DETAILS

Here we provide details of the MuJoCo environments (Todorov et al., 2012) used in our experiments as well as the details of the hyperparameter search conducted for all algorithms (IL and ILfO) presented.

B.1 MUJOCO ENVIRONMENTS

All environments have continuous observation and action spaces of varying dimensionality (as shown below). All algorithms evaluated in each environment were trained for a total of 500 iterations, collecting 50,000 environment transitions per iteration.

Task             Observation Space   Action Space
Ant-v2           R^111               R^8
Hopper-v2        R^11                R^3
HalfCheetah-v2   R^17                R^6
Walker2d-v2      R^17                R^6

B.2 HYPERPARAMETERS

Below we outline the full set of hyperparameters examined for all experiments presented in this work. We conducted a full grid search over 10 random trials with 10 random seeds and report results for the best hyperparameter setting.

Hyperparameter                Values
Discriminator learning rate   {1e−4, 1e−3}
PPO epochs                    {5, 10}
Discriminator epochs          {1, 5, 10}

Preliminary experiments were conducted to test smaller values for PPO epochs and policy learning rates before settling on the grid shown above.

C ADDITIONAL RESULTS

C.1 UNREGULARIZED f-VIM/VIMO

C.2 SAMPLE COMPLEXITY LEARNING CURVES

C.3 f-DIVERGENCE VARIATIONAL BOUND SWAP

Throughout this paper, we advocate for the use of the following variational lower bound to the f-divergence for both f-VIM and f-VIMO:

$$D_f(\rho^{\pi^*} \| \rho^{\pi_\theta}) \geq \min_\theta \max_\omega \; \mathbb{E}_{(s,s') \sim \rho^{\pi^*}}[f^{*-1}(r(V_\omega(s,s')))] - \mathbb{E}_{(s,s') \sim \rho^{\pi_\theta}}[r(V_\omega(s,s'))] \qquad (11)$$

In particular, we value the above form as it clearly exposes the choice of reward function for the imitation policy as a free parameter that, in practice, has strong implications for the stability and convergence of adversarial IL/ILfO algorithms. Alternatively, one may consider appealing to the original lower bound of Nguyen et al.
(2010), used in f-GANs (Nowozin et al., 2016) unmodified, but swapping the positions of the two distributions:

$$D_f(\rho^{\pi_\theta} \| \rho^{\pi^*}) \geq \min_\theta \max_\omega \; \mathbb{E}_{(s,s') \sim \rho^{\pi_\theta}}[g_f(V_\omega(s,s'))] - \mathbb{E}_{(s,s') \sim \rho^{\pi^*}}[f^*(g_f(V_\omega(s,s')))] \qquad (12)$$

Consequently, the term in this lower bound pertaining to the imitation policy is now similar to that of the bound in Equation 11; namely, an almost arbitrary activation function, $g_f$, applied to the output of the variational function (discriminator) $V_\omega$. The difference is that the codomain of $g_f$ must obey the domain of the convex conjugate, $f^*$, while the codomain of r must respect the domain of the inverse convex conjugate, $f^{*-1}$. We evaluate these two choices empirically below for the specific choice of the KL-divergence in the Ant and Hopper domains (the two most difficult domains of our evaluation). We find that the original, unswapped bound in Equation 11 used throughout this paper outperforms the variants with the distributions swapped, for both the IL and ILfO settings. Crucially, we find that KL-VIM in the Ant domain no longer achieves expert performance when optimizing the swapped bound.
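For readers who prefer code to equations, the fragment below (our sketch, not the authors' implementation) spells out the discriminator objectives implied by the unswapped bound of Eq. (11) and the swapped bound of Eq. (12) for the KL divergence. The KL entries $f^*(t) = \exp(t - 1)$, $f^{*-1}(r) = 1 + \log r$, and $g_f(v) = v$ follow Nowozin et al. (2016) and are assumptions about Table 1, which is not reproduced in this text.

```python
import torch

def sigmoid_reward(v):
    # r(V) in Eq. (8)/(11): strictly positive, so 1 + log(r) in the expert term is well defined.
    return torch.sigmoid(v)

def unswapped_kl_objective(V, expert_ss, policy_ss):
    # Eq. (11): maximize E_expert[f*^{-1}(r(V))] - E_policy[r(V)], with f*^{-1}(r) = 1 + log(r).
    r_e = sigmoid_reward(V(expert_ss))
    r_p = sigmoid_reward(V(policy_ss))
    return (1.0 + torch.log(r_e)).mean() - r_p.mean()    # the policy is rewarded with r(V)

def swapped_kl_objective(V, expert_ss, policy_ss):
    # Eq. (12): maximize E_policy[g_f(V)] - E_expert[f*(g_f(V))], with g_f(v) = v, f*(t) = exp(t - 1).
    t_p = V(policy_ss)
    t_e = V(expert_ss)
    return t_p.mean() - torch.exp(t_e - 1.0).mean()       # the policy is penalized with cost g_f(V)
```

The practical difference is exactly the one discussed above: under the swapped bound the policy's learning signal is the unconstrained discriminator output passed through $g_f$, whereas under the unswapped bound it is the bounded, freely chosen reward r.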
1. What are the main contributions of the paper, particularly in empirical analysis and imitation learning? 2. How does the reviewer assess the significance and impact of the proposed contributions? 3. What are the concerns regarding the choice of output activation and its theoretical analysis? 4. How does the reviewer evaluate the effectiveness of the suggested regularizer in adversarial imitation learning? 5. What are the issues with handling state-only demonstrations, and how does the reviewer suggest improving this aspect? 6. How does the reviewer assess the clarity and quality of the writing in the paper? 7. What is the final decision on the paper, and what are the reasons behind it? 8. Are there any specific questions or points that the reviewer would like the authors to address in their response?
Review
Summary: The submission performs an empirical analysis of f-VIM (Ke et al., 2019), a method for imitation learning by f-divergence minimization. The paper especially focuses on a state-only formulation akin to GAILfO (Torabi et al., 2018b). The main contributions are: 1) The paper identifies numerical problems with the output activations of f-VIM and suggests a scheme for choosing them such that the resulting rewards are bounded. 2) A regularizer that was proposed by Mescheder et al. (2018) for GANs is tested in the adversarial imitation learning setting. 3) In order to handle state-only demonstrations, the technique of GAILfO is applied to f-VIM (then denoted f-VIMO), which inputs state-nextState pairs instead of state-action pairs to the discriminator.
Contribution / Significance: I think that the contributions of the paper are rather marginal. I do think that the choice of output activation may have a large impact on performance, and the activations suggested by Ke et al. (2019) do seem somewhat arbitrary. However, the activations proposed in the current submission also seem somewhat arbitrary and are not accompanied by any theoretical analysis. Contributions 2) and 3) are marginal combinations of existing work that are only insufficiently evaluated and do not seem particularly effective. Hence, I think that the current submission is of rather limited interest.
Soundness: The "reparametrization" of f-VIM is motivated by exploding policy gradients when using unbounded reward functions, especially when minimizing the (R)KL. I am not convinced by this motivation, given that GAIL and AIRL (which approximately minimizes the RKL) use unbounded reward functions and do not seem to suffer from such problems.
Evaluation: The effect of the "reparametrization" is only evaluated for total variation. The regularization loss is only evaluated with a single fixed coefficient of 10 in all experiments. I think that a sweep over the coefficient would be mandatory, especially given that the current experiments do not show a clear benefit of the regularization loss (the regularized version performs worse on roughly half of the experiments). When learning from observations only, the submission only evaluates the proposed combination of f-VIM and GAILfO. However, it seems like it would be perfectly possible to handle state-only observations by simply making the discriminator independent of the actions, i.e. using D(s,a) = D(s). Such a technique matches the marginal distributions over states and is commonly applied to GAIL, e.g. by Peng et al. [1]. It is not clear whether the reported problems of learning from observations only are really a general problem of the learning setting (as claimed in the submission) or a problem of the proposed method.
Clarity: The paper is well written and easy to follow. Using different line styles to distinguish learning with regularization from learning without regularization would help a lot.
Decision: Due to the marginal contribution and the insufficient evaluation, I have to recommend rejection.
Question: I am mainly interested in the authors' response to my critique, especially regarding
- the choice not to compare with state-only f-VIM, and
- the motivation of the proposed output activations.
[1] Peng, Xue Bin, et al. "Variational discriminator bottleneck: Improving imitation learning, inverse rl, and gans by constraining information flow." arXiv preprint arXiv:1810.00821 (2018).
ICLR
(2018a) introduce Behavioral Cloning from Observation (BCO) which leverages state-action trajectories collected under a random policy to train an inverse dynamics model for inferring the action responsible for a transition between two input states (assuming the two represent a state and next-state pair). With this inverse model in hand, the observation-only demonstration data can be converted into the more traditional dataset of state-action pairs over which standard BC can be applied. Recognizing the previously discussed limitations of BC approaches, Torabi et al. (2018b) introduce the natural GAIL counterpart for ILfO, Generative Adversarial Imitation from Observation (GAIFO); GAIFO is identical to GAIL except the distributions under consideration in the adversarial game are over state transitions (state and next-state pairs), as opposed to stateaction pairs requiring expert action labels. While Torabi et al. (2018b) offer empirical results for continuous-control tasks with low-dimensional features as well as raw image observations, GAIFO falls short of expert performance in both settings leaving an open challenge for scalable ILfO algorithms that achieve expert performance across a wide spectrum of tasks. A central question of this work is to explore how alternative formulations of the GAN objective that underlies GAIFO might yield superior ILfO algorithms. For a more in-depth survey of ILfO approaches, we refer readers to Torabi et al. (2019). We refer readers to the Appendix for a broader overview of prior work. 3 BACKGROUND We begin by formulating the problems of imitation learning and imitation learning from observation respectively before taking a closer look at f -divergences and connecting them to imitation learning. 3.1 IMITATION LEARNING & IMITATION FROM OBSERVATION We operate within the Markov Decision Process (MDP) formalism (Bellman, 1957; Puterman, 2014) defined as a five-tupleM = 〈S,A,R, T , γ〉 where S denotes a (potentially infinite) set of states, A denotes a (potentially infinite) set of actions, R : S × A × S → R is a reward function, T : S × A → ∆(S) is a transition function, and γ ∈ [0, 1) is a discount factor. At each timestep, the agent observes the current state of the world, st ∈ S, and randomly samples an action according to its stochastic policy π : S → ∆(A). The environment then transitions to a new state according to the transition function T and produces a reward signal according to the reward function R that is communicative of the agent’s progress through the overall task. Unlike, the traditional reinforcement learning paradigm, the decision-making problem presented in IL lacks a concrete reward function; in lieu of R, a learner is provided with a dataset of expert demonstrationsD = {τ1, τ2, . . . τN}where each τi = (si1, ai1, si2, ai2, . . .) represents the sequence of states and corresponding actions taken by an expert policy, π∗. Naturally, the goal of an IL algorithm is to synthesize a policy π using D, along with access to the MDPM, whose behavior matches that of π∗. While the previous section outlines several possible avenues for using D to arrive at a satisfactory imitation policy, our work focuses on adversarial methods that build around GAIL (Ho & Ermon, 2016). 
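As a concrete illustration of the data interface described above, the following minimal Python/NumPy sketch (all names are illustrative, not taken from the paper's code) shows how a demonstration set D of expert trajectories can be flattened into the state-action pairs consumed by adversarial IL, and into the state/next-state pairs used once action labels are dropped in the observation-only setting discussed next.

```python
import numpy as np

class Trajectory:
    """One expert rollout tau_i = (s_1, a_1, s_2, a_2, ...)."""
    def __init__(self, states, actions):
        self.states = np.asarray(states)    # shape (T, state_dim)
        self.actions = np.asarray(actions)  # shape (T, action_dim)

def state_action_pairs(demos):
    """Flatten D = {tau_1, ..., tau_N} into (s, a) samples for GAIL-style IL."""
    s = np.concatenate([t.states for t in demos])
    a = np.concatenate([t.actions for t in demos])
    return s, a

def state_transition_pairs(demos):
    """Drop action labels: (s, s') samples for the observation-only (ILfO) setting."""
    s = np.concatenate([t.states[:-1] for t in demos])
    s_next = np.concatenate([t.states[1:] for t in demos])
    return s, s_next
```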
Following from the widespread success of GANs (Goodfellow et al., 2014), GAIL offers a highly-performant approach to IL wherein, at each iteration of the algorithm, transitions sampled from the current imitation policy are first used to update a discriminator, Dω(s, a), that acts a binary classifier to distinguish between state-action pairs sampled according to the distributions induced by the expert and student. Subsequently, treating the imitation policy as a generator, policy-gradient reinforcement learning is used to shift the current policy towards expert behavior, issuing higher rewards for those generated state-action pairs that are regarded as belonging to the expert according to Dω(s, a). More formally, this minimax optimization follows as min π max ω E(s,a)∼ρπ∗ [log(Dω(s, a))] + E(s,a)∼ρπ [log(1−Dω(s, a))] (1) where ρπ ∗ (s, a) and ρπ(s, a) denote the undiscounted stationary distributions over state-action pairs for the expert and imitation policies respectively. Here Dω(s, a) = σ(Vω(s, a)) where Vω(s, a) represents the unconstrained output of a discriminator neural network with parameters ω and σ(v) = (1 + e−x)−1 denotes the sigmoid activation function. Since the imitation policy only exerts control over the latter term in the above objective, the per-timestep reward function maximized by reinforcement learning is given as r(s, a, s′) = − log(1 − Dω(s, a)). In practice, an entropy regularization term is often added to the objective when optimizing the imitation policy so as to avoid premature convergence to a suboptimal solution (Mnih et al., 2016; Ho & Ermon, 2016; Neu et al., 2017). In order to accommodate various observation-only data sources (Zhou et al., 2018) and remove the burden of requiring expert action labels, the imitation from observation setting adjusts the expert demonstration dataset D such that each trajectory τi = (si1, si2, . . .) consists only of expert observation sequences. Retaining the goal of recovering an imitation policy that closely resembles expert behavior, Torabi et al. (2018b) introduce GAIFO as the natural extension of GAIL for matching the state transition distribution of the expert policy. Note that an objective for matching the stationary distribution over expert state transitions enables the provision of per-timestep feedback while simultaneously avoid the issues of temporal alignment that arise when trying to match trajectories directly. The resulting algorithm iteratively finds a solution to the following minimax optimization: min π max ω E(s,s′)∼ρπ∗ [log(Dω(s, s′))] + E(s,s′)∼ρπ [log(1−Dω(s, s′))] (2) where ρπ ∗ (s, s′) and ρπ(s, s′) now denote the analogous stationary distributions over successive state pairs while Dω(s, s′) = σ(Vω(s, s′)) represents binary classifier over state pairs. Similar to GAIL, the imitation policy is optimized via policy-gradient reinforcement learning with per-timestep rewards computed according to r(s, a, s′) = − log(1−Dω(s, s′)) and using entropy regularization as needed. 4 APPROACH In this section, we begin with an overview of f -divergences, their connection to GANs, and their impact on IL through the f -VIM framework (Ke et al., 2019) (Section 4.1). We then present an alternative view of the framework that transparently exposes the fundamental choice practictioners must make in order to circumvent practical issues that arise when applying f -VIM to high-dimensional tasks (Section 4.2). 
We conclude by presenting our approach for ILfO as f -divergence minimization (Section 4.3) followed by a brief discussion of a regularization technique used to stabilize discriminator training in our experiments (Section 4.4). 4.1 f -DIVERGENCES AND IMITATION LEARNING The GAIL and GAIFO approaches engage in an adversarial game where the discriminator estimates the divergence between state-action or state transition distributions according to the JensenShannon divergence (Goodfellow et al., 2014). In this work, our focus is on a more general class of divergences, that includes the Jensen-Shannon divergence, known as Ali-Silvey distances or f - divergences (Ali & Silvey, 1966; Csiszár et al., 2004). For two distributions P and Q with support over a domainX and corresponding continuous densities p and q, we have the f -divergence between them according to: Df (P ||Q) = ∫ X q(x)f( p(x) q(x) )dx (3) where f : R+ → R is a convex, lower-semicontinuous function such that f(1) = 0. As illustrated in Table 1, different choices of function f yield well-known divergences between probability distributions. In order to accommodate the tractable estimation of f -divergences when only provided samples from P and Q, Nguyen et al. (2010) offer an approach for variational estimation of f -divergences. Central to their procedure is the use of the convex conjugate function or Fenchel conjugate (Hiriart-Urruty & Lemaréchal, 2004), f∗, which exists for all convex, lower-semicontinuous functions f and is defined as the following supremum: f∗(t) = sup u∈domf {ut− f(u)} (4) Using the duality of the convex conjugate (f∗∗ = f ), Nguyen et al. (2010) represent f(u) = sup t∈domf∗ {tu− f∗(t)} enabling a variational bound: Df (P ||Q) = ∫ X q(x) sup t∈domf∗ { t p(x) q(x) − f∗(t) } dx ≥ sup T∈T ( ∫ X p(x)T (x)dx− ∫ X q(x)f∗(T (x))dx) = sup T∈T (Ex∼P [T (x)]− Ex∼Q[f∗(T (x))]) (5) where T is an arbitrary class of functions T : X → domf∗ . Nowozin et al. (2016) extend the use of this variational lower bound for GANs that utilize arbitrary f -divergences, or f -GANs. Specifically, the two distributions of interest are the real data distribution P and a synthetic distribution represented by a generative model Qθ with parameters θ. The variational function is also parameterized as Tω acting as the discriminator. This gives rise to the VDM principle which defines the f -GAN objective min θ max ω Ex∼P [Tω(x)]− Ex∼Qθ [f∗(Tω(x))] (6) Nowozin et al. (2016) represent the variational function as Tω(x) = gf (Vω(x)) such that Vω(x) : X → R represents the unconstrained discriminator network while gf : R→ domf∗ is an activation function chosen in accordance with the f -divergence being optimized. Table 1 includes the “somewhat arbitrary” but effective choices for gf suggested by Nowozin et al. (2016) and we refer readers to their excellent work for more details and properties of f -divergences and f -GANs. Recently, Ke et al. (2019) have formalized the generalization from GAN to f -GAN for the traditional IL problem. They offer the f -Variational Imitation (f -VIM) framework for the specific case of estimating and then minimizing the divergence between state-action distributions induced by expert and imitation policies: min θ max ω E(s,a)∼ρπ∗ [gf (Vω(s, a))]− E(s,a)∼ρπθ [f∗(gf (Vω(s, a)))] (7) where Vω : S × A → R denotes the discriminator network that will supply per-timestep rewards during the outer policy optimization which itself is carried out over policy parameters θ via policygradient reinforcement learning (Sutton et al., 2000). 
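To make the objective in Equation 7 concrete, here is a small PyTorch sketch of the f-VIM discriminator objective for a generic choice of output activation g_f and conjugate f*. The Total Variation instantiation shown (f*(t) = t on |t| ≤ 1/2, g_f(v) = 0.5·tanh(v)) follows the standard f-GAN listing and should be checked against Table 1; all function and variable names are illustrative rather than taken from the authors' code.

```python
import torch

# Monte Carlo sketch of the f-VIM discriminator objective (Eq. 7):
#   max_w  E_expert[g_f(V_w(s, a))] - E_policy[f*(g_f(V_w(s, a)))]

def g_f_tv(v):            # output activation mapping R -> dom f* (TV row)
    return 0.5 * torch.tanh(v)

def f_star_tv(t):         # convex conjugate of f for Total Variation
    return t

def f_vim_discriminator_objective(V, expert_sa, policy_sa,
                                  g_f=g_f_tv, f_star=f_star_tv):
    """Estimate to be *maximized* w.r.t. the discriminator network V."""
    expert_term = g_f(V(expert_sa)).mean()
    policy_term = f_star(g_f(V(policy_sa))).mean()
    return expert_term - policy_term
```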
In particular, the per-timestep rewards provided to the agent are given according to r(s, a, s′) = f∗(gf (Vω(s, a))). While Ke et al. (2019) do an excellent job of motivating the use of f -divergences for IL (by formalizing the relationship between divergences over trajectory distributions vs. state-action distributions) and connecting f -VIM to existing imitation-learning algorithms, their experiments focus on smaller problems to study the mode-seeking/mode-covering aspects of different f -divergences and the implications of such behavior depending on the multimodality of the expert trajectory distribution. Meanwhile, in the course of attempting to apply f -VIM to large-scale imitation problems, we empirically observe numerical instabilities stemming from function approximation, demanding a reformulation of the framework. 4.2 REPARAMETERIZING f -VIM In their presentation of the f -VIM framework, Ke et al. (2019) retain the choices for activation function gf introduced by Nowozin et al. (2016) for f -GANs. Recall that these choices of gf play a critical role in defining the reward function optimized by the imitation policy on each iteration of f -VIM, r(s, a, s′) = f∗(gf (Vω(s, a))). It is well known in the reinforcement-learning literature that the nature of the rewards provided to an agent have strong implications on learning success and efficiency (Ng et al., 1999; Singh et al., 2010). While the activation choices made for f -GANs are suitable given that both optimization problems are carried out by backpropagation, we assert that special care must be taken when specifying these activations (and implicitly, the reward function) for imitation-learning algorithms. A combination of convex conjugate and activation function could induce a reward function that engenders numerical instability or a simply challenging reward landscape, depending on the underlying policy-gradient algorithm utilized (Henderson et al., 2018). Empirically, we found that the particular activation choices for the KL and reverse KL divergences shown in Table 1 (linear and exponential, respectively) produced imitation-learning algorithms that, in all of our evaluation environments, failed to complete execution due to numerical instabilities caused by exploding policy gradients. In the case of the Total Variation distance, the corresponding f -GAN activation for the variational function is a tanh, requiring a learning agent to traverse a reward interval of [−1, 1] by crossing an intermediate region with reward signals centered around 0. To refactor the f -VIM framework so that it more clearly exposes the choice of reward function to practictioners and shifts the issues of reward scale away from the imitation policy, we propose uniformly applying an activation function gf (v) = f∗−1(r(v)) where f∗−1(t) denotes the inverse of the convex conjugate (see Table 1). Here r is effectively a free parameter that can be set according to one of the many heuristics used throughout the field of deep reinforcement learning for maintaining a reasonable reward scale (Mnih et al., 2015; 2016; Henderson et al., 2018) so long as it obeys the domain of the inverse conjugate domf∗−1 . In selecting gf accordingly, the reparameterized saddlepoint optimization for f -VIM becomes min θ max ω E(s,a)∼ρπ∗ [f∗−1(r(Vω(s, a)))]− E(s,a)∼ρπθ [r(Vω(s, a))] (8) where the per-timestep rewards used during policy optimization are given by r(s, a, s′) = r(Vω(s, a)). 
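For reference, the sketch below collects the inverse convex conjugates f*⁻¹ implied by the standard f-GAN conjugates for the divergences considered in this work; these closed forms are our own derivation and should be checked against Table 1. They make explicit which range a bounded reward map r must respect in the reparameterized objective of Equation 8.

```python
import math

# Inverse convex conjugates f*^{-1} used by the reparameterized objective (Eq. 8),
# derived from the standard f-GAN conjugates; a sketch, not a definitive listing.
def inv_conjugate_kl(y):      # f*(t) = exp(t - 1)        => requires y > 0
    return 1.0 + math.log(y)

def inv_conjugate_rkl(y):     # f*(t) = -1 - log(-t)      => any real y
    return -math.exp(-1.0 - y)

def inv_conjugate_gan(y):     # f*(t) = -log(1 - exp(t))  => requires y > 0
    return math.log(1.0 - math.exp(-y))

def inv_conjugate_tv(y):      # f*(t) = t on |t| <= 1/2   => requires |y| <= 1/2
    assert abs(y) <= 0.5
    return y

# A bounded reward map r(v) taking values in (0, 1) (e.g., a sigmoid, halved for
# Total Variation) therefore keeps f*^{-1}(r(v)) well defined in all cases above.
```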
In applying this choice, we shift the undesirable scale of the latter term in VDM towards the discriminator, expecting it to be indifferent since training is done by backpropagation. As one potential instantiation, we consider r(u) = σ(u) where σ(·) denotes the sigmoid function leading to bounded rewards in the interval [0, 1] that conveniently adhere to domf∗−1 for almost all of the f -divergences examined in this work1. In Section 5, we evaluate imitation-learning algorithms with this choice against those using f -VIM with the original f -GAN activations; we find that, without regard for the scale of rewards and the underlying reinforcement-learning problem being solved, the f -GAN activation choices either produce degenerate solutions or completely fail to produce an imitation policy altogether. 4.3 f -DIVERGENCES AND IMITATION FROM OBSERVATION Applying the variational lower bound of Nguyen et al. (2010) and the corresponding f -GAN extension, we can now present our Variational Imitation from Observation (f -VIMO) extension for a general family of ILfO algorithms that leverage the VDM principle in the underlying saddle-point optimization. Since optimization of the generator will continue to be carried out by policy-gradient reinforcement learning, we adhere to our reparameterization of the f -VIM framework and present the f -VIMO objective as: min θ max ω E(s,s′)∼ρπ∗ [f∗−1(r(Vω(s, s′)))]− E(s,s′)∼ρπθ [r(Vω(s, s′))] (9) with the per-timestep rewards given according to r(s, a, s′) = r(Vω(s, s′)). We present the full approach as Algorithm 1. Just as in Section 4.2, we again call attention to Line 5 where the discriminator outputs (acting as individual reward signals) scale the policy gradient, unlike the more conventional discriminator optimization of Line 4 by backpropagation; this key difference is the primary motivator for our specific reparameterization of the f -VIM framework. Just as in the previous section, we take r(u) = σ(u) as a particularly convenient choice of activation given its agreement to the inverse conjugate domains domf∗−1 for many choices of f -divergence and we employ this instantiation throughout all of our experiments. We leave the examination of alternative choices for r to future work. Algorithm 1 f -VIMO 1: INPUT: Dataset of expert trajectories D, initial policy and discriminator parameters θ0 and ω0, number of iterations N , discount factor γ 2: for i = 0, 1, . . . , N do 3: Sample trajectories from current imitation policy τi ∼ πθi 4: ωi+1 = ωi +∇ω ( E(s,s′)∼D[f∗−1(r(Vω(s, s′)))]− E(s,s′)∼τi [r(Vω(s, s′))] ) 5: Update θi to θi+1 via a policy-gradient update with rewards given by r(Vω(s, s′)): θi+1 = θi + E(s,a,s′)∼τi [ ∇θ log(πθi(a|s))Eτi [ ∞∑ t=1 γt−1r(Vω(st−1, st))|s0 = s, s1 = s′] ] 6: end for 4.4 DISCRIMINATOR REGULARIZATION The refactored version of f -VIM presented in Section 4.2 is fundamentally addressing instability issues that may occur on the generator side of adversarial training; in our experiments, we also examine the utility of regularizing the discriminator side of the optimization for improved stability. Following from a line of work examining the underlying mathematical properties of GAN optimization (Roth et al., 2017; 2018; Mescheder et al., 2018), we opt for the simple gradient-based regularization of Mescheder et al. 
(2018), which (for f-VIMO) augments the discriminator loss with the following regularization term:

R(ω) = (ψ/2) E_{(s,s′)∼ρ^{π∗}}[ ‖∇_ω f*⁻¹(r(V_ω(s, s′)))‖² ]   (10)

where ψ is a hyperparameter controlling the strength of the regularization. The form of this specific penalty follows from the analysis of Roth et al. (2017); intuitively, its purpose is to disincentivize the discriminator from producing a non-zero gradient that shifts away from the Nash equilibrium of the minimax optimization when presented with a generator that perfectly matches the true data distribution. While originally developed for traditional GANs and shown to empirically exhibit stronger convergence properties than Wasserstein GANs (Gulrajani et al., 2017), this effect is still desirable for the adversarial IL setting, where the reward function (discriminator) used for optimizing the imitation policy should stop changing once the expert state-transition distribution has been matched. In practice, we compare f-VIM and f-VIMO both with and without the use of this regularization term and find that R(ω) can improve the stability and convergence of f-VIMO across almost all domains.

1 For the Total Variation distance, we use r(u) = σ(u)/2 to remain within dom f*⁻¹.

5 EXPERIMENTS

We examine four instantiations of the f-VIM and f-VIMO frameworks (as presented in Sections 4.2 and 4.3) corresponding to imitation algorithms with the following choices of f-divergence: GAN, Kullback-Leibler, reverse KL, and Total Variation. We conduct our evaluation across four MuJoCo environments (Todorov et al., 2012) of varying difficulty: Ant, Hopper, HalfCheetah, and Walker (see the Appendix for more details on individual environments). The core questions we seek to answer through our empirical results are as follows:

1. What are the implications of the choice of activation for the variational function in f-VIM on imitation policy performance?
2. Do f-divergences act as a meaningful axis of variation for IL and ILfO algorithms?
3. What is the impact of discriminator regularization on the stability and convergence properties of f-VIM/f-VIMO?
4. How does the impact of different f-divergences vary with the amount of expert demonstration data provided?

To answer the first three questions, we report the average total reward achieved by the imitation policy throughout the course of learning, with rewards as defined by the corresponding OpenAI Gym environment (Brockman et al., 2016). Shading in all plots denotes 95% confidence intervals computed over 10 random trials with 10 random seeds. Expert demonstration datasets of 50 trajectories were collected from agents trained via Proximal Policy Optimization (PPO) (Schulman et al., 2017); 20 expert demonstrations were randomly subsampled at the start of learning and held fixed for the duration of the algorithm. We also utilize PPO as the underlying reinforcement-learning algorithm for training the imitation policy, with a clipping parameter of 0.2, advantage normalization, an entropy regularization coefficient of 1e−3, and the Adam optimizer (Kingma & Ba, 2014). Just as in Ho & Ermon (2016), we use a discount factor of γ = 0.995 and apply Generalized Advantage Estimation (Schulman et al., 2015) with parameter λ = 0.97. We run both f-VIM and f-VIMO for a total of 500 iterations, collecting 50000 environment samples per iteration. The policy and discriminator architectures are identical: two separate multi-layer perceptrons, each with two hidden layers of 100 units separated by tanh nonlinearities. 
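Below is a hedged PyTorch sketch of the training pieces just described: the two-hidden-layer, 100-unit tanh networks, the bounded sigmoid reward, and a discriminator loss combining the reparameterized objective (Equation 9) with a gradient penalty. The penalty is written here in the usual input-gradient (R1) form of Mescheder et al. (2018), which is one plausible reading of Equation 10; the coefficient, dimensions, and all names are illustrative rather than taken from the authors' code.

```python
import torch
import torch.nn as nn

def mlp(in_dim, out_dim, hidden=100):
    """Two hidden layers of 100 tanh units, as described in Section 5."""
    return nn.Sequential(nn.Linear(in_dim, hidden), nn.Tanh(),
                         nn.Linear(hidden, hidden), nn.Tanh(),
                         nn.Linear(hidden, out_dim))

state_dim = 11                          # e.g., Hopper-v2 observations (Appendix B.1)
disc = mlp(2 * state_dim, 1)            # V_w(s, s') for f-VIMO; use (s, a) for f-VIM

def reward(x):
    """Bounded per-timestep reward r(V_w(.)) = sigmoid(V_w(.)) in (0, 1)."""
    return torch.sigmoid(disc(x)).squeeze(-1)

def discriminator_loss(expert_x, policy_x, f_star_inv, psi=10.0):
    """Negative of the reparameterized objective (Eq. 9) plus an R1-style penalty
    on expert samples; f_star_inv is an elementwise torch function, e.g.
    lambda y: 1 + torch.log(y) for the KL divergence."""
    expert_x = expert_x.requires_grad_(True)
    expert_term = f_star_inv(reward(expert_x))
    policy_term = reward(policy_x)
    grads = torch.autograd.grad(expert_term.sum(), expert_x, create_graph=True)[0]
    penalty = 0.5 * psi * grads.pow(2).sum(dim=-1).mean()
    return -(expert_term.mean() - policy_term.mean()) + penalty
```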
A grid search was used for determining the initial learning rate, number of PPO epochs, and number of epochs used for discriminator training (please see the Appendix for more details) and we report results for the best hyperparameter settings. To address our final question, we take the best hyperparameter settings recovered when given 20 expert demonstrations and re-run all algorithms with {1, 5, 10, 15} expert demonstrations that are randomly sampled at the start of each random trial and held fixed for the duration of the algorithm. We then record the average return of the final imitation policy for each level of expert demonstration. 6 RESULTS & DISCUSSION To highlight the importance of carefully selecting the variational function activation gf and validate our modifications to the f -VIM framework, we present results in Figure 2 comparing to the original f -VIM framework of Ke et al. (2019) and its natural ILfO counterpart. Activation functions for the original methods are chosen according to the choices outlined in Ke et al. (2019); Nowozin et al. (2016). In our experiments using the KL and reverse KL divergences, we found that none of the trials reached completion due to numerical instabilities caused by exploding policy gradients. Consequently, we only present results for the Total Variation distance. We observe that under the original f -GAN activation selection, we fail to produce meaningful imitation policies with learning stagnating after 100 iterations or less. As previously mentioned, we suspect that this stems from the use of tanh with TV leading to a dissipating reward signal. We present results in Figure 3 to assess the utility of varying the choice of divergence in f -VIM and f -VIMO across each domain. In considering the impact of f -divergence choice, we find that most of the domains must be examined in isolation to observe a particular subset of f -divergences that stand out. In the IL setting, we find that varying the choice of f -divergence can yield different learning curves but, ultimately, produce near-optimal (if not optimal) imitation policies across all domains. In contrast, we find meaningful choices of f -divergence in the ILfO setting including {KL, TV} for Hopper, RKL for HalfCheetah, and {GAN, TV} for Walker. We note that the use of discriminator regularization per Mescheder et al. (2018) is crucial to achieving these performance gains, whereas the regularization generally fails to help performance in the IL setting. This finding is supportive of the logical intuition that ILfO poses a fundamentally more-challenging problem than standard IL. As a negative result, we find that the Ant domain (the most difficult environment with S ⊂ R111 and A ⊂ R8) still poses a challenge for ILfO algorithms across the board. More specifically, we observe that discriminator regularization hurts learning in both the IL and ILfO settings. While the choice of RKL does manage to produce a marginal improvement over GAIFO, the gap between existing stateof-the-art and expert performance remains unchanged. It is an open challenge for future work to either identify the techniques needed to achieve optimal imitation policies from observations only or characterize a fundamental performance gap when faced with sufficiently large observation spaces. In Figure 4, we vary the total number of expert demonstrations available during learning and observe that certain choices of f -divergences can be more robust in the face of less expert data, both in the IL and ILfO settings. 
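The evaluation protocol just described can be summarized in a few lines of Python; trainer() below is a placeholder for one full f-VIM/f-VIMO run returning the average final return, the grid mirrors the one reported in Appendix B.2, and everything else is illustrative.

```python
import itertools, random

grid = {"disc_lr": [1e-4, 1e-3], "ppo_epochs": [5, 10], "disc_epochs": [1, 5, 10]}

def best_setting(trainer, demos, n_trials=10):
    """Grid search over 10 trials with 20 demos fixed per trial; keep the best cell."""
    results = {}
    for setting in itertools.product(*grid.values()):
        cfg = dict(zip(grid.keys(), setting))
        scores = [trainer(cfg, random.sample(demos, 20), seed=s) for s in range(n_trials)]
        results[setting] = sum(scores) / n_trials
    return max(results, key=results.get)

# Sample-complexity study: reuse the best setting with fewer demonstrations, e.g.
# for k in [1, 5, 10, 15]: trainer(best_cfg, random.sample(demos, k), seed=...)
```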
We find that KL-VIM and TV-VIM are slightly more performant than GAIL when only provided with a single expert demonstration. Notably, in each domain we see that certain choices of divergence for f -VIMO do a better job of residing close to their f -VIM counterparts suggesting that future improvements may come from examining f -divergences in the small-data regime. This idea is further exemplified when accounting for results collected while using discriminator regularization (Mescheder et al., 2018). We refer readers to the Appendix for the associated learning curves. Our work leaves many open directions for future work to close the performance gap between student and expert policies in the ILfO setting. While we found the sigmoid function to be a suitable instantiation of our framework, exploring alternative choices of variational function activations could prove useful in synthesizing performant ILfO algorithms. Alternative choices of f -divergences could lead to more substantial improvements than the choices we examine in this paper. Moreover, while this work has a direct focus on f -divergences, Integral Probability Metrics (IPMs) (Müller, 1997; Gretton et al., 2012) represent a distinct but well-established family of divergences between probability distributions. The success of Total Variation distance in our experiments, which doubles as both a f - divergence and IPM (Sriperumbudur et al., 2009), is suggestive of future work building IPM-based ILfO algorithms (Sun et al., 2019). 7 CONCLUSION In this work, we present a general framework for imitation learning and imitation learning from observations under arbitrary choices of f -divergence. We empirically validate a single instantiation of our framework across multiple f -divergences, demonstrating that we overcome the shortcomings of prior work and offer a wide class of IL and ILfO algorithms capable of scaling to larger problems. A RELATED WORK A.1 LEARNING FROM DEMONSTRATION Our work broadly falls within the category of Learning from Demonstration (LfD) (Schaal, 1997; Atkeson & Schaal, 1997; Argall et al., 2009), where an agent must leverage demonstration data (typically provided as trajectories, each consisting of expert state-action pairs) to produce an imitation policy that correctly captures the demonstrated behavior. Within the context of LfD, a finer distinction can be made between behavioral cloning (BC) (Bain & Sommut, 1999; Pomerleau, 1989) and inverse reinforcement learning (IRL) (Ng et al.; Abbeel & Ng, 2004; Syed & Schapire, 2007; Ziebart et al., 2008; Finn et al., 2016; Ho & Ermon, 2016) approaches; BC approaches view the demonstration data as a standard dataset of input-output pairs and apply traditional supervisedlearning techniques to recover an imitation policy. Alternatively, IRL-based methods synthesize an estimate of the reward function used to train the expert policy before subsequently applying a reinforcement-learning algorithm (Sutton & Barto, 1998; Abbeel & Ng, 2004) to recover the corresponding imitation policy. Although not a focus of this work, we also acknowledge the myriad of approaches that operate at the intersection of IL and reinforcement learning or augment reinforcement learning with IL (Rajeswaran et al., 2017; Hester et al., 2018; Salimans & Chen, 2018; Sun et al., 2018; Borsa et al., 2019; Tirumala et al., 2019). 
While BC approaches have been successful in some settings (Niekum et al., 2015; Giusti et al., 2016; Bojarski et al., 2016), they are also susceptible to failures stemming from covariate shift where minute errors in the actions of the imitation policy compound and force the agent into regions of the state space not captured in the original demonstration data. While some preventative measures for covariate shift do exist (Laskey et al., 2017b), a more principled solution can be found in methods like DAgger (Ross et al., 2011) and its descendants (Ross & Bagnell, 2014; Sun et al., 2017; Le et al., 2018) that remedy covariate shift by querying an expert to provide on-policy action labels. It is worth noting, however, that these approaches are only feasible in settings that admit such online interaction with an expert (Laskey et al., 2016) and, even then, failure modes leading to poor imitation policies do exist (Laskey et al., 2017a). The algorithms presented in this work fall in with IRL-based approaches to IL. Early successes in this regime tend to rely on hand-engineered feature representations for success (Abbeel & Ng, 2004; Ziebart et al., 2008; Levine et al., 2011). Only in recent years, with the aid of deep neural networks, has there been a surge in the number of approaches that are capable of scaling to the raw, highdimensional observations found in real-world control problems (Finn et al., 2016; Ho & Ermon, 2016; Duan et al., 2017; Li et al., 2017; Fu et al., 2017; Kim & Park, 2018). Our work focuses attention exclusively on adversarial methods for their widespread effectiveness across a range of imitation tasks without requiring interactive experts (Ho & Ermon, 2016; Li et al., 2017; Fu et al., 2017; Kostrikov et al., 2018); at the heart of these methods is the Generative Adversarial Imitation Learning (GAIL) (Ho & Ermon, 2016) approach which produces high-fidelity imitation policies and achieves state-of-the-art results across numerous continuous-control benchmarks by leveraging the expressive power of Generative Adversarial Networks (GANs) (Goodfellow et al., 2014) for modeling complex distributions over a high-dimensional support. From an IRL perspective, GAIL can be viewed as iteratively optimizing a parameterized reward function (discriminator) that, when used to optimize an imitation policy (generator) via policy-gradient reinforcement learning (Sutton et al., 2000), allows the agent to shift its own behavior closer to that of the expert. From the perspective of GANs, this is achieved by discriminating between the respective distributions over state-action pairs visited by the imitation and expert policies before training a generator to fool the discriminator and induce a state-action visitation distribution similar to that of the expert. While a large body of prior work exists for IL, numerous recent works have drawn attention to the more challenging problem of imitation learning from observation (Sermanet et al., 2017; Liu et al., 2018; Goo & Niekum, 2018; Kimura et al., 2018; Torabi et al., 2018a;b; Edwards et al., 2019; Sun et al., 2019). In an effort to more closely resemble observational learning in humans and leverage the wealth of publicly-available, observation-only data sources, the ILfO problem considers learning from expert demonstration data where no expert action labels are provided. 
Many early approaches to ILfO use expert observation sequences to learn a semantic embedding space so that distances between observation sequences of the imitation and expert policies can serve as a cost signal to be minimized via reinforcement learning (Gupta et al., 2017; Sermanet et al., 2017; Dwibedi et al., 2018; Liu et al., 2018). In contrast, Torabi et al. (2018a) introduce Behavioral Cloning from Observation (BCO) which leverages state-action trajectories collected under a random policy to train an inverse dynamics model for inferring the action responsible for a transition between two input states (assuming the two represent a state and next-state pair). With this inverse model in hand, the observation-only demonstration data can be converted into the more traditional dataset of stateaction pairs over which standard BC can be applied. Recognizing the previously discussed limitations of BC approaches, Torabi et al. (2018b) introduce the natural GAIL counterpart for ILfO, Generative Adversarial Imitation from Observation (GAIFO); GAIFO is identical to GAIL except the distributions under consideration in the adversarial game are over state transitions (state and next-state pairs), as opposed to state-action pairs requiring expert action labels. While Torabi et al. (2018b) offer empirical results for continuous-control tasks with low-dimensional features as well as raw image observations, GAIFO falls short of expert performance in both settings leaving an open challenge for scalable ILfO algorithms that achieve expert performance across a wide spectrum of tasks. A central question of this work is to explore how alternative formulations of the GAN objective that underlies GAIFO might yield superior ILfO algorithms. For a more in-depth survey of ILfO approaches, we refer readers to Torabi et al. (2019). A.2 GENERATIVE ADVERSARIAL NETWORKS With a focus on generative-adversarial methods for IL, this work leverages several related ideas in the GAN literature for offering alternative formulations as well as improving understanding of their underlying mathematical foundations (Li et al., 2015; Dziugaite et al., 2015; Zhao et al., 2016; Nowozin et al., 2016; Roth et al., 2017; Arjovsky et al., 2017; Gulrajani et al., 2017; Roth et al., 2018; Mescheder et al., 2018). Critical to the ideas presented in many of these previous works is an understanding that discriminator networks are estimating a divergence between two probability distributions of interest, usually taken to be the real data distribution and the fake or synthetic distribution represented by the generator. Formal characterizations of this divergence, either by Integral Probability Metrics (IPMs) (Müller, 1997; Gretton et al., 2012) or f -divergences (Ali & Silvey, 1966; Csiszár et al., 2004; Liese & Vajda, 2006), yield different variations on the classic GAN formulation which is itself a slight variation on the Jensen-Shannon (JS) divergence (Li et al., 2015; Dziugaite et al., 2015; Zhao et al., 2016; Nowozin et al., 2016; Arjovsky et al., 2017; Gulrajani et al., 2017). Following from work by Nowozin et al. (2016) to generalize the GAN objective to arbitrary f -divergences, Ke et al. (2019) offer a generalization of GAIL to an arbitrary choice of f -divergence for quantifying the gap between the state-action visitation distributions of the imitation and expert policies; moreover, Ke et al. 
(2019) propose a unifying framework for IL, f-Variational IMitation (f-VIM), in which they highlight a correspondence between particular choices of f-divergences and existing IL algorithms (specifically BC ⟺ Kullback-Leibler (KL) divergence, DAgger ⟺ Total Variation distance, and GAIL ⟺ JS-divergence²). While Ke et al. (2019) focus on providing empirical results in smaller toy problems to better understand the interplay between f-divergence choice and the multimodality of the expert trajectory distribution, we provide an empirical evaluation of their f-VIM framework across a range of continuous-control tasks in the MuJoCo domain (Todorov et al., 2012). Empirically, we find that some of the design choices f-VIM inherits from the original f-GAN work (Nowozin et al., 2016) are problematic when coupled with adversarial IL and training of the generator by policy-gradient reinforcement learning, instead of via direct backpropagation as in traditional GANs. Consequently, we refactor their framework to expose this point and provide one practical instantiation that works well empirically. We then go on to extend the f-VIM framework to the ILfO problem (f-VIMO) and evaluate the resulting algorithms empirically against the state-of-the-art, GAIFO.

2 The discriminator loss optimized in the original GAN formulation is 2·D_JS − log(4), where D_JS denotes the Jensen-Shannon divergence (Goodfellow et al., 2014; Nowozin et al., 2016).

B EXPERIMENT DETAILS

Here we provide details of the MuJoCo environments (Todorov et al., 2012) used in our experiments as well as the details of the hyperparameter search conducted for all algorithms (IL and ILfO) presented.

B.1 MUJOCO ENVIRONMENTS

All environments have continuous observation and action spaces of varying dimensionality (as shown below). All algorithms evaluated in each environment were trained for a total of 500 iterations, collecting 50,000 environment transitions per iteration.

Task | Observation Space | Action Space
Ant-v2 | R^111 | R^8
Hopper-v2 | R^11 | R^3
HalfCheetah-v2 | R^17 | R^6
Walker2d-v2 | R^17 | R^6

B.2 HYPERPARAMETERS

Below we outline the full set of hyperparameters examined for all experiments presented in this work. We conducted a full grid search over 10 random trials with 10 random seeds and report results for the best hyperparameter setting.

Hyperparameter | Values
Discriminator learning rate | {1e−4, 1e−3}
PPO epochs | {5, 10}
Discriminator epochs | {1, 5, 10}

Preliminary experiments were conducted to test smaller values for PPO epochs and policy learning rates before settling on the grid shown above.

C ADDITIONAL RESULTS

C.1 UNREGULARIZED f-VIM/VIMO

C.2 SAMPLE COMPLEXITY LEARNING CURVES

C.3 f-DIVERGENCE VARIATIONAL BOUND SWAP

Throughout this paper, we advocate for the use of the following variational lower bound to the f-divergence for both f-VIM and f-VIMO:

D_f(ρ^{π∗} || ρ^{π_θ}) ≥ min_θ max_ω E_{(s,s′)∼ρ^{π∗}}[f*⁻¹(r(V_ω(s, s′)))] − E_{(s,s′)∼ρ^{π_θ}}[r(V_ω(s, s′))]   (11)

In particular, we value the above form as it clearly exposes the choice of reward function for the imitation policy as a free parameter that, in practice, has strong implications for the stability and convergence of adversarial IL/ILfO algorithms. Alternatively, one may consider appealing to the original lower bound of Nguyen et al. 
(2010), used in f-GANs (Nowozin et al., 2016) unmodified, but swapping the positions of the two distributions:

D_f(ρ^{π_θ} || ρ^{π∗}) ≥ min_θ max_ω E_{(s,s′)∼ρ^{π_θ}}[g_f(V_ω(s, s′))] − E_{(s,s′)∼ρ^{π∗}}[f*(g_f(V_ω(s, s′)))]   (12)

Consequently, the term in this lower bound pertaining to the imitation policy is now similar to that of the bound in Equation 11; namely, an almost arbitrary activation function, g_f, applied to the output of the variational function (discriminator) V_ω. The difference is that the codomain of g_f must obey the domain of the convex conjugate, f*, while the codomain of r must respect the domain of the inverse convex conjugate, f*⁻¹. We evaluate these two choices empirically below for the specific choice of the KL-divergence in the Ant and Hopper domains (the two most difficult domains of our evaluation). We find that the original, unswapped bound in Equation 11 used throughout this paper outperforms the variant with the distributions swapped, for both the IL and ILfO settings. Crucially, we find that KL-VIM in the Ant domain no longer achieves expert performance when optimizing the swapped bound.
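For concreteness, the two objectives being compared can be written side by side. The sketch below instantiates them for the KL divergence using the standard conjugate f*(t) = exp(t − 1); the discriminator V is passed in, and all names are illustrative.

```python
import torch

# Side-by-side sketch of the discriminator objectives of Eq. 11 and Eq. 12 (KL case).
f_star     = lambda t: torch.exp(t - 1.0)   # convex conjugate for KL
f_star_inv = lambda y: 1.0 + torch.log(y)   # its inverse (requires y > 0)
g_f        = lambda v: v                    # linear activation (KL row of Table 1)
r          = torch.sigmoid                  # bounded reward map used by Eq. 11

def unswapped_objective(V, expert_x, policy_x):   # Eq. 11, used throughout the paper
    return f_star_inv(r(V(expert_x))).mean() - r(V(policy_x)).mean()

def swapped_objective(V, expert_x, policy_x):     # Eq. 12, distributions swapped
    return g_f(V(policy_x)).mean() - f_star(g_f(V(expert_x))).mean()
```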
1. What is the focus of the paper, and how does it build upon previous works in the field? 2. What are the strengths and weaknesses of the proposed approach, particularly regarding its stability and effectiveness in imitation learning? 3. Do you have concerns about the novelty of the paper's contributions, especially compared to prior works like GAILFO and WAIL? 4. How do the experimental results support the claims made in the paper, and what are some potential limitations or areas for further investigation? 5. Are there any minor issues or suggestions you have for improving the clarity or readability of the paper?
Review
This paper proposes the application of the f-VIM framework (Ke et al., 2019) to the problem of imitation learning from observations (no expert actions). The authors first identify a potential source of numerical instability in the application of f-VIM to imitation learning: the rewards for policy-gradient RL are given by a combination of a convex conjugate and an activation function. To alleviate this, f-VIM is reparameterized by choosing the activation via the inverse of the convex conjugate (Equation 8), yielding a potentially more stable reward for deep RL.

I have the following concerns about the paper:

1. Lack of novelty – Although I appreciate the reparameterization applied to f-VIM to make it potentially more stable for imitation learning in large state and action spaces, I don't think that by itself meets the bar for ICLR. Algorithm 1 is basically the GAIfO algorithm (Torabi et al., 2018) written in the f-VIM framework, with the proposed reparameterization. The discriminator regularization (Section 4.4) has been used before.

2. Experiments – Figure 2 shows the improvement with TV when using the reparameterization, and the authors mention in the text the difficulty with KL and reverse KL. What about the JS divergence (GAIL)? Does the reparameterization help or affect that?

3. In Figure 3, is GAIL from the original paper, or does it use the sigmoid rewards? Figure 3 does not offer any evidence that the proposed methods lead to algorithms that should be preferred over the current state of the art in imitation learning with divergence minimization, such as GAIL and WAIL.

Minor comment: In Table 1, GAN is not a divergence. Please use Jensen-Shannon, with the corresponding tweaks to the columns.
ICLR
Title On the Convergence of Gradient Flow on Multi-layer Linear Models Abstract In this paper, we analyze the convergence of gradient flow on a multi-layer linear model with a loss function of the form f(W1W2 · · ·WL). We show that when f satisfies the gradient dominance property, proper weight initialization leads to exponential convergence of the gradient flow to a global minimum of the loss. Moreover, the convergence rate depends on two trajectory-specific quantities that are controlled by the weight initialization: the imbalance matrices, which measure the difference between the weights of adjacent layers, and the least singular value of the weight product W = W1W2 · · ·WL. Our analysis provides improved rate bounds for several multi-layer network models studied in the literature, leading to novel characterizations of the effect of weight imbalance on the rate of convergence. Our results apply to most regression losses and extend to classification ones. 1 INTRODUCTION The mysterious ability of gradient-based optimization algorithms to solve the non-convex neural network training problem is one of the many unexplained puzzles behind the success of deep learning in various applications (Krizhevsky et al., 2012; Hinton et al., 2012; Silver et al., 2016). A vast body of work has tried to theoretically understand this phenomenon by analyzing either the loss landscape or the dynamics of the training parameters. The landscape-based analysis is motivated by the empirical observation that deep neural networks used in practice often have a benign landscape (Li et al., 2018a), which can facilitate convergence. Existing theoretical analysis (Lee et al., 2016; Sun et al., 2015; Jin et al., 2017) shows that gradient descent converges when the loss function satisfies the following properties: 1) all of its local minimums are global minima; and 2) every saddle point has a Hessian with at least one strict negative eigenvalue. Prior work suggests that the matrix factorization model (Ge et al., 2017), shallow networks (Kawaguchi, 2016), and certain positively homogeneous networks (Haeffele & Vidal, 2015; 2017) have such a landscape property, but unfortunately condition 2) does not hold for networks with multiple hidden layers (Kawaguchi, 2016). Moreover, the landscape-based analysis generally fails to provide a good characterization of the convergence rate, except for a local rate around the equilibrium (Lee et al., 2016; Ge et al., 2017). In fact, during early stages of training, gradient descent could take exponential time to escape some saddle points if not initialized properly (Du et al., 2017). The trajectory-based analyses study the training dynamics of the weights given a specific initialization. For example, the case of small initialization has been studied for various models (Arora et al., 2019a; Gidel et al., 2019; Li et al., 2018b; Stöger & Soltanolkotabi, 2021; Li et al., 2021b;a). Under this type of initialization, the trained model is implicitly biased towards low-rank (Arora et al., 2019a; Gidel et al., 2019; Li et al., 2018b; Stöger & Soltanolkotabi, 2021; Li et al., 2021b), and sparse (Li et al., 2021a) models. While the analysis for small initialization gives rich insights on the generalization of neural networks, the number of iterations required for gradient descent to find a good model often increases as the initialization scale decreases. 
Such dependence proves to be logarithmic on the scale for symmetric matrix factorization model (Li et al., 2018b; Stöger & Soltanolkotabi, 2021; Li et al., 2021b), but for deep networks, existing analysis at best shows a polynomial dependency (Li et al., 2021a). Therefore, the analysis for small initialization, while insightful in understanding the implicit bias of neural network training, is not suitable for understanding the training efficiency in practice since small initialization is rarely implemented due to its slow convergence. Another line of work studies the initialization in the kernel regime, where a randomly initialized sufficiently wide neural network can be well approximated by its linearization at initialization Jacot et al. (2018); Chizat et al. (2019); Arora et al. (2019b). In this regime, gradient descent enjoys a linear rate of convergence toward the global minimum (Du et al., 2019; Allen-Zhu et al., 2019; Du & Hu, 2019). However, the width requirement in the analysis is often unrealistic, and empirical evidence has shown that practical neural networks generally do not operate in the kernel regime (Chizat et al., 2019). The study of non-small, non-kernel-regime initialization has been mostly centered around linear models. For matrix factorization models, spectral initialization (Saxe et al., 2014; Gidel et al., 2019; Tarmoun et al., 2021) allows for decoupling the training dynamics into several scalar dynamics. For non-spectral initialization, the notion of weight imbalance, a quantity that depends on the differences between the weights matrices of adjacent layers, is crucial in most analyses. When the initialization is balanced, i.e., when the imbalance matrices are zero, the convergence relies on the initial end-to-end linear model being close to its optimum (Arora et al., 2018a;b). It has been shown that having a non-zero imbalance potentially improves the convergence rate (Tarmoun et al., 2021; Min et al., 2021), but the analysis only works for two-layer models. For deep linear networks, the effect of weight imbalance on the convergence has been only studied in the case when all imbalance matrices are positive semi-definite (Yun et al., 2020), which is often unrealistic in practice. Lastly, most of the aforementioned analyses study the l2 loss for regression tasks, and it remains unknown whether they can be generalized to other types of losses commonly used in classification tasks. Our contribution: This paper aims to provide a general framework for analyzing the convergence of gradient flow on multi-layer linear models. We consider the gradient flow on a loss function of the form L = f(W1W2 · · ·WL), where f satisfies the gradient dominance property. We show that with proper initialization, the loss converges to its global minimum exponentially. More specifically: • Our analysis shows that the convergence rate depends on two trajectory-specific quantities: 1) the imbalance matrices, which measure the difference between the weights of adjacent layers, and 2) a lower bound on the least singular values of weight product W = W1W2 · · ·WL. The former is time-invariant under gradient flow, thus it is fully determined by the initialization, while the latter can be controlled by initializing the product sufficiently close to its optimum. 
• Our analysis covers most initialization schemes used in prior work (Saxe et al., 2014; Tarmoun et al., 2021; Arora et al., 2018a;b; Min et al., 2021; Yun et al., 2020) for both multi-layer linear networks and diagonal linear networks while providing convergence guarantees for a wider range of initializations. Furthermore, our rate bounds characterize the general effect of weight imbalance on convergence. • Our convergence results directly apply to loss functions commonly used in regression tasks, and can be extended to loss functions used in classification tasks with an alternative assumption on f , under which we show O(1/t) convergence of the loss. Notations: For an n×m matrix A, we let AT denote the matrix transpose of A, σi(A) denote its i-th singular value in decreasing order and we conveniently write σmin(A) = σmin{n,m}(A) and let σk(A) = 0 if k > min{n,m}. We also let ∥A∥2 = σ1(A) and ∥A∥F = √ tr(ATA). For a square matrix of size n, we let tr(A) denote its trace and we let diag{ai}ni=1 be a diagonal matrix with ai specifying its i-th diagonal entry. For a Hermitian matrix A of size n, we let λi(A) denote its i-th eigenvalue and we write A ⪰ 0 (A ⪯ 0) when A is positive semi-definite (negative semi-definite). For two square matrices A,B of the same size, we let ⟨A,B⟩F = tr(ATB). For a scalar-valued or matrix-valued function of time, F (t), we write Ḟ , Ḟ (t) or ddtF (t) for its time derivative. Additionally, we use In to denote the identity matrix of order n and O(n) to denote the set of n× n orthogonal matrices. Lastly, we use [·]+ := max{·, 0}. 2 OVERVIEW OF THE ANALYSIS This paper considers the problem of finding a matrix W that solves min W∈Rn×m f(W ) , (1) with the following assumption on f . Assumption 1. The function f is differentiable and satisfies1: A1: f satisfies the Polyak-Łojasiewicz (PL) condition, i.e. ∥∇f(W )∥2F ≥ γ(f(W ) − f∗),∀W . This condition is also known as gradient dominance. A2: f is K-smooth, i.e., ∥∇f(W ) − ∇f(V )∥F ≤ K∥W − V ∥F ,∀W,V , and f is µ-strongly convex, i.e., f(W ) ≥ f(V ) + ⟨∇f(V ),W − V ⟩F + µ 2 ∥W − V ∥ 2 F ,∀W,V . While classic work (Polyak, 1987) has shown that the gradient descent update on W with proper step size ensures a linear rate of convergence of f(W ) towards its optimal value f∗, the recent surge of research on the convergence and implicit bias of gradient-based methods for deep neural networks has led to a great amount of work on the overparametrized problem: min {Wl}Ll=1 L ( {Wl}Ll=1 ) = f(W1W2 · · ·WL) , (2) where L ≥ 2, Wl ∈ Rhl−1×hl , i = 1, · · · , L, with h0 = n, hL = m and min{h1, · · · , hL−1} ≥ min{n,m}. This assumption on min{h1, · · · , hL−1} is necessary to ensure that the optimal value of (2) is also f∗, and in this case, the product ∏L l=1 Wl can represent an overparametrized linear network/model (Arora et al., 2018b; Tarmoun et al., 2021; Min et al., 2021) 2.1 CONVERGENCE VIA GRADIENT DOMINANCE For problem (2), consider the gradient flow dynamics on the loss function L ( {Wl}Ll=1 ) : Ẇl = − ∂ ∂Wl L ( {Wl}Ll=1 ) , l = 1, · · · , L . (3) The gradient flow dynamics can be viewed as gradient descent with “infinitesimal” step size and convergence results for gradient flow can be used to understand the corresponding gradient descent algorithm with sufficiently small step size (Elkabetz & Cohen, 2021). We have the following result regarding the time-derivative of L under gradient flow (3). Lemma 1. 
Under continuous dynamics in (3), we have L̇ = −∥∇L({Wl}Ll=1)∥_F^2 = −⟨T{Wl}Ll=1 ∇f(W), ∇f(W)⟩_F, (4) where W = ∏_{l=1}^{L} Wl, and T{Wl}Ll=1 is the following positive semi-definite linear operator on R^{n×m}: T{Wl}Ll=1 E = ∑_{l=1}^{L} (∏_{i=0}^{l−1} Wi)(∏_{i=0}^{l−1} Wi)^T E (∏_{i=l+1}^{L+1} Wi)^T (∏_{i=l+1}^{L+1} Wi), with W0 = In, WL+1 = Im. Such an expression of ∥∇L∥_F^2 has been studied in Arora et al. (2018b), and we include a proof in Appendix C for completeness. Our convergence analysis is as follows. For this overparameterized problem, the minimum L∗ of (2) is f∗. Then from Lemma 1 and Assumption A1, we have L̇ = −⟨T{Wl}Ll=1 ∇f(W), ∇f(W)⟩_F ≤ −λmin(T{Wl}Ll=1)∥∇f(W)∥_F^2 (min-max theorem (Teschl, 2014)) ≤ −λmin(T{Wl}Ll=1)γ(f(W) − f∗) = −λmin(T{Wl}Ll=1)γ(L − L∗), (5) where the last inequality uses A1. If we can find a lower bound α > 0 such that λmin(T{Wl(t)}Ll=1) ≥ α, ∀t ≥ 0, then the inequality d/dt (L − L∗) ≤ −αγ(L − L∗) holds on the entire training trajectory. Therefore, by Grönwall’s inequality (Grönwall, 1919), the loss function L converges exponentially to its minimum, i.e., L(t) − L∗ ≤ exp(−αγt)(L(0) − L∗), ∀t ≥ 0. (6) (Footnote 1: A2 assumes µ-strong convexity, which implies A1 with γ = 2µ. However, we list A1 and A2 separately since they have different roles in our analysis.) Therefore, to show exponential convergence of the loss, we need to lower bound λmin(T{Wl(t)}Ll=1). Most existing work on the convergence of gradient flow/descent on linear networks implicitly provides such a lower bound, given additional assumptions on the initialization {Wl(0)}Ll=1, though not presented with such generality. We revisit previous analyses to see how such a bound can be obtained for two-layer linear networks, then present our new results regarding deep linear networks. 3 LESSONS FROM TWO-LAYER LINEAR MODELS In this section, we revisit prior work through the lens of our general convergence analysis in Section 2.1. A lower bound on λmin(T{Wl(t)}Ll=1) can be obtained from the training invariance of the gradient flow. We first consider the following imbalance matrices: Dl := Wl^T Wl − Wl+1 Wl+1^T, l = 1, · · · , L − 1. (7) For such imbalance matrices, we have Lemma 2. Under the continuous dynamics (3), we have Ḋl(t) = 0, ∀t ≥ 0, l = 1, · · · , L − 1. Such invariance of the weight imbalance has been studied in most work on linear networks (Arora et al., 2018a; Du et al., 2018; Yun et al., 2020). We include the proof in Appendix C for completeness. Since the imbalance matrices {Dl}_{l=1}^{L−1} are fixed at their initial values, any point {Wl(t)}Ll=1 on the training trajectory must satisfy the imbalance constraints Wl(t)^T Wl(t) − Wl+1(t) Wl+1(t)^T = Dl(0), l = 1, · · · , L − 1. Previous work has shown that enforcing certain non-zero imbalance at initialization leads to exponential convergence of the loss for two-layer networks (Tarmoun et al., 2021; Min et al., 2021) and for deep networks (Yun et al., 2020). Another line of work (Arora et al., 2018a;b) has shown that balanced initialization (Dl = 0, ∀l) yields exactly λmin(T{Wl(t)}Ll=1) = Lσmin(W(t))^{2−2/L}, where W(t) = ∏_{l=1}^{L} Wl(t). This suggests that the bound on λmin(T{Wl(t)}Ll=1) we are looking for should potentially depend on both the weight imbalance matrices and the weight product matrix. Indeed, for two-layer models, a re-statement of the results in Min et al. (2022) provides a lower bound on λmin(T{W1,W2}) with knowledge of the imbalance and the product. Lemma 3 (re-stated from Min et al. (2022)). 
When L = 2, given weights {W1,W2} with imbalance matrix D = WT1 W1 −W2WT2 and product W = W1W2, define ∆+=[λ1(D)]+−[λn(D)]+ ,∆−=[λ1(−D)]+−[λm(−D)]+ ,∆=[λn(D)]++[λm(−D)]+ . (8) Then for the linear operator T{W1,W2} defined in Lemma 1, we have λmin ( T{W1,W2} ) ≥ 1 2 ( −∆+ + √ (∆+ +∆)2 + 4σ2n (W )−∆− + √ (∆− +∆)2 + 4σ2m (W ) ) . (9) Min et al. (2022) include a detailed discussion on the bound, including tightness. For our purpose, we note the following: Effect of imbalance: It follows from (9) that λmin ( T{W1,W2} ) ≥ ∆ since σmin(W ) ≥ 0. Therefore, ∆ is always a lower bound on the convergence rate. This means that, for most initializations, the fact that the imbalance matrices are bounded away from zero (characterized by ∆ > 0) is already sufficient for exponential convergence. Effect of product: The role of the product in (9) is more nuanced: Assume n = m for simplicity so that σn(WWT ) = σm(WTW ) = σ2min(W ). We see that the non-negative quantities ∆+,∆− control how much the product affects the convergence. More precisely, the lower bound in (9) is a decreasing function of both ∆+ and ∆−. When ∆+ = ∆− = 0, the lower bound reduces to√ ∆2 + 4σ2min(W ), showing a joint contribution to convergence from both imbalance and product. However, as ∆+,∆− increases, the bound decreases towards ∆, which means that the effect of 2In Min et al. (2022), there is no general idea of lower bounding λmin ( T{W1,W2} ) , but their analyses essentially provide such a bound. imbalance always exists, but the effect of the product diminishes for large ∆+,∆−. We note that ∆+,∆− measure how the eigenvalues of the imbalance matrix D are different in magnitude, i.e., how “ill-conditioned" the imbalance matrix is. Implication on convergence: Note that (9) is almost a lower bound for λmin ( T{W1(t),W2(t)} ) , t ≥ 0, as the imbalance matrix D is time-invariant (so are ∆+,∆−,∆), except the right-hand side of (9) also depends on σmin(W (t)). If f satisfies A2, then f has a unique minimizer W ∗. Moreover, one can show that given a initial product W (0), W (t) is constrained to lie within a closed ball{ W : ∥W −W ∗∥F ≤ √ K µ ∥W (0)−W ∗∥F } . That is, the product W (t) does not get too far away from W ∗ during training. We can use this to derive the following lower bound on σmin(W (t)): σmin(W (t)) ≥ [ σmin(W ∗)− √ K µ ∥W (0)−W ∗∥F ] + := margin (See Appendix A). (10) This margin term being positive guarantees that the closed ball excludes any W with σmin(W ) = 0. With this observation, we find a lower bound λmin ( T{W1(t),W2(t)} ) , t ≥ 0 that depends on both the weight imbalance and margin, and the exponential convergence of loss L follows: Theorem 1. Let D be the imbalance matrix for L = 2. The continuous dynamics in (3) satisfy L(t)− L∗ ≤ exp (−α2γt) (L(0)− L∗),∀t ≥ 0 , (11) where 1. If f satisfies only A1, then α2 = ∆ ; 2. If f satisfies both A1 and A2, then α2 = −∆+ + √ (∆+ +∆)2 + 4 ( [ σn (W ∗)− √ K/µ∥W (0)−W ∗∥F ] + )2 −∆− + √ (∆− +∆)2 + 4 ( [ σm (W ∗)− √ K/µ∥W (0)−W ∗∥F ] + )2 , (12) with W (0) = ∏L l=1 Wl(0) and W ∗ equal to the unique optimizer of f . Please see Appendix E for the proof. Theorem 1 is new as it generalizes the convergence result in Min et al. (2022) for two-layer linear networks, which is only for l2 loss in linear regression. Our result considers a general loss function defined by f , including the losses for matrix factorization (Arora et al., 2018a), linear regression (Min et al., 2022), and matrix sensing (Arora et al., 2019a). Additionally, Arora et al. 
(2018a) first introduced the notion of margin for f in matrix factorization problems (K = 1, µ = 1), and we extend it to any f that is smooth and strongly convex. Towards deep models: So far, we revisited prior results on two-layer networks, showing how λmin(TW1,W2) can be lower bounded by weight imbalance and product, from which the convergence result is derived. Can we generalize the analysis to deep networks? The main challenge is that even computing λmin(T{Wl}Ll=1) given the weights {Wl} L l=1 is complicated: For L = 2, λmin(TW1,W2) = λn(W1W T 1 ) + λm(W T 2 W2), but such nice relation does not exist for L > 3, which makes the search for a tight lower bound as in (9) potentially difficult. On the other hand, the findings in (9) shed light on what can be potentially shown for the deep layer case: 1. For two-layer networks, we always have the bound λmin ( T{W1,W2} ) ≥ ∆, which depends only on the imbalance. Can we find a lower bound on the convergence rate of a deep network that depends only on an imbalance quantity analogous to ∆? If yes, how does such a quantity depend on network depth? 2. For two-layer networks, the bound reduces to √ ∆2 + 4σ2min(W ) when the imbalance is “well- conditioned" (∆+,∆− are small). For deep networks, can we characterize such joint contribution from the imbalance and product, given a similar assumption? We will answer these questions as we present our convergence results for deep networks. 4 CONVERGENCE RESULTS FOR DEEP LINEAR MODELS 4.1 THREE-LAYER MODEL Beyond two-layer models, the convergence analysis for imbalanced networks not in the kernel regime has only been studied for specific initializations (Yun et al., 2020). In this section, we derive a novel rate bound for three-layer models that applies to a wide range of imbalanced initializations. For ease of presentation, we denote the two imbalance matrices for three-layer models, D1 and D2, as −D1 = W2WT2 −WT1 W1 := D21 , D2 = WT2 W2 −W3WT3 := D23. (13) Our lower bound on λmin ( T{W1,W2,W3} ) comes after a few definitions. Definition 1. Given two real symmetric matrices A,B of order n, we define the non-commutative binary operation ∧r as A∧rB := diag{min{λi(A), λi+1−r(B)}}ni=1 , where λj(·) = +∞,∀j ≤ 0. Definition 2. Given imbalance matrices (D21, D23) ∈ Rh1×h1 × Rh2×h2 , define D̄h1 =diag{max{λi(D21), λi(D23), 0}} h1 i=1, D̄h2 =diag{max{λi(D21), λi(D23), 0}} h2 i=1, (14) ∆21=tr(D̄h1)− tr(D̄h1 ∧n D21), ∆ (2) 21 =tr(D̄ 2 h1)− tr ( (D̄h1 ∧n D21 )2 ), (15) ∆23=tr(D̄h2)− tr(D̄h2 ∧m D23), ∆ (2) 23 =tr(D̄ 2 h2)− tr ( (D̄h2 ∧m D23 )2 ). (16) Theorem 2. When L = 3, given weights {W1,W2,W3} with imbalance matrices (D21, D23), then for the linear operator T{W1,W2,W3} defined in Lemma 1, we have λmin ( T{W1,W2,W3} ) ≥ 1 2 (∆ (2) 21 +∆ 2 21) + ∆21∆23 + 1 2 (∆ (2) 23 +∆ 2 23) (17) Proof Sketch. Generally, it is difficult to directly work on λmin ( T{W1,W2,W3} ) , and we use the lower bound λmin ( T{W1,W2,W3} ) ≥ λn(W1W2WT2 WT1 ) + λn(W1WT1 )λm(WT3 W3) + λm(W T 3 W T 2 W2W3). We show that given D21, D23, the optimal value of min W1,W2,W3 λn(W1W2W T 2 W T 1 ) + λn(W1W T 1 )λm(W T 3 W3) + λm(W T 3 W T 2 W2W3) (18) s.t. W2W T 2 −WT1 W1 = D21, WT2 W2 −W3WT3 = D23 is ∆∗(D21, D23) = 12 (∆ (2) 21 +∆ 2 21) + ∆21∆23 + 1 2 (∆ (2) 23 +∆ 2 23), the bound shown in (17). Please see Appendix F for the complete proof and a detailed discussion on the proof idea. With the theorem we immediately have the following corollary. Corollary 1. 
When L = 3, given an initialization with imbalance matrices (D21, D23) and f satisfying A1, the continuous dynamics in (3) satisfy L(t) − L∗ ≤ exp(−α3γt)(L(0) − L∗), ∀t ≥ 0, (19) where α3 = (1/2)(∆21^(2) + ∆21^2) + ∆21∆23 + (1/2)(∆23^(2) + ∆23^2). We make the following remarks regarding the contribution. Optimal bound via imbalance: First of all, as shown in the proof sketch, our bound should be considered as the best lower bound on λmin(T{W1(t),W2(t),W3(t)}) one can obtain given knowledge of the imbalance matrices D21 and D23 only. More importantly, this lower bound works for ANY initialization and plays the same role as ∆ does in two-layer linear networks, i.e., (17) quantifies the general effect of imbalance on the convergence. Finding an improved bound that also takes the effect of the product σmin(W) into account is an interesting future research direction. Implication on convergence: Corollary 1 shows exponential convergence of the loss L(t) if α3 > 0. While it is challenging to characterize all initializations such that α3 > 0, the case n = m = 1 is simpler: in this case, D̄h1 ∧1 D21 = D21 and D̄h2 ∧1 D23 = D23. Then we have ∆21 = tr(D̄h1) − tr(D21) = ∑_{i=1}^{h1} (λi(D̄h1) − λi(D21)) ≥ λh1(D̄h1) − λh1(D21) ≥ −λh1(D21), and similarly we have ∆23 ≥ −λh2(D23). Therefore, α3 ≥ ∆21∆23 ≥ λh1(D21)λh2(D23) > 0 when both D21 and D23 have negative eigenvalues, which is easy to satisfy as both D21 and D23 are given by the difference between two positive semi-definite matrices. This observation can be generalized to show that α3 > 0 when D21 has at least n negative eigenvalues and D23 has at least m negative eigenvalues. Moreover, we show that α3 > 0 under certain definiteness assumptions on D21 and D23; please refer to the remark after Theorem 3 in Section 4.2. A better characterization of the initializations that have α3 > 0 is an interesting future research topic. Technical contribution: The way we find the lower bound in (17) is by studying the generalized eigenvalue interlacing relation imposed by the imbalance constraints. Specifically, W2W2^T − W1^TW1 = D21 implies that λi+n(W2W2^T) ≤ λi(D21) ≤ λi(W2W2^T), ∀i, because W2W2^T − D21 is a matrix of rank at most n. We derive, from this interlacing relation, novel eigenvalue bounds (see Lemma F.6) on λn(W1^TW1) and λn(W1W2W2^TW1^T) that depend on the eigenvalues of both W2W2^T and D21. The eigenvalues of W2W2^T can in turn be controlled by the fact that W2 must satisfy both imbalance equations in (13). Since imbalance equations like those in (13) appear in deep networks and certain nonlinear networks (Du et al., 2018; Le & Jegelka, 2022), we believe our mathematical results are potentially useful for understanding those networks. Comparison with prior work: The convergence of multi-layer linear networks under balanced initialization (Dl = 0, ∀l) has been studied in Arora et al. (2018a;b), and our result is complementary as we study the effect of non-zero imbalance on the convergence of three-layer networks. Some settings with imbalanced weights have been studied: Yun et al. (2020) studies a special initialization scheme (Dl ⪰ 0, l = 1, · · · , L − 2, and DL−1 ⪰ λIhL−1) that forces a partial ordering of the weights, and Wu et al. (2019) uses a similar initialization to study linear residual networks. Our bound applies to such initializations and also shows that this partial ordering is not necessary for convergence. 4.2 DEEP LINEAR MODELS The lower bound we derived for three-layer networks applies to any initialization; the short numerical sketch below evaluates it for a random example. 
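To make the quantities in Definition 2 concrete, the following sketch (not part of the paper) numerically evaluates ∆21, ∆23, ∆21^(2), ∆23^(2) and the resulting α3 for a random three-layer initialization, and compares α3 against the eigenvalue sum λn(W1W2W2^TW1^T) + λn(W1W1^T)λm(W3^TW3) + λm(W3^TW2^TW2W3) used in the proof sketch, which itself lower-bounds λmin(T{W1,W2,W3}). The dimensions, the simplifying choice h1 = h2, and the random seed are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, h1, h2, m = 2, 4, 4, 2                      # illustrative sizes; h1 = h2 for simplicity
W1, W2, W3 = rng.normal(size=(n, h1)), rng.normal(size=(h1, h2)), rng.normal(size=(h2, m))

def eigs_desc(A):                              # eigenvalues of a symmetric matrix, decreasing order
    return np.sort(np.linalg.eigvalsh(A))[::-1]

D21 = W2 @ W2.T - W1.T @ W1                    # imbalance matrices as in (13)
D23 = W2.T @ W2 - W3 @ W3.T
l21, l23 = eigs_desc(D21), eigs_desc(D23)
dbar = np.maximum.reduce([l21, l23, np.zeros(h1)])   # diagonal of D-bar (Definition 2)

def wedge(a, b, r):
    # Definition 1: (A ^_r B)_ii = min(lambda_i(A), lambda_{i+1-r}(B)), with lambda_j(B) = +inf for j <= 0
    out = a.copy()
    for i in range(len(a)):                    # i is 0-based; the paper's index is i + 1
        j = i + 2 - r                          # paper index into the spectrum of B
        if j >= 1:
            out[i] = min(a[i], b[j - 1])
    return out

w21, w23 = wedge(dbar, l21, n), wedge(dbar, l23, m)
d21, d21_2 = dbar.sum() - w21.sum(), (dbar**2).sum() - (w21**2).sum()
d23, d23_2 = dbar.sum() - w23.sum(), (dbar**2).sum() - (w23**2).sum()
alpha3 = 0.5 * (d21_2 + d21**2) + d21 * d23 + 0.5 * (d23_2 + d23**2)

# the eigenvalue sum from Lemma F.1, which Theorem 2 lower-bounds by alpha3
eig_sum = (np.linalg.eigvalsh(W1 @ W2 @ W2.T @ W1.T).min()
           + np.linalg.eigvalsh(W1 @ W1.T).min() * np.linalg.eigvalsh(W3.T @ W3).min()
           + np.linalg.eigvalsh(W3.T @ W2.T @ W2 @ W3).min())
print(f"alpha_3 = {alpha3:.4f}   eigenvalue sum = {eig_sum:.4f}   (expect alpha_3 <= sum)")
```

Changing the seed or the dimensions changes the numbers, but the inequality α3 ≤ (eigenvalue sum) should hold for any weights, since that sum is exactly the quantity being minimized in (18).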
However, the bound is a fairly complicated function of all the imbalance matrices and is hard to interpret. Searching for such a general bound is even more challenging for models of arbitrary depth (L ≥ 3). Therefore, our results for deep networks will rely on extra assumptions on the weights that simplify the lower bound to facilitate interpretability. Specifically, we consider the following properties of the weights: Definition 3. A set of weights {Wl}Ll=1 with imbalance matrices {Dl := Wl^T Wl − Wl+1 Wl+1^T}_{l=1}^{L−1} is said to be unimodal with index l∗ if there exists some l∗ ∈ [L] such that Dl ⪰ 0 for l < l∗ and Dl ⪯ 0 for l ≥ l∗. We define its cumulative imbalances {d̃(i)}_{i=1}^{L−1} as d̃(i) = ∑_{l=l∗}^{i} λm(−Dl) for i ≥ l∗, and d̃(i) = ∑_{l=i}^{l∗−1} λn(Dl) for i < l∗. Furthermore, for weights with unimodality index l∗, if additionally Dl = dl Ihl, l = 1, · · · , L − 1, with dl ≥ 0 for l < l∗ and dl ≤ 0 for l ≥ l∗, then those weights are said to have homogeneous imbalance. The unimodality assumption enforces an ordering of the weights w.r.t. the positive semi-definite cone. This is clearer when considering scalar weights {wl}Ll=1, in which case unimodality requires wl^2 to be descending until index l∗ and ascending afterward. Under this unimodality assumption, we show that imbalance contributes to the convergence of the loss via a product of cumulative imbalances. Furthermore, we also show the combined effects of imbalance and weight product when the imbalance matrices are “well-conditioned” (in this case, homogeneous). More formally, we have: Theorem 3. For weights {Wl}Ll=1 with unimodality index l∗, we have λmin(T{Wl}Ll=1) ≥ ∏_{i=1}^{L−1} d̃(i). (20) Furthermore, if the weights have homogeneous imbalance, then λmin(T{Wl}Ll=1) ≥ sqrt( (∏_{i=1}^{L−1} d̃(i))^2 + (Lσmin(W)^{2−2/L})^2 ), W = ∏_{l=1}^{L} Wl. (21) We make the following remarks: Connection to results for three-layer: For three-layer networks, we present an optimal bound λmin(TW1,W2,W3) ≥ (1/2)(∆21^(2) + ∆21^2) + ∆21∆23 + (1/2)(∆23^(2) + ∆23^2), given knowledge of the imbalance. Interestingly, when comparing it with our bound in (20), we have: Claim. When L = 3, for weights {W1,W2,W3} with unimodality index l∗, 1. If l∗ = 1, then (1/2)(∆23^(2) + ∆23^2) = ∏_{i=1}^{L−1} d̃(i) and (1/2)(∆21^(2) + ∆21^2) = ∆21∆23 = 0; 2. If l∗ = 2, then ∆21∆23 = ∏_{i=1}^{L−1} d̃(i) and (1/2)(∆21^(2) + ∆21^2) = (1/2)(∆23^(2) + ∆23^2) = 0; 3. If l∗ = 3, then (1/2)(∆21^(2) + ∆21^2) = ∏_{i=1}^{L−1} d̃(i) and (1/2)(∆23^(2) + ∆23^2) = ∆21∆23 = 0. We refer the reader to Appendix G for the proof. The claim shows that the bound in (20) is optimal for three-layer unimodal weights, as it coincides with the one in Theorem 2. We conjecture that (20) is also optimal for multi-layer unimodal weights and leave the proof for future research. Interestingly, while the bound for three-layer models is complicated, the three terms (1/2)(∆23^(2) + ∆23^2), ∆21∆23, and (1/2)(∆21^(2) + ∆21^2) seem to roughly capture how close the weights are to those with each unimodality index. This hints at a potential generalization of Theorem 2 to the deep case, where the bound should have L terms capturing how close the weights are to those with different unimodality indices (l∗ = 1, · · · , L). Effect of imbalance under unimodality: For simplicity, we assume unimodality index l∗ = L. The bound ∏_{i=1}^{L−1} d̃(i), as a product of cumulative imbalances, generally grows exponentially with the depth L. Prior work (Yun et al., 2020) studies the case Dl ⪰ 0, l = 1, · · · , L − 2, and DL−1 ⪰ λIhL−1, in which case ∏_{i=1}^{L−1} d̃(i) ≥ λ^{L−1}. A small numerical check of (20) and (21) in the scalar-weight case is given below. 
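As a quick sanity check (not from the paper), the sketch below evaluates the bounds (20) and (21) for a deep network with scalar weights, where the operator T reduces to the scalar ∑_l ∏_{i≠l} wi^2 and, as noted later in this section, scalar weights are always unimodal (after reordering) and have homogeneous imbalance. The depth, the weight range, and the seed are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)
L = 5
w = np.sort(rng.uniform(0.3, 2.0, size=L))[::-1]   # scalar weights ordered so w_1^2 >= ... >= w_L^2
                                                    # (unimodal with l* = L; imbalance is homogeneous)
tau = sum(np.prod(np.delete(w, l)**2) for l in range(L))   # exact lambda_min(T) for scalar weights
d_tilde = w[:-1]**2 - w[-1]**2                      # cumulative imbalances: d~(i) = w_i^2 - w_L^2
prod_d = np.prod(d_tilde)
sigma = np.abs(np.prod(w))                          # sigma_min of the 1x1 product W
bound_20 = prod_d                                   # unimodal bound (20)
bound_21 = np.sqrt(prod_d**2 + (L * sigma**(2 - 2 / L))**2)   # homogeneous-imbalance bound (21)
print(f"lambda_min(T) = {tau:.4f}   bound (20) = {bound_20:.4f}   bound (21) = {bound_21:.4f}")
```

For L = 2 the bound (21) is tight for scalar weights (it evaluates exactly to w1^2 + w2^2); for larger L one should observe λmin(T) ≥ (21) ≥ (20), with the gap depending on how spread out the weights are.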
Our bound ∏L−1 i=1 d̃(i) suggests the dependence on L could be super-exponential: When λn(Dl) ≥ ϵ > 0, for l = 1, · · · , L − 1, we have ∏L−1 i=1 d̃(i) =∏L−1 i=1 ∑L−1 l=i λn(Dl) ≥ ∏L−1 l=1 lϵ = ϵ L−1(L − 1)!, which grows faster in L than λL−1 for any λ. Therefore, for gradient flow dynamics, the depth L could greatly improve convergence in the presence of weight imbalance. One should note, however, that such analysis can not be directly translated into fast convergence guarantees of gradient descent algorithm as one requires careful tuning of the step size for the discrete weight updates to follow the trajectory of the continuous dynamics (Elkabetz & Cohen, 2021). With our bound in Theorem 3, we show convergence of deep linear models under various initialization: Convergence under unimodality: The following immediately comes from Theorem 3: Corollary 2. If the initialization weights {Wl(0)}Ll=1 are unimodal, then the continuous dynamics in (3) satisfy L(t)− L∗ ≤ exp (−αLγt) (L(0)− L∗),∀t ≥ 0, (22) where 1. If f satisfies A1 only, then αL = ∏L−1 i=1 d̃(i) ; 2. If f satisfies both A1, A2, and the weights additionally have homogeneous imbalance, then αL = √√√√(L−1∏ i=1 d̃(i) )2 + ( L ( [ σmin (W ∗)− √ K/µ∥W (0)−W ∗∥F ] + )2−2/L)2 , with W (0) = ∏L l=1 Wl(0) and W ∗ equal to the unique optimizer of f . Spectral initialization under l2 loss: Suppose f = 12∥Y −W∥ 2 F and W = ∏L l=1 Wl. We write the SVD of Y ∈ Rn×m as Y = P [ ΣY 0 0 0 ] [ Q 0 ] := P Σ̃Y Q̃, where P ∈ O(n), Q ∈ O(m) . Consider the spectral initialization W1(0) = RΣ1V T1 , Wl(0) = Vl−1ΣlV T l , l = 2, · · · , L − 1, WL(0) = VL−1ΣLQ̃, where Σl, l = 1, · · · , L are diagonal matrices of our choice and Vl ∈ Rn×hl , l = 1, · · · , L− 1 with V Tl Vl = Ihl . It can be shown that (See Appendix D.1 for details) W1(t) = RΣ1(t)V T 1 , Wl(t) = Vl−1Σl(t)V T l , l = 2, · · · , L− 1, WL(t) = VL−1ΣL(t)Q̃. (23) Moreover, only the first m diagonal entries of Σl are changing. Let σi,l, σi,y denote the i-th diagonal entry of Σl, and Σ̃Y respectively, then the dynamics of {σi,l}Ll=1 follow the gradient flow on Li({σi,l}Ll=1) = 12 ∣∣∣σi,y −∏Ll=1 σi,l∣∣∣2 for i = 1, · · · ,m, which is exactly a multi-layer model with scalar weights: f(w) = |σi,y − w|2/2, w = ∏L l=1 wl. Therefore, spectral initialization under l2 loss can be decomposed into m deep linear models with scalar weights, whose convergence is shown by Corollary 2. Note that networks with scalar weights are always unimodal, because the gradient flow dynamics remain the same under any reordering of the weights, and always have homogeneous imbalance, because the imbalances are scalars. The aforementioned analysis also applies to the linear regression loss f = 12∥Y −XW∥ 2 F , provided that {X,Y } is co-diagonalizable (Gidel et al., 2019), we refer the readers to Appendix D.1 for details. Diagonal linear networks: Consider f a function on Rn satisfying A1 and L = f(w1 ⊙ · · · ⊙ wL), where wl ∈ Rn and ⊙ denote the Hadamard (entrywise) product. The gradient flow on L can not be decomposed into several scalar dynamics as in the previous example, but we can show that (See Appendix D.2 for details) L̇ = −∥∇L∥2F ≤ −(min1≤i≤n λmin(T{wl,i}Ll=1))γ(L − L ∗) , where wl,i is the i-th entry of wl. Then Theorem 3 gives lower bound on each λmin(T{wl,i}Ll=1). Again, here the scalar weights {wl,i}Ll always have homogeneous imbalance. Comparison with prior work: Regarding unimodality, Yun et al. 
(2020) studies the initialization scheme Dl ⪰ 0, l = 1, · · · , L − 2 and DL−1 ⪰ λIhL−1, which is a special case (l∗ = L) of ours. The homogeneous imbalance assumption was first introduced in Tarmoun et al. (2021) for two-layer networks, and we generalize it to the deep case. We compare, in Table 1, our bound to the existing work (Arora et al., 2018a; Yun et al., 2020) on the convergence of deep linear networks outside the kernel regime. Note that Yun et al. (2020) only studies a special case of unimodal weights (l∗ = L with d̃(i) ≥ λ > 0, ∀i). For homogeneous imbalance, Yun et al. (2020) studied spectral initialization and diagonal linear networks, whose initializations necessarily have homogeneous imbalance, but the result does not generalize to the case of matrix weights. Our results for homogeneous imbalance also apply to deep networks with matrix weights, and our rate also shows the effect of the product Lσmin(W)^{2−2/L}, thus covering balanced initialization (Arora et al., 2018a) as well. Remark 1. Note that the loss functions used in Gunasekar et al. (2018); Yun et al. (2020) are classification losses, such as the exponential loss, which do not satisfy A1. However, they do satisfy a Polyak-Łojasiewicz-inequality-like condition ∥∇f(W)∥F ≥ γ(f(W) − f∗), ∀W ∈ R^{n×m}, which allows us to show O(1/t) convergence of the loss function. We refer readers to Section 4.3 for details. 4.3 CONVERGENCE RESULTS FOR CLASSIFICATION TASKS As we discussed in Remark 1, the loss functions used in classification tasks generally do not satisfy our assumption A1 on f. Suppose instead we have the following assumption on f. Assumption 2. f satisfies (A1′): ∥∇f(W)∥F ≥ γ(f(W) − f∗), ∀W ∈ R^{n×m}. Then we can show O(1/t) convergence of the loss function, as stated below. Theorem 4. Given an initialization {Wl(0)}Ll=1 such that λmin(T{Wl(t)}Ll=1) ≥ α, ∀t ≥ 0, and f satisfying (A1′), we have L(t) − L∗ ≤ (L(0) − L∗) / ((L(0) − L∗)αγ^2 t + 1). (24) We refer readers to Appendix B for the proof. The lower bound on λmin(T{Wl(t)}Ll=1) can be obtained for different networks from our results in the previous sections. The exponential loss satisfies A1′ (see Appendix B) and is studied in Gunasekar et al. (2017); Yun et al. (2020) for diagonal linear networks. 5 CONCLUSION AND DISCUSSION In this paper, we study the convergence of gradient flow on multi-layer linear models with a loss of the form f(W1W2 · · ·WL), where f satisfies the gradient dominance property. We show that with proper initialization, the loss converges to its global minimum exponentially. Moreover, we derive a lower bound on the convergence rate that depends on two trajectory-specific quantities: the imbalance matrices, which measure the difference between the weights of adjacent layers, and the least singular value of the weight product W = W1W2 · · ·WL. Our analysis applies to various types of multi-layer linear networks, and our assumptions on f are general enough to include loss functions used for both regression and classification tasks. Future directions include extending our results to analyzing gradient descent algorithms as well as to nonlinear networks. Convergence of gradient descent: Exponential convergence of the gradient flow often suggests a linear rate of convergence of gradient descent when the step size is sufficiently small, and Elkabetz & Cohen (2021) formally establishes such a relation. Indeed, Arora et al. (2018a) shows a linear rate of convergence of gradient descent on multi-layer linear networks under balanced initialization (a minimal discretized sketch is given below). 
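The following is a minimal discretized sketch (not part of the paper): gradient descent with a small step size on a three-layer model with f(W) = (1/2)∥Y − W∥_F^2. For a generic random initialization one should see the loss decay roughly geometrically while the imbalance matrices stay approximately constant, consistent with Lemma 2 and the gradient-flow analysis; the dimensions, step size, and iteration count are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(3)
n = h1 = h2 = m = 4
Y = rng.normal(size=(n, m))
Ws = [rng.normal(size=s) / 2 for s in [(n, h1), (h1, h2), (h2, m)]]

def loss(Ws):
    return 0.5 * np.linalg.norm(Y - Ws[0] @ Ws[1] @ Ws[2])**2

def imbalances(Ws):
    return [Ws[l].T @ Ws[l] - Ws[l + 1] @ Ws[l + 1].T for l in range(2)]

D0 = imbalances(Ws)
eta = 1e-3                                   # small step size, mimicking the gradient flow
for t in range(20001):
    G = Ws[0] @ Ws[1] @ Ws[2] - Y            # gradient of f(W) = 0.5 * ||Y - W||_F^2 at the product
    grads = [G @ (Ws[1] @ Ws[2]).T,          # dL/dW1
             Ws[0].T @ G @ Ws[2].T,          # dL/dW2
             (Ws[0] @ Ws[1]).T @ G]          # dL/dW3
    if t % 5000 == 0:
        drift = max(np.linalg.norm(D - D_init) for D, D_init in zip(imbalances(Ws), D0))
        print(f"t={t:6d}  loss={loss(Ws):.3e}  imbalance drift={drift:.2e}")
    Ws = [Wl - eta * g for Wl, g in zip(Ws, grads)]
```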
A natural future direction is to translate the convergence results under imbalanced initialization for gradient flow to the convergence of gradient descent with a small step size. Nonlinear networks: While the crucial ingredient of our analysis, invariance of weight imbalance, no longer holds in the presence of nonlinearities such as ReLU activations, Du et al. (2018) shows the diagonal entries of the imbalance are preserved, and Le & Jegelka (2022) shows a stronger version of such invariance given additional assumptions on the training trajectory. Therefore, the weight imbalance could still be used to understand the training of nonlinear networks. A CONTROLLING PRODUCT WITH MARGIN Most of our results regarding the lower bound on λminT{Wl}Ll=1 are given as a value that depends on 1) the imbalance of the weights; 2) the minimum singular value of the product W = ∏L l=1. The former is time-invariant, thus is determined at initialization. As we discussed in Section 3, we require the notion of margin to lower bound σmin(W (t)) for the entire training trajectory. The following Lemma that will be used in subsequent proofs. Lemma A.1. If f satisfies A2, then the gradient flow dynamics (3) satisfies σmin (W (t)) ≥ σmin (W ∗)− √ K µ ∥W (0)−W ∗∥F ,∀t ≥ 0 where W (t) = ∏L l=1 Wl(t) and W ∗ is the unique minimizer of f . Proof. From Polyak (1987), we know if f is µ-strongly convex, then it has unique minimizer W ∗ and f(W )− f∗ ≥ µ 2 ∥W −W ∗∥2F . Additionally, if f is K-smooth, then f(W )− f∗ ≤ K 2 ∥W −W ∗∥2F . This suggests that for any t ≥ 0, K 2 ∥W (t)−W ∗∥2F ≥ L(t)− L∗ ≥ µ 2 ∥W −W ∗∥2F . Therefore we have the following σmin (W (t)) = σmin (W (t)−W ∗ +W ∗) (Weyl’s inequality (Horn & Johnson, 2012, 7.3.P16)) ≥ σmin(W ∗)− ∥W (t)−W ∗∥2 ≥ σmin(W ∗)− ∥W (t)−W ∗∥F (f is µ-strongly convex) ≥ σmin(W ∗)− √ 2 µ (L(t)− L∗) (L(t) non-decreasing under (3)) ≥ σmin(W ∗)− √ 2 µ (L(0)− L∗) (f is K-smooth) ≥ σmin(W ∗)− √ K µ ∥W (0)−W ∗∥2F = σmin (W ∗)− √ K µ ∥W (0)−W ∗∥F . Lemma A.1 directly suggests σmin(W (t)) ≥ [ σmin (W ∗)− √ K µ ∥W (0)−W ∗∥F ] + := margin , and the margin is positive when the initial product W (0) is sufficiently close to the optimal W ∗. B CONVERGENCE ANALYSIS FOR CLASSIFICATION LOSSES In this section, we consider f that satisfies, instead of A1, the following Assumption 3. f satisfies (A1´) the Łojasiewicz inequality-like condition ∥∇f(W )∥F ≥ γ(f(W )− f∗),∀W ∈ Rn×m . Theorem 4 (Restated). Given initialization {Wl(0)}Ll=1 such that λminT{Wl(t)}Ll=1 ≥ α, ∀t ≥ 0 , and f satisfying (A1´), then L(t)− L∗ ≤ L(0)− L ∗ (L(0)− L∗)αγ2t+ 1 . Proof. When f satisfies (A1´), then (5) becomes L̇ = − 〈 T{Wl}Ll=1∇f(W ),∇f(W ) 〉 F ≤ −λmin ( T{Wl}Ll=1 ) ∥∇f(W )∥2F (A1′) ≤ −λmin ( T{Wl}Ll=1 ) γ2(f(W )− f∗)2 = −λmin ( T{Wl}Ll=1 ) γ2(L − L∗)2 . This shows − 1 (L − L∗)2 d dt (L − L∗) ≥ λmin ( T{Wl}Ll=1 ) γ2 ≥ αγ2 . Take integral ∫ dt on both sides, we have for any t ≥ 0, 1 L − L∗ ∣∣∣∣t 0 ≥ αγ2t , which is L(t)− L∗ ≤ L(0)− L ∗ (L(0)− L∗)αγ2t+ 1 . Following similar argument as in Yun et al. (2020), we can show that exponential loss on linearly separable data satisfies A1´. Claim. Let f(w) = ∑N i=1 exp ( −yi · (xTi w) ) , if there exists z ∈ Sn−1 and γ > 0 such that yi(x T i z) ≥ γ , ∀i = 1, · · · , N , then ∥∇f(w)∥F ≥ γf(w) ,∀w ∈ Rn . Proof. Using the linear separability, we have ∥∇f(w)∥2F = ∥∥∥∥∥ N∑ i=1 exp ( −yi · (xTi w) ) yixi ∥∥∥∥∥ 2 F (Cauchy-Schwarz inequality) ≥ ∣∣∣∣∣ 〈 z, N∑ i=1 exp ( −yi · (xTi w) ) yixi 〉∣∣∣∣∣ 2 ≥ ∣∣∣∣∣ N∑ i=1 exp ( −yi · (xTi w) ) γ ∣∣∣∣∣ 2 = |f(w)γ|2 , as desired. 
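As a quick numerical illustration (not part of the paper) of this claim, the sketch below draws linearly separable data, computes the margin γ of a fixed unit-norm separator z, and checks ∥∇f(w)∥ ≥ γ f(w) at a few random points w; the data distribution and the choice of z are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(2)
n, N = 3, 50
z = np.array([1.0, 0.0, 0.0])                 # a unit-norm direction used as the separator
X = rng.normal(size=(N, n))
y = np.where(X @ z >= 0, 1.0, -1.0)           # labels generated by z, so the data is separable by z
gamma = np.min(y * (X @ z))                   # margin of z on this sample (> 0 almost surely)

for _ in range(5):
    w = rng.normal(size=n) * 3.0
    margins = y * (X @ w)
    f = np.sum(np.exp(-margins))              # exponential loss
    grad = -(np.exp(-margins) * y) @ X        # its gradient w.r.t. w
    assert np.linalg.norm(grad) >= gamma * f - 1e-9 * f
print(f"margin gamma = {gamma:.3f}; ||grad f(w)|| >= gamma * f(w) held at all test points")
```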
Therefore, our convergence results applies to classification tasks with exponential loss. C PROOFS IN SECTION 2 First we prove the expression for L̇ in Lemma 1 Lemma 1 (Restated). Under continuous dynamics in (3), we have L̇ = −∥∇L ( {Wl}Ll=1 ) ∥2F = − 〈 T{Wl}Ll=1∇f(W ),∇f(W ) 〉 F , where W = ∏L l=1 Wi, and T{Wl}Ll=1 is a positive semi-definite linear operator on R n×m with T{Wl}Ll=1E = L∑ l=1 ( l−1∏ i=1 Wi )( l−1∏ i=1 Wi )T E ( L+1∏ i=l+1 Wi )T ( L+1∏ i=l+1 Wi ) ,W0 = In,WL+1 = Im . Proof. The gradient flow dynamics (3) satisfies d dt Wl = − ∂ ∂Wl L ( {Wl}Ll=1 ) = − ( l−1∏ i=1 Wi )T ∇f(W ) ( L+1∏ i=l+1 Wi )T , (C.1) where W = ∏L l=1 Wi and W0 = In,WL+1 = Im. Therefore L̇ = L∑ l=1 〈 ∂ ∂Wl L ( {Wl}Ll=1 ) , d dt Wl 〉 F = − L∑ l=1 ∥∥∥∥ ∂∂WlL ({Wl}Ll=1) ∥∥∥∥2 F = − L∑ l=1 〈( l−1∏ i=1 Wi )T ∇f(W ) ( L+1∏ i=l+1 Wi )T , ( l−1∏ i=1 Wi )T ∇f(W ) ( L+1∏ i=l+1 Wi )T〉 F = − L∑ l=1 〈( l−1∏ i=1 Wi )( l−1∏ i=1 Wi )T ∇f(W ) ( L+1∏ i=l+1 Wi )T ( L+1∏ i=l+1 Wi ) ,∇f(W ) 〉 F = − 〈 L∑ l=1 ( l−1∏ i=1 Wi )( l−1∏ i=1 Wi )T ∇f(W ) ( L+1∏ i=l+1 Wi )T ( L+1∏ i=l+1 Wi ) ,∇f(W ) 〉 F = − 〈 T{Wl}Ll=1∇f(W ),∇f(W ) 〉 F . Next, we prove that the imbalance matrices are time-invariant Lemma 2 (Restated). Under continuous dynamics (3), we have Ḋl(t) = 0,∀t ≥ 0, l = 1, · · · , L−1. Proof. Each imbalance matrix is defined as Dl = W T l Wl −Wl+1WTl+1, l = 1, · · · , L− 1 We only need to check that ddt ( WTl Wl ) and ddt ( Wl+1W T l+1 ) are identical. From the following derivation, for l = 1, · · · , L− 1, d dt ( WTl Wl ) = ẆTl Wl +W T l Ẇl = − ( L+1∏ i=l+1 Wi ) ∇T f(W ) ( l−1∏ i=1 Wi ) Wl −WTl ( l−1∏ i=1 Wi )T ∇f(W ) ( L+1∏ i=l+1 Wi )T = − ( L+1∏ i=l+1 Wi ) ∇T f(W ) ( l∏ i=1 Wi ) − ( l∏ i=1 Wi )T ∇f(W ) ( L+1∏ i=l+1 Wi )T , d dt ( Wl+1W T l+1 ) = Ẇl+1W T l+1 +Wl+1Ẇ T l+1 = − ( l∏ i=1 Wi )T ∇f(W ) ( L+1∏ i=l+2 Wi )T WTl+1 −Wl+1 ( L+1∏ i=l+2 Wi ) ∇T f(W ) ( l∏ i=1 Wi ) = − ( l∏ i=1 Wi )T ∇f(W ) ( L+1∏ i=l+1 Wi )T − ( L+1∏ i=l+1 Wi ) ∇T f(W ) ( l∏ i=1 Wi ) we know ddt ( WTl Wl ) = ddt ( Wl+1W T l+1 ) , therefore Ḋl(t) = 0, l = 1, · · · , L− 1 D LINEAR MODELS RELATED TO SCALAR DYNAMICS D.1 SPECTRAL INITIALIZATION UNDER l2 LOSS The spectral initialization Saxe et al. (2014); Gidel et al. (2019); Tarmoun et al. (2021) considers the following: Suppose f = 12∥Y −XW∥ 2 F and we have overparametrized model W = ∏L l=1 Wl. Additionally, we assume Y ∈ RN×m, X ∈ RN×n (n ≥ m) are co-diagonalizable, i.e. there exist P ∈ RN×n with PTP = In and Q ∈ O(m), R ∈ O(n) such that we can write the SVDs of Y,X as Y = P [ ΣY 0 0 0 ] [ Q 0 ] := P Σ̃Y Q̃ and X = PΣXRT . Remark 2. In Section 4, we discussed the case f = 12∥Y −W∥ 2 F , which is essentially considering the aforementioned setting with N = n and X = In. Given any set of weights {Wl}Ll=1 such that W1 = RΣ1V T 1 , Wl = Vl−1ΣlV T l , l = 2, · · · , L− 1, WL = VL−1ΣLQ̃ , where Σl, l = 1, · · · , L are diagonal matrices and Vl ∈ Rn×hl , l = 1, · · · , L− 1 with V Tl Vl = Ihl . The gradient flow dynamics requires Ẇ1 = − ∂L ∂W1 = −XT (Y −XW )WTL WTL−1 · · ·WT2 = −RΣXPT · (P Σ̃Y Q̃− PΣXRT ·R L∏ l=1 ΣLQ̃) · Q̃TΣLVL−1 · VL−1ΣL−1V TL−2 · · ·V2Σ2V T1 = −R ( ΣX ( ΣY − ΣX L∏ l=1 Σl ) Q̃Q̃T L∏ l=2 Σl ) V T1 = −R ( ΣX ( ΣY − ΣX L∏ l=1 Σl )[ Im 0 0 0 ] L∏ l=2 Σl ) V T1 , which shows that the singular space R, V1 for W1 do not change under the gradient flow, and the singular values σi,1of W1 satisfies σ̇i,1 = ( σi,y − σi,x L∏ l=1 σi,l ) σi,x L∏ l=2 σi,l , i = 1, · · · ,m , and σ̇i,1 = 0, i = m+ 1, · · · , n. 
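A small numerical sketch (not part of the paper) of this structure, for the case X = In of Remark 2: after running small-step gradient descent from a spectral initialization, the matrices P^T W1 V1, V1^T W2 V2, and V2^T W3 Q should remain numerically diagonal, confirming that the singular spaces are preserved and only the singular values move; the specific singular values, orthogonal factors, and step size are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(4)
n = m = h1 = h2 = 3
P, _ = np.linalg.qr(rng.normal(size=(n, n)))          # left singular vectors of Y
Q, _ = np.linalg.qr(rng.normal(size=(m, m)))          # right singular vectors of Y
Y = P @ np.diag([3.0, 2.0, 1.0]) @ Q.T                # target for f(W) = 0.5 * ||Y - W||_F^2
V1, _ = np.linalg.qr(rng.normal(size=(h1, h1)))
V2, _ = np.linalg.qr(rng.normal(size=(h2, h2)))
S = [np.diag(rng.uniform(0.5, 1.5, size=n)) for _ in range(3)]
Ws = [P @ S[0] @ V1.T, V1 @ S[1] @ V2.T, V2 @ S[2] @ Q.T]   # spectral initialization (X = I_n)

eta = 1e-3
for _ in range(2000):
    G = Ws[0] @ Ws[1] @ Ws[2] - Y
    grads = [G @ (Ws[1] @ Ws[2]).T, Ws[0].T @ G @ Ws[2].T, (Ws[0] @ Ws[1]).T @ G]
    Ws = [Wl - eta * g for Wl, g in zip(Ws, grads)]

for name, M in [("P^T W1 V1", P.T @ Ws[0] @ V1),
                ("V1^T W2 V2", V1.T @ Ws[1] @ V2),
                ("V2^T W3 Q", V2.T @ Ws[2] @ Q)]:
    off = M - np.diag(np.diag(M))
    print(f"{name}: off-diagonal norm = {np.linalg.norm(off):.1e}")   # expected ~ machine precision
```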
Similarly, we can show that Ẇl = Vl−1 ΣX (ΣY − ΣX L∏ i=1 Σi )[ Im 0 0 0 ]∏ i ̸=l Σi V Tl , l = 2, · · · , L− 1 , ẆL = VL−1 ΣX (ΣY − ΣX L∏ i=1 Σi )[ Im 0 0 0 ]∏ i̸=L Σi Q̃ . Overall, this suggests that the singular space of {Wl}Ll=1 do not change under the gradient flow, and their singular values satisfies, for i = 1, · · · ,m, σ̇i,l = ( σi,y − σi,x L∏ k=1 σi,k ) σi,x L∏ k ̸=l σi,k , l = 1, · · · , L . Each dynamic equation is equivalent to the one from gradient flow on Li({σi,l}Ll=1) = 1 2 ∣∣∣σi,y − σi,x∏Ll=1 σi,l∣∣∣2 . Therefore, under spectral initialization, the dynamics of the weights are decoupled into at most m dynamics discussed in Section 4.2. D.2 DIAGONAL LINEAR NETWORKS The loss function of diagonal linear networks Gunasekar et al. (2017); Yun et al. (2020) is of the form f(w1 ⊙ · · · ⊙ wL), we write L({wl}Ll=1) = f(w1 ⊙ · · · ⊙ wL) = f(w(1), · · · , w(n)) = f ( L∏ l=1 wl,1 , · · · , L∏ l=1 wl,n ) , i.e. f takes n variables w(1), · · · , w(n) and each variable w(i) is overparametrized into ∏L l=1 wl,i. Then we can show that L̇ = −∥∇{wl}Ll=1L∥ 2 F = n∑ i=1 L∑ l=1 ∣∣∣∣ ∂L∂wl,i ∣∣∣∣2 = n∑ i=1 L∑ l=1 ∣∣∣∣ ∂f∂w(i) ∣∣∣∣2 ∣∣∣∣∂w(i)∂wl,i ∣∣∣∣2 = n∑ i=1 ∣∣∣∣ ∂f∂w(i) ∣∣∣∣2 L∑ l=1 ∣∣∣∣∂w(i)∂wl,i ∣∣∣∣2 = n∑ i=1 ∣∣∣∣ ∂f∂w(i) ∣∣∣∣2 τ{wl,i}Ll=1 ≤ − ( min 1≤i≤n τ{wl,i}Ll=1 ) n∑ i=1 ∣∣∣∣ ∂f∂w(i) ∣∣∣∣2 (f satisfies A1) ≤ − ( min 1≤i≤n τ{wl,i}Ll=1 ) γ(f − f∗) = − ( min 1≤i≤n τ{wl,i}Ll=1 ) γ(L − L∗) . Moreover, the imbalances {d(i)l := w2l,i − w2l+1,i} L−1 l=1 are time-invariant for each i = 1, · · · , n by Lemma 2. Therefore, we can lower bound each τ{wl,i}Ll=1 using the imbalance {d (i) l } L−1 l=1 as in Proposition 3, from which one obtain the exponential convergence of L. E PROOF FOR TWO-LAYER MODEL Using Lemma 3, we can prove Theorem 1 Theorem 1 (Restated). Let D be the imbalance matrix for L = 2. The continuous dynamics in (3) satisfy L(t)− L∗ ≤ exp (−α2γt) (L(0)− L∗),∀t ≥ 0 , (E.2) where 1. If f satisfies only A1, then α2 = ∆ ; 2. If f satisfies both A1 and A2, then α2 = −∆+ + √ (∆+ +∆)2 + 4 ( [ σn (W ∗)− √ K/µ∥W (0)−W ∗∥F ] + )2 −∆− + √ (∆− +∆)2 + 4 ( [ σm (W ∗)− √ K/µ∥W (0)−W ∗∥F ] + )2 , (E.3) with W (0) = ∏L l=1 Wl(0) and W ∗ equal to the unique optimizer of f . Proof. As shown in (5) in Section 2. We have d dt (L(t)− L∗) ≤ −λminT{W1(t),W2(t)}γ(L(t)− L ∗) . Consider any {W1(t),W2(t)} on the trajectory, we have, by Lemma 3, λminT{W1(t),W2(t)} Lemma 3 ≥ 1 2 ( −∆+ + √ (∆+ +∆)2 + 4σ2n (W (t)) −∆− + √ (∆− +∆)2 + 4σ2m (W (t)) ) ≥ 1 2 ( −∆+ + √ (∆+ +∆)2 −∆− + √ (∆− +∆)2 ) = ∆ := α2 . When f also satisfies A2: we need to prove σn (W (t)) ≥ [ σn (W ∗)− √ K/µ∥W (0)−W ∗∥F ] + , (E.4) σm (W (t)) ≥ [ σm (W ∗)− √ K/µ∥W (0)−W ∗∥F ] + . (E.5) When n = m, both inequalities are equivalent to σmin(W (t)) ≥ [ σmin(W ∗)− √ K/µ∥W (0)−W ∗∥F ] + , which is true by Lemma A.1. When n ̸= m, one of the two inequalities become trivial. For example, if n > m, then (E.4) is trivially 0 ≥ 0, and (E.5) is equivalent to σmin(W (t)) ≥ [ σmin(W ∗)− √ K/µ∥W (0)−W ∗∥F ] + , which is true by Lemma A.1. Overall, we have λminT{W1(t),W2(t)} Lemma 3 ≥ 1 2 ( −∆+ + √ (∆+ +∆)2 + 4σ2n (W (t)) −∆− + √ (∆− +∆)2 + 4σ2m (W (t)) ) ≥ 1 2 −∆+ + √ (∆+ +∆)2 + 4 ([ σn (W ∗)− √ K/µ∥W (0)−W ∗∥F ] + )2 −∆− + √ (∆− +∆)2 + 4 ([ σm (W ∗)− √ K/µ∥W (0)−W ∗∥F ] + )2 := α2 . Either case, we have ddt (L(t)− L ∗) ≤ −α2γ(L(t)− L∗), and by Grönwall’s inequality, we have L(t)− L∗ ≤ exp(−α2γt)(L(0)− L∗) . F PROOFS FOR THREE-LAYER MODEL In Section F.1, we discuss the proof idea for Theorem 2, then present the proof afterwards. 
In Section G, we show a simplified bound when the weights can be ordered w.r.t. positive-semidefiniteness. F.1 PROOF IDEA We first discuss the proof idea behind Theorem 2, then provide the complete proof. Consider the case when n = m = 1, we use the following notations for the weights {wT1 ,W2, w3} ∈ R1×h1 × Rh1×h2 × Rh2×1. The quantity we need to lower bound is λminT{wT1 ,W2,w3} = w T 1 W2W T 2 w1 + w T 1 w1 · wT3 w3 + wT3 WT2 W2w3 = ∥WT2 w1∥2 + ∥w1∥2∥w3∥2 + ∥W2w3∥2 , where our linear operator T{wT1 ,W2,w3} reduces to a scalar. The remaining thing to do is to find min wT1 ,W2,w3 ∥WT2 w1∥2 + ∥w1∥2∥w3∥2 + ∥W2w3∥2 (F.6) s.t. W2W T 2 − w1wT1 = D21 WT2 W2 − w3wT3 = D23 i.e., we try to find the best lower bound on λminT{wT1 ,W2,w3} given the fact that the weights have to satisfies the imbalance constraints from D21, D23, and λminT{wT1 ,W2,w3} is related to the norm of some weights ∥w1∥, ∥w3∥ and the “alignment” between weights ∥WT2 w1∥, ∥W2w3∥. The general idea of the proof is to lower bound each term ∥WT2 w1∥2, ∥w1∥2, ∥w3∥2, ∥W2w3∥2 individually given the imbalance constraints, then show the existence of some {wT1 ,W2, w3} that attains the lower bound simultaneously. The following discussion is most for lower bounding ∥w1∥, ∥WT2 w1∥ but the same argument holds for lower bounding other quantities. Understanding what can be chosen to be the spectrum of W2WT2 (W T 2 W2) is the key to derive an lower bound, and the imbalance constraints implicitly limit such choices. To see this, notice that W2WT2 − w1wT1 = D21 suggests an eigenvalue interlacing relation (Horn & Johnson, 2012, Corollary 4.39) between W2WT2 and D21, i.e. λh1(D21) ≤ λh1(W2WT2 ) ≤ λh1−1(D21) ≤ · · · ≤ λ2(W2WT2 ) ≤ λ1(D21) ≤ λ1(W2WT2 ) . Therefore, any choice of {λi(W2WT2 )} h1 i=1 must satisfy the interlacing relation with {λi(D21)} h1 i=1. Similarly, {λi(WT2 W2)} h2 i=1 must satisfy the interlacing relation with {λi(D23)} h2 i=1. Moreover, {λi(W2WT2 )} h1 i=1 and {λi(WT2 W2)} h2 i=1 agree on non-zero eigenvalues. In short, an appropriate choice of the spectrum of W2WT2 (W T 2 W2) needs to respect the interlacing relation with the eigenvalues of D21 and D23. The following matrix is defined D̄h1 := diag{max{λi(D21), λi(D23), 0}} h1 i=1 to be the “minimum” choice of the spectrum of W2WT2 (W T 2 W2) in the sense that any valid choice of {λi(W2WT2 )} h1 i=1 must satisfies λi(W2W T 2 ) ≥ λi(D̄h1) ≥ λi(D21) , i = 1, · · · , h1 . That is, the spectrum of D̄h1 “lies between” the one of W2W T 2 and of D21. Now we check the imbalance constraint again W2WT2 − w1wT1 = D21, it shows that: using a rank-one update w1wT1 , one obtain the spectrum of D21 starting from the spectrum of W2WT2 , and more importantly, we require the norm ∥w1∥2 to be (taking the trace on the imbalance equation) tr(W2W T 2 )− ∥w1∥2 = tr(D21) ⇒ ∥w1∥2 = tr(W2WT2 )− tr(D21) . Now since D̄h1 “lies inbetween”, we have ∥w1∥2 = tr(W2WT2 )− tr(D21) = (changes from λi(W2WT2 ) to λi(D21)) = (changes from λi(W2WT2 ) to λi(D̄h1)) + (changes from λi(D̄h1) to λi(D21)) ≥ (changes from λi(D̄h1) to λi(D21)) = tr(D̄h1)− tr(D21) , which is a lower bound on ∥w1∥2. It is exactly the ∆21 in Theorem 2 (It takes more complicated form when n > 1). A lower bound on ∥WT2 w1∥2 requires carefully exam the changes from the spectrum of D̄h1 to the one of D21. If λh1(D21) < 0, then “changes from λi(D̄) to λi(D21)” has two parts 1. (changes from λi(D̄) to [λi(D21)]+) through the part where w1 is “aligned" with WT2 , 2. (changes from 0 to λh1(D21)) through the part where w1 is “orthogonal" to W T 2 . 
Only the former contributes to ∥WT2 w1∥2 hence we need the expression ∆ (2) 21 +∆ 2 21, which excludes the latter part. Using similar argument we can lower bound ∥w3∥2, ∥W2w3∥2. Lastly, the existence of {wT1 ,W2, w3} that attains the lower bound is from the fact that D̄h1 (D̄h2 ) is a valid choice for the spectrum of W2WT2 (W T 2 W2). The complete proof of the Theorem 2 follows the same idea but with a generalized notion of eigenvalue interlacing, and some related novel eigenvalue bounds. F.2 PROOF OF THEOREM 2 Theorem 2 is the direct consequence of the following two results. Lemma F.1. Given any set of weights {W1,W2,W3} ∈ Rn×h1 × Rh1×h2 × Rh2×m, we have λminT{W1,W2,W3} ≥ λn(W1W2W T 2 W T 1 ) + λn(W1W T 1 )λm(W T 3 W3) + λm(W T 3 W T 2 W2W3) . (Note that λminT{W1,W2,W3} does not have a closed-form expression. One can only work with its lower bound λn(W1W2WT2 W T 1 ) + λn(W1W T 1 )λm(W T 3 W3) + λm(W T 3 W T 2 W2W3).) Theorem F.2. Given imbalance matrices pair (D21, D23) ∈ Rh1×h1 × Rh2×h2 , then the optimal value of min W1,W2,W3 2 ( λn(W1W2W T 2 W T 1 ) + λn(W1W T 1 )λm(W T 3 W3) + λm(W T 3 W T 2 W2W3) ) s.t. W2W T 2 −WT1 W1 = D21 WT2 W2 −W3WT3 = D23 is ∆∗(D21, D23) = ∆ (2) 21 +∆ 2 21 + 2∆21∆23 +∆ (2) 23 +∆ 2 23 . Combining those two results gets λminT{W1,W2,W3} ≥ ∆∗(D21, D23)/2, as stated in Theorem 2. The Lemma F.1 is intuitive and easy to prove: Proof of Lemma F.1. Notice that T{W1,W2,W3} is the summation of three positive semi-definite linear operators on Rn×m, i.e. T{W1,W2,W3} = T12 + T13 + T23 , where T12E = W1W2WT2 WT1 E, T13E = W1WT1 EWT3 W3, T23E = EWT3 WT2 W2W3 , and λminT12 = λn(W1W2WT2 WT1 ), λminT13 = λn(W1WT1 )λm(WT3 W3), λminT23 = λm(W T 3 W T 2 W2W3). Therefore, let Emin with ∥Emin∥F = 1 be the eigenmatrix associated with λminT{W1,W2,W3}, we have λminT{W1,W2,W3} = 〈 T{W1,W2,W3}, Emin 〉 F = ⟨T12, Emin⟩F + ⟨T13, Emin⟩F + ⟨T23, Emin⟩F ≥ λminT12 + λminT13 + λminT23 . The rest of this section is dedicated to prove Theorem F.2 We will first state a few Lemmas that will be used in the proof, then show the proof for Theorem F.2, and present the long proofs for the auxiliary Lemmas in the end. F.3 AUXILIARY LEMMAS The main ingredient used in proving Theorem F.2 is the notion of r-interlacing relation between the spectrum of two matrices, which is a natural generalization of the interlacing relation as seen in classical Cauchy Interlacing Theorem (Horn & Johnson, 2012, Theorem 4.3.17). Definition 4. Given real symmetric matrices A,B of order n, write A ⪰r B, if λi+r(A) ≤ λi(B) ≤ λi(A) ,∀i where λj(·) = +∞, j ≤ 0 and λj(·) = −∞
1. What is the main contribution of the paper regarding the convergence of gradient flow for deep networks?
2. What are the strengths and weaknesses of the proposed approach, particularly in its application to three-layer networks and deeper networks?
3. Do you have any concerns or questions about the lower bound in Theorem 2 and its implications for Corollary 1?
4. How does the reviewer assess the novelty and significance of the condition defined in Definition 3, especially in comparison with prior works such as [2], [3], [4], and [5]?
5. What are some minor issues mentioned by the reviewer regarding the paper's organization and clarity?
Summary Of The Paper This paper extends the interesting results of [1] on the convergence of gradient flow for two-layer linear networks to the case of deep networks, and to a wider variety of loss functions, obtaining bounds on the rate of convergence in terms of notions of the ``imbalance'' of the initialization of the network. They also extend results from [2] on deep networks to a wider variety of initializations, again exposing a wider variety of effects. Their bounds for three-layer networks are quite general, and they provide bounds for deeper networks whose initializations satisfy a condition like one used in [2], but more general. [1] Hancheng Min, Salma Tarmoun, René Vidal, and Enrique Mallada. Convergence and implicit bias of gradient flow on overparametrized linear networks. arXiv preprint arXiv:2105.06351, 2022. [2] Yun, Chulhee, Shankar Krishnan, and Hossein Mobahi. "A unifying view on implicit bias in training linear neural networks." International Conference on Learning Representations. 2020. Strengths And Weaknesses The way that the PL condition is used to formulate the results is unfamiliar to me, and seems nice. Where the authors write ``cannot explain well the training efficiency in practice'', I don't find this completely convincing, because people don't use extremely small initializations in practice. Of course it is interesting to determine how the convergence time depends on the size of the initialization. It is not clear to me how good the lower bound in Theorem 2 is. It is consistent with my current understanding, such as it is, that the bound of Theorem 2 is often zero, which, when true, makes Corollary 1 vacuous. It would be helpful to provide some examples of where Theorem 2 provides a good bound. I don't think that some of the claims made in the text after Theorem 2 are justified by the result. For example, since Theorem 2 only has a lower bound, I don't see that the claim that they fully characterize the effect of imbalance on the convergence of three-layer networks'' is justified. Also, to show that such a partial ordering is not necessary'', you would need to provide an example of where the RHS of (17) was positive without such a partial ordering. Similarly, I don't see that claims made in the intro about characterization are justified. The condition defined in Definition 3 seems like a strong assumption to me. For example, it seems like it is very unlikely to be satisfied, or even nearly satisfied, by a random initialization. I think that more justification is needed that this is an interesting condition to study. Also, while it is more general than the condition used in [2], it seems to make available a similar set of tools. It also reads to me as being crafted to be able to apply Weyl's inequality, rather than to capture a useful product of natural, and random, initializations. Roughly speaking, it looks to me that unimodality is ``looking under the lamppost''. Some more comparison with prior work would be helpful. For example, I believe that the results of this paper are incomparable in strength with the results in [3]-[5], which touch on an overlapping set of issues. I think that the claim that `` the convergence analysis for imbalanced networks not in the kernel regime has only been studied for specific initializations'' is not correct, though I do feel that, despite this, the authors' comparison with [2] seems fair overall. Here are a couple of smaller points. 
The unimodality index is not defined until Definition 3, but is used earlier in the proof of Theorem 2 -- the authors should move it earlier. Before (13), the authors indicate that they are going to define D 1 and D 2 , but then they define D 21 and D 23 . I will carefully read and consider the authors' reply. [1] Hancheng Min, Salma Tarmoun, René Vidal, and Enrique Mallada. Convergence and implicit bias of gradient flow on overparametrized linear networks. arXiv preprint arXiv:2105.06351, 2022. [2] Yun, Chulhee, Shankar Krishnan, and Hossein Mobahi. "A unifying view on implicit bias in training linear neural networks." International Conference on Learning Representations. 2020. [3] Jin, Chi, et al. "How to escape saddle points efficiently." International Conference on Machine Learning. PMLR, 2017. [4] Hu, Wei, Lechao Xiao, and Jeffrey Pennington. "Provable Benefit of Orthogonal Initialization in Optimizing Deep Linear Networks." International Conference on Learning Representations. 2019. [5] Zou, Difan, Philip M. Long, and Quanquan Gu. "On the Global Convergence of Training Deep Linear ResNets." International Conference on Learning Representations. 2019. Clarity, Quality, Novelty And Reproducibility The mathematical writing is clear. There seem to be a number of substantial new ideas in the paper.
When L = 3, given initialization with imbalance matrices (D21, D23) and f satisfying A1, the continuous dynamics in (3) satisfy L(t) − L∗ ≤ exp(−α3γt)(L(0) − L∗), ∀t ≥ 0, (19) where α3 = (1/2)(∆(2)21 + ∆221) + ∆21∆23 + (1/2)(∆(2)23 + ∆223). We make the following remarks regarding the contribution. Optimal bound via imbalance: First of all, as shown in the proof sketch, our bound should be considered as the best lower bound on λmin(T{W1(t),W2(t),W3(t)}) one can obtain given knowledge of the imbalance matrices D21 and D23 only. More importantly, this lower bound works for ANY initialization and has the same role as ∆ does in two-layer linear networks, i.e., (17) quantifies the general effect of imbalance on convergence. Finding an improved bound that takes the effect of the product σmin(W) into account is an interesting future research direction. Implication on convergence: Corollary 1 shows exponential convergence of the loss L(t) if α3 > 0. While it is challenging to characterize all initializations such that α3 > 0, the case n = m = 1 is simpler: In this case, D̄h1 ∧1 D21 = D21 and D̄h2 ∧1 D23 = D23. Then we have ∆21 = tr(D̄h1) − tr(D21) = ∑h1−1 i=1 (λi(D̄h1) − λi(D21)) + λh1(D̄h1) − λh1(D21) ≥ −λh1(D21), and similarly we have ∆23 ≥ −λh2(D23). Therefore, α3 ≥ ∆21∆23 ≥ λh1(D21)λh2(D23) > 0 when both D21 and D23 have negative eigenvalues, which is easy to satisfy as both D21 and D23 are given by the difference between two positive semi-definite matrices. This observation can be generalized to show that α3 > 0 when D21 has at least n negative eigenvalues and D23 has at least m negative eigenvalues. Moreover, we show that α3 > 0 under certain definiteness assumptions on D21 and D23; please refer to the remark after Theorem 3 in Section 4.2. A better characterization of the initializations that have α3 > 0 is an interesting future research topic. Technical contribution: The way we find the lower bound in (17) is by studying the generalized eigenvalue interlacing relation imposed by the imbalance constraints. Specifically, W2WT2 − WT1 W1 = D21 suggests that λi+n(W2WT2) ≤ λi(D21) ≤ λi(W2WT2), ∀i, because W2WT2 − D21 is a matrix of rank at most n. We derive, from such an interlacing relation, novel eigenvalue bounds (see Lemma F.6) on λn(WT1 W1) and λn(W1W2WT2 WT1) that depend on the eigenvalues of both W2WT2 and D21. The eigenvalues of W2WT2 can in turn be controlled by the fact that W2 must satisfy both imbalance equations in (13). Since imbalance equations like those in (13) appear in deep networks and certain nonlinear networks (Du et al., 2018; Le & Jegelka, 2022), we believe our mathematical results are potentially useful for understanding those networks. Comparison with prior work: The convergence of multi-layer linear networks under balanced initialization (Dl = 0, ∀l) has been studied in Arora et al. (2018a;b), and our result is complementary as we study the effect of non-zero imbalance on the convergence of three-layer networks. Some settings with imbalanced weights have been studied: Yun et al. (2020) studies a special initialization scheme (Dl ⪰ 0, l = 1, · · · , L − 2, and DL−1 ⪰ λIhL−1) that forces a partial ordering of the weights, and Wu et al. (2019) uses a similar initialization to study linear residual networks. Our bound works for such initializations and also shows that such a partial ordering is not necessary for convergence. 4.2 DEEP LINEAR MODELS The lower bound we derived for three-layer networks applies to any initialization.
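As a concrete illustration of Definition 2 and the rate α3 of Corollary 1, the following minimal NumPy sketch evaluates the bound from a given pair of imbalance matrices. It assumes h1 = h2 (so the eigenvalue lists of D21 and D23 can be compared entrywise); the function and variable names are our own and not part of the paper.

import numpy as np

def eig_desc(A):
    # eigenvalues of a symmetric matrix in decreasing order
    return np.sort(np.linalg.eigvalsh(A))[::-1]

def wedge(lam_a, lam_b, r):
    # diagonal of A wedge_r B from Definition 1, given eigenvalues in decreasing order;
    # lambda_j(B) is treated as +inf for j <= 0
    out = np.empty_like(lam_a)
    for i in range(1, len(lam_a) + 1):
        j = i + 1 - r
        bj = np.inf if j <= 0 else lam_b[j - 1]
        out[i - 1] = min(lam_a[i - 1], bj)
    return out

def alpha3(D21, D23, n, m):
    # rate lower bound of Corollary 1 (assumes D21 and D23 have the same size h1 = h2)
    l21, l23 = eig_desc(D21), eig_desc(D23)
    dbar = np.maximum(np.maximum(l21, l23), 0.0)            # eigenvalues of D_bar
    w21, w23 = wedge(dbar, l21, n), wedge(dbar, l23, m)
    d21, d21_sq = dbar.sum() - w21.sum(), (dbar**2).sum() - (w21**2).sum()
    d23, d23_sq = dbar.sum() - w23.sum(), (dbar**2).sum() - (w23**2).sum()
    return 0.5 * (d21_sq + d21**2) + d21 * d23 + 0.5 * (d23_sq + d23**2)

# scalar (n = m = 1) example with strictly negative imbalances, where alpha3 > 0
print(alpha3(np.array([[-0.5]]), np.array([[-0.3]]), n=1, m=1))
# prints 0.15, i.e. lambda(D21) * lambda(D23), matching the n = m = 1 discussion above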
However, the bound is a fairly complicated function of all the imbalance matrices that is hard to interpret. Searching for such a general bound is even more challenging for models with arbitrary depth (L ≥ 3). Therefore, our results for deep networks will rely on extra assumptions on the weights that simplify the lower bound to facilitate interpretability. Specifically, we consider the following properties of the weights: Definition 3. A set of weights {Wl}Ll=1 with imbalance matrices {Dl := WTl Wl − Wl+1WTl+1}L−1 l=1 is said to be unimodal with index l∗ if there exists some l∗ ∈ [L] such that Dl ⪰ 0 for l < l∗ and Dl ⪯ 0 for l ≥ l∗. We define its cumulative imbalances {d̃(i)}L−1 i=1 as d̃(i) = ∑i l=l∗ λm(−Dl) for i ≥ l∗, and d̃(i) = ∑l∗−1 l=i λn(Dl) for i < l∗. Furthermore, for weights with unimodality index l∗, if additionally Dl = dlIhl, l = 1, · · · , L − 1, with dl ≥ 0 for l < l∗ and dl ≤ 0 for l ≥ l∗, then those weights are said to have homogeneous imbalance. The unimodality assumption enforces an ordering of the weights w.r.t. the positive semi-definite cone. This is clearer when considering scalar weights {wl}Ll=1, in which case unimodality requires w2l to be descending until index l∗ and ascending afterward. Under this unimodality assumption, we show that imbalance contributes to the convergence of the loss via a product of cumulative imbalances. Furthermore, we also show the combined effects of imbalance and weight product when the imbalance matrices are “well-conditioned" (in this case, homogeneous). More formally, we have: Theorem 3. For weights {Wl}Ll=1 with unimodality index l∗, we have λmin(T{Wl}Ll=1) ≥ ∏L−1 i=1 d̃(i). (20) Furthermore, if the weights have homogeneous imbalance, then λmin(T{Wl}Ll=1) ≥ √((∏L−1 i=1 d̃(i))2 + (Lσ2−2/Lmin (W))2), where W = ∏L l=1 Wl. (21) We make the following remarks: Connection to results for three-layer: For three-layer networks, we present an optimal bound λmin(TW1,W2,W3) ≥ (1/2)(∆(2)21 + ∆221) + ∆21∆23 + (1/2)(∆(2)23 + ∆223), given knowledge of the imbalance. Interestingly, when comparing it with our bound in (20), we have: Claim. When L = 3, for weights {W1,W2,W3} with unimodality index l∗, 1. If l∗ = 1, then (1/2)(∆(2)23 + ∆223) = ∏L−1 i=1 d̃(i) and (1/2)(∆(2)21 + ∆221) = ∆21∆23 = 0; 2. If l∗ = 2, then ∆21∆23 = ∏L−1 i=1 d̃(i) and (1/2)(∆(2)21 + ∆221) = (1/2)(∆(2)23 + ∆223) = 0; 3. If l∗ = 3, then (1/2)(∆(2)21 + ∆221) = ∏L−1 i=1 d̃(i) and (1/2)(∆(2)23 + ∆223) = ∆21∆23 = 0. We refer the readers to Appendix G for the proof. The claim shows that the bound in (20) is optimal for three-layer unimodal weights as it coincides with the one in Theorem 2. We conjecture that (20) is also optimal for multi-layer unimodal weights and leave the proof for future research. Interestingly, while the bound for three-layer models is complicated, the three terms (1/2)(∆(2)23 + ∆223), ∆21∆23, (1/2)(∆(2)21 + ∆221) seem to roughly capture how close the weights are to those with unimodality. This hints at a potential generalization of Theorem 2 to the deep case, where the bound should have L terms capturing how close the weights are to those with different unimodality indices (l∗ = 1, · · · , L). Effect of imbalance under unimodality: For simplicity, we assume unimodality index l∗ = L. The bound ∏L−1 i=1 d̃(i), as a product of cumulative imbalances, generally grows exponentially with the depth L. Prior work (Yun et al., 2020) studies the case Dl ⪰ 0, l = 1, · · · , L−2, and DL−1 ⪰ λIhL−1, in which case ∏L−1 i=1 d̃(i) ≥ λL−1.
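As a small worked example of Definition 3 and the first bound in Theorem 3, the sketch below computes the cumulative imbalances d̃(i) and the product ∏L−1 i=1 d̃(i) from the extreme eigenvalues of the imbalance matrices for a chosen unimodality index l∗. This is a minimal sketch with our own naming; the inputs λn(Dl) and λm(−Dl) are assumed to be given.

import numpy as np

def cumulative_imbalances(lam_n_D, lam_m_negD, l_star):
    # lam_n_D[l-1]    = lambda_n(D_l)   for l = 1, ..., L-1
    # lam_m_negD[l-1] = lambda_m(-D_l)  for l = 1, ..., L-1
    L_minus_1 = len(lam_n_D)
    d_tilde = np.empty(L_minus_1)
    for i in range(1, L_minus_1 + 1):
        if i >= l_star:   # sum of lambda_m(-D_l) over l = l_star, ..., i
            d_tilde[i - 1] = sum(lam_m_negD[l - 1] for l in range(l_star, i + 1))
        else:             # sum of lambda_n(D_l) over l = i, ..., l_star - 1
            d_tilde[i - 1] = sum(lam_n_D[l - 1] for l in range(i, l_star))
    return d_tilde

# a 4-layer example with unimodality index l_star = L = 4: every D_l is PSD,
# so only lambda_n(D_l) enters and the rate bound is the product of the d_tilde(i)
lam_n_D = [0.4, 0.2, 0.3]       # lambda_n(D_1), lambda_n(D_2), lambda_n(D_3)
lam_m_negD = [0.0, 0.0, 0.0]    # unused here since every i < l_star
d_tilde = cumulative_imbalances(lam_n_D, lam_m_negD, l_star=4)
print(d_tilde, np.prod(d_tilde))   # [0.9, 0.5, 0.3] and the bound 0.135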
Our bound ∏L−1 i=1 d̃(i) suggests the dependence on L could be super-exponential: When λn(Dl) ≥ ϵ > 0, for l = 1, · · · , L − 1, we have ∏L−1 i=1 d̃(i) =∏L−1 i=1 ∑L−1 l=i λn(Dl) ≥ ∏L−1 l=1 lϵ = ϵ L−1(L − 1)!, which grows faster in L than λL−1 for any λ. Therefore, for gradient flow dynamics, the depth L could greatly improve convergence in the presence of weight imbalance. One should note, however, that such analysis can not be directly translated into fast convergence guarantees of gradient descent algorithm as one requires careful tuning of the step size for the discrete weight updates to follow the trajectory of the continuous dynamics (Elkabetz & Cohen, 2021). With our bound in Theorem 3, we show convergence of deep linear models under various initialization: Convergence under unimodality: The following immediately comes from Theorem 3: Corollary 2. If the initialization weights {Wl(0)}Ll=1 are unimodal, then the continuous dynamics in (3) satisfy L(t)− L∗ ≤ exp (−αLγt) (L(0)− L∗),∀t ≥ 0, (22) where 1. If f satisfies A1 only, then αL = ∏L−1 i=1 d̃(i) ; 2. If f satisfies both A1, A2, and the weights additionally have homogeneous imbalance, then αL = √√√√(L−1∏ i=1 d̃(i) )2 + ( L ( [ σmin (W ∗)− √ K/µ∥W (0)−W ∗∥F ] + )2−2/L)2 , with W (0) = ∏L l=1 Wl(0) and W ∗ equal to the unique optimizer of f . Spectral initialization under l2 loss: Suppose f = 12∥Y −W∥ 2 F and W = ∏L l=1 Wl. We write the SVD of Y ∈ Rn×m as Y = P [ ΣY 0 0 0 ] [ Q 0 ] := P Σ̃Y Q̃, where P ∈ O(n), Q ∈ O(m) . Consider the spectral initialization W1(0) = RΣ1V T1 , Wl(0) = Vl−1ΣlV T l , l = 2, · · · , L − 1, WL(0) = VL−1ΣLQ̃, where Σl, l = 1, · · · , L are diagonal matrices of our choice and Vl ∈ Rn×hl , l = 1, · · · , L− 1 with V Tl Vl = Ihl . It can be shown that (See Appendix D.1 for details) W1(t) = RΣ1(t)V T 1 , Wl(t) = Vl−1Σl(t)V T l , l = 2, · · · , L− 1, WL(t) = VL−1ΣL(t)Q̃. (23) Moreover, only the first m diagonal entries of Σl are changing. Let σi,l, σi,y denote the i-th diagonal entry of Σl, and Σ̃Y respectively, then the dynamics of {σi,l}Ll=1 follow the gradient flow on Li({σi,l}Ll=1) = 12 ∣∣∣σi,y −∏Ll=1 σi,l∣∣∣2 for i = 1, · · · ,m, which is exactly a multi-layer model with scalar weights: f(w) = |σi,y − w|2/2, w = ∏L l=1 wl. Therefore, spectral initialization under l2 loss can be decomposed into m deep linear models with scalar weights, whose convergence is shown by Corollary 2. Note that networks with scalar weights are always unimodal, because the gradient flow dynamics remain the same under any reordering of the weights, and always have homogeneous imbalance, because the imbalances are scalars. The aforementioned analysis also applies to the linear regression loss f = 12∥Y −XW∥ 2 F , provided that {X,Y } is co-diagonalizable (Gidel et al., 2019), we refer the readers to Appendix D.1 for details. Diagonal linear networks: Consider f a function on Rn satisfying A1 and L = f(w1 ⊙ · · · ⊙ wL), where wl ∈ Rn and ⊙ denote the Hadamard (entrywise) product. The gradient flow on L can not be decomposed into several scalar dynamics as in the previous example, but we can show that (See Appendix D.2 for details) L̇ = −∥∇L∥2F ≤ −(min1≤i≤n λmin(T{wl,i}Ll=1))γ(L − L ∗) , where wl,i is the i-th entry of wl. Then Theorem 3 gives lower bound on each λmin(T{wl,i}Ll=1). Again, here the scalar weights {wl,i}Ll always have homogeneous imbalance. Comparison with prior work: Regarding unimodality, Yun et al. 
(2020) studies the initialization scheme Dl ⪰ 0, l = 1, · · · , L − 2 and DL−1 ⪰ λIhL−1 , which is a special case (l∗ = L) of ours. The homogeneous imbalance assumption was first introduced in Tarmoun et al. (2021) for two-layer networks, and we generalize it to the deep case. We compare, in Table 1, our bound to the existing work (Arora et al., 2018a; Yun et al., 2020) on convergence of deep linear networks outside the kernel regime. Note that Yun et al. (2020) only studies a special case of unimodal weights (l∗ = L with d̃(i) ≥ λ > 0,∀i). For homogeneous imbalance, Yun et al. (2020) studied spectral initialization and diagonal linear networks, whose initialization necessarily has homogeneous imbalance, but the result does not generalize to the case of matrix weights. Our results for homogeneous imbalance works also for deep networks with matrix weights, and our rate also shown the effect of the product Lσ 2−2/L min (W ), thus covers the balanced initialization (Arora et al., 2018a) as well. Remark 1. Note that the loss functions used in Gunasekar et al. (2018); Yun et al. (2020) are classification losses, such as the exponential loss, which do not satisfy A1. However, they do satisfy Polyak-Łojasiewicz-inequality-like condition ∥∇f(W )∥F ≥ γ(f(W )− f∗),∀W ∈ Rn×m, which allows us to show O ( 1 t ) convergence of the loss function. We refer readers to Section 4.3 for details. 4.3 CONVERGENCE RESULTS FOR CLASSIFICATION TASKS As we discussed in Remark 1, the loss functions used in classification tasks generally do not satisfy our assumption A1 for f . Suppose instead we have the following assumption for f . Assumption 2. f satisfies (A1’) ∥∇f(W )∥F ≥ γ(f(W )− f∗),∀W ∈ Rn×m. Then we can show O ( 1 t ) convergence of the loss function, as stated below. Theorem 4. Given initialization {Wl(0)}Ll=1 such that λmin(T{Wl(t)}Ll=1) ≥ α, ∀t ≥ 0 , and f satisfying (A1´), then L(t)− L∗ ≤ L(0)− L ∗ (L(0)− L∗)αγ2t+ 1 . (24) We refer readers to Appendix B for the proof. The lower bound on λmin(T{Wl(t)}Ll=1) can be obtained for different networks by our results in previous sections. The exponential loss satisfies A1´ (see Appendix D.2)and is studied in Gunasekar et al. (2017); Yun et al. (2020) for diagonal linear networks. 5 CONCLUSION AND DISCUSSION In this paper, we study the convergence of gradient flow on multi-layer linear models with a loss of the form f(W1W2 · · ·WL), where f satisfies the gradient dominance property. We show that with proper initialization, the loss converges to its global minimum exponentially. Moreover, we derive a lower bound on the convergence rate that depends on two trajectory-specific quantities: the imbalance matrices, which measure the difference between the weights of adjacent layers, and the least singular value of the weight product W = W1W2 · · ·WL. Our analysis applies to various types of multi-layer linear networks, and our assumptions on f are general enough to include loss functions used for both regression and classification tasks. Future directions include extending our results to analyzing gradient descent algorithms as well as to nonlinear networks. Convergence of gradient descent: Exponential convergence of the gradient flow often suggests a linear rate of convergence of gradient descent when the step size is sufficiently small, and Elkabetz & Cohen (2021) formally establishe such a relation. Indeed, Arora et al. (2018a) shows linear rate of convergence of gradient descent on multi-layer linear networks under balanced initialization. 
A natural future direction is to translate the convergence results under imbalanced initialization for gradient flow to the convergence of gradient descent with a small step size. Nonlinear networks: While the crucial ingredient of our analysis, invariance of weight imbalance, no longer holds in the presence of nonlinearities such as ReLU activations, Du et al. (2018) shows the diagonal entries of the imbalance are preserved, and Le & Jegelka (2022) shows a stronger version of such invariance given additional assumptions on the training trajectory. Therefore, the weight imbalance could still be used to understand the training of nonlinear networks. A CONTROLLING PRODUCT WITH MARGIN Most of our results regarding the lower bound on λminT{Wl}Ll=1 are given as a value that depends on 1) the imbalance of the weights; 2) the minimum singular value of the product W = ∏L l=1. The former is time-invariant, thus is determined at initialization. As we discussed in Section 3, we require the notion of margin to lower bound σmin(W (t)) for the entire training trajectory. The following Lemma that will be used in subsequent proofs. Lemma A.1. If f satisfies A2, then the gradient flow dynamics (3) satisfies σmin (W (t)) ≥ σmin (W ∗)− √ K µ ∥W (0)−W ∗∥F ,∀t ≥ 0 where W (t) = ∏L l=1 Wl(t) and W ∗ is the unique minimizer of f . Proof. From Polyak (1987), we know if f is µ-strongly convex, then it has unique minimizer W ∗ and f(W )− f∗ ≥ µ 2 ∥W −W ∗∥2F . Additionally, if f is K-smooth, then f(W )− f∗ ≤ K 2 ∥W −W ∗∥2F . This suggests that for any t ≥ 0, K 2 ∥W (t)−W ∗∥2F ≥ L(t)− L∗ ≥ µ 2 ∥W −W ∗∥2F . Therefore we have the following σmin (W (t)) = σmin (W (t)−W ∗ +W ∗) (Weyl’s inequality (Horn & Johnson, 2012, 7.3.P16)) ≥ σmin(W ∗)− ∥W (t)−W ∗∥2 ≥ σmin(W ∗)− ∥W (t)−W ∗∥F (f is µ-strongly convex) ≥ σmin(W ∗)− √ 2 µ (L(t)− L∗) (L(t) non-decreasing under (3)) ≥ σmin(W ∗)− √ 2 µ (L(0)− L∗) (f is K-smooth) ≥ σmin(W ∗)− √ K µ ∥W (0)−W ∗∥2F = σmin (W ∗)− √ K µ ∥W (0)−W ∗∥F . Lemma A.1 directly suggests σmin(W (t)) ≥ [ σmin (W ∗)− √ K µ ∥W (0)−W ∗∥F ] + := margin , and the margin is positive when the initial product W (0) is sufficiently close to the optimal W ∗. B CONVERGENCE ANALYSIS FOR CLASSIFICATION LOSSES In this section, we consider f that satisfies, instead of A1, the following Assumption 3. f satisfies (A1´) the Łojasiewicz inequality-like condition ∥∇f(W )∥F ≥ γ(f(W )− f∗),∀W ∈ Rn×m . Theorem 4 (Restated). Given initialization {Wl(0)}Ll=1 such that λminT{Wl(t)}Ll=1 ≥ α, ∀t ≥ 0 , and f satisfying (A1´), then L(t)− L∗ ≤ L(0)− L ∗ (L(0)− L∗)αγ2t+ 1 . Proof. When f satisfies (A1´), then (5) becomes L̇ = − 〈 T{Wl}Ll=1∇f(W ),∇f(W ) 〉 F ≤ −λmin ( T{Wl}Ll=1 ) ∥∇f(W )∥2F (A1′) ≤ −λmin ( T{Wl}Ll=1 ) γ2(f(W )− f∗)2 = −λmin ( T{Wl}Ll=1 ) γ2(L − L∗)2 . This shows − 1 (L − L∗)2 d dt (L − L∗) ≥ λmin ( T{Wl}Ll=1 ) γ2 ≥ αγ2 . Take integral ∫ dt on both sides, we have for any t ≥ 0, 1 L − L∗ ∣∣∣∣t 0 ≥ αγ2t , which is L(t)− L∗ ≤ L(0)− L ∗ (L(0)− L∗)αγ2t+ 1 . Following similar argument as in Yun et al. (2020), we can show that exponential loss on linearly separable data satisfies A1´. Claim. Let f(w) = ∑N i=1 exp ( −yi · (xTi w) ) , if there exists z ∈ Sn−1 and γ > 0 such that yi(x T i z) ≥ γ , ∀i = 1, · · · , N , then ∥∇f(w)∥F ≥ γf(w) ,∀w ∈ Rn . Proof. Using the linear separability, we have ∥∇f(w)∥2F = ∥∥∥∥∥ N∑ i=1 exp ( −yi · (xTi w) ) yixi ∥∥∥∥∥ 2 F (Cauchy-Schwarz inequality) ≥ ∣∣∣∣∣ 〈 z, N∑ i=1 exp ( −yi · (xTi w) ) yixi 〉∣∣∣∣∣ 2 ≥ ∣∣∣∣∣ N∑ i=1 exp ( −yi · (xTi w) ) γ ∣∣∣∣∣ 2 = |f(w)γ|2 , as desired. 
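The claim above is easy to check numerically: for data that is linearly separable with margin γ along a unit vector z, the gradient norm of the exponential loss dominates γ times the loss at every w. Below is a minimal NumPy sketch of that check; the synthetic data, names, and numerical tolerance are our own choices.

import numpy as np

rng = np.random.default_rng(0)
n, N = 5, 50
z = rng.normal(size=n); z /= np.linalg.norm(z)     # separating unit direction
X = rng.normal(size=(N, n))
y = np.where(X @ z >= 0, 1.0, -1.0)                # labels separable by z
gamma = np.min(y * (X @ z))                        # margin gamma > 0 (almost surely)

def f(w):                                          # exponential loss
    return np.sum(np.exp(-y * (X @ w)))

def grad_f(w):
    return -((np.exp(-y * (X @ w)) * y) @ X)

for _ in range(10):                                # check ||grad f(w)|| >= gamma * f(w)
    w = rng.normal(size=n)
    assert np.linalg.norm(grad_f(w)) >= gamma * f(w) - 1e-8
print("gradient-norm bound holds, margin gamma =", gamma)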
Therefore, our convergence results applies to classification tasks with exponential loss. C PROOFS IN SECTION 2 First we prove the expression for L̇ in Lemma 1 Lemma 1 (Restated). Under continuous dynamics in (3), we have L̇ = −∥∇L ( {Wl}Ll=1 ) ∥2F = − 〈 T{Wl}Ll=1∇f(W ),∇f(W ) 〉 F , where W = ∏L l=1 Wi, and T{Wl}Ll=1 is a positive semi-definite linear operator on R n×m with T{Wl}Ll=1E = L∑ l=1 ( l−1∏ i=1 Wi )( l−1∏ i=1 Wi )T E ( L+1∏ i=l+1 Wi )T ( L+1∏ i=l+1 Wi ) ,W0 = In,WL+1 = Im . Proof. The gradient flow dynamics (3) satisfies d dt Wl = − ∂ ∂Wl L ( {Wl}Ll=1 ) = − ( l−1∏ i=1 Wi )T ∇f(W ) ( L+1∏ i=l+1 Wi )T , (C.1) where W = ∏L l=1 Wi and W0 = In,WL+1 = Im. Therefore L̇ = L∑ l=1 〈 ∂ ∂Wl L ( {Wl}Ll=1 ) , d dt Wl 〉 F = − L∑ l=1 ∥∥∥∥ ∂∂WlL ({Wl}Ll=1) ∥∥∥∥2 F = − L∑ l=1 〈( l−1∏ i=1 Wi )T ∇f(W ) ( L+1∏ i=l+1 Wi )T , ( l−1∏ i=1 Wi )T ∇f(W ) ( L+1∏ i=l+1 Wi )T〉 F = − L∑ l=1 〈( l−1∏ i=1 Wi )( l−1∏ i=1 Wi )T ∇f(W ) ( L+1∏ i=l+1 Wi )T ( L+1∏ i=l+1 Wi ) ,∇f(W ) 〉 F = − 〈 L∑ l=1 ( l−1∏ i=1 Wi )( l−1∏ i=1 Wi )T ∇f(W ) ( L+1∏ i=l+1 Wi )T ( L+1∏ i=l+1 Wi ) ,∇f(W ) 〉 F = − 〈 T{Wl}Ll=1∇f(W ),∇f(W ) 〉 F . Next, we prove that the imbalance matrices are time-invariant Lemma 2 (Restated). Under continuous dynamics (3), we have Ḋl(t) = 0,∀t ≥ 0, l = 1, · · · , L−1. Proof. Each imbalance matrix is defined as Dl = W T l Wl −Wl+1WTl+1, l = 1, · · · , L− 1 We only need to check that ddt ( WTl Wl ) and ddt ( Wl+1W T l+1 ) are identical. From the following derivation, for l = 1, · · · , L− 1, d dt ( WTl Wl ) = ẆTl Wl +W T l Ẇl = − ( L+1∏ i=l+1 Wi ) ∇T f(W ) ( l−1∏ i=1 Wi ) Wl −WTl ( l−1∏ i=1 Wi )T ∇f(W ) ( L+1∏ i=l+1 Wi )T = − ( L+1∏ i=l+1 Wi ) ∇T f(W ) ( l∏ i=1 Wi ) − ( l∏ i=1 Wi )T ∇f(W ) ( L+1∏ i=l+1 Wi )T , d dt ( Wl+1W T l+1 ) = Ẇl+1W T l+1 +Wl+1Ẇ T l+1 = − ( l∏ i=1 Wi )T ∇f(W ) ( L+1∏ i=l+2 Wi )T WTl+1 −Wl+1 ( L+1∏ i=l+2 Wi ) ∇T f(W ) ( l∏ i=1 Wi ) = − ( l∏ i=1 Wi )T ∇f(W ) ( L+1∏ i=l+1 Wi )T − ( L+1∏ i=l+1 Wi ) ∇T f(W ) ( l∏ i=1 Wi ) we know ddt ( WTl Wl ) = ddt ( Wl+1W T l+1 ) , therefore Ḋl(t) = 0, l = 1, · · · , L− 1 D LINEAR MODELS RELATED TO SCALAR DYNAMICS D.1 SPECTRAL INITIALIZATION UNDER l2 LOSS The spectral initialization Saxe et al. (2014); Gidel et al. (2019); Tarmoun et al. (2021) considers the following: Suppose f = 12∥Y −XW∥ 2 F and we have overparametrized model W = ∏L l=1 Wl. Additionally, we assume Y ∈ RN×m, X ∈ RN×n (n ≥ m) are co-diagonalizable, i.e. there exist P ∈ RN×n with PTP = In and Q ∈ O(m), R ∈ O(n) such that we can write the SVDs of Y,X as Y = P [ ΣY 0 0 0 ] [ Q 0 ] := P Σ̃Y Q̃ and X = PΣXRT . Remark 2. In Section 4, we discussed the case f = 12∥Y −W∥ 2 F , which is essentially considering the aforementioned setting with N = n and X = In. Given any set of weights {Wl}Ll=1 such that W1 = RΣ1V T 1 , Wl = Vl−1ΣlV T l , l = 2, · · · , L− 1, WL = VL−1ΣLQ̃ , where Σl, l = 1, · · · , L are diagonal matrices and Vl ∈ Rn×hl , l = 1, · · · , L− 1 with V Tl Vl = Ihl . The gradient flow dynamics requires Ẇ1 = − ∂L ∂W1 = −XT (Y −XW )WTL WTL−1 · · ·WT2 = −RΣXPT · (P Σ̃Y Q̃− PΣXRT ·R L∏ l=1 ΣLQ̃) · Q̃TΣLVL−1 · VL−1ΣL−1V TL−2 · · ·V2Σ2V T1 = −R ( ΣX ( ΣY − ΣX L∏ l=1 Σl ) Q̃Q̃T L∏ l=2 Σl ) V T1 = −R ( ΣX ( ΣY − ΣX L∏ l=1 Σl )[ Im 0 0 0 ] L∏ l=2 Σl ) V T1 , which shows that the singular space R, V1 for W1 do not change under the gradient flow, and the singular values σi,1of W1 satisfies σ̇i,1 = ( σi,y − σi,x L∏ l=1 σi,l ) σi,x L∏ l=2 σi,l , i = 1, · · · ,m , and σ̇i,1 = 0, i = m+ 1, · · · , n. 
Similarly, we can show that Ẇl = Vl−1 ΣX (ΣY − ΣX L∏ i=1 Σi )[ Im 0 0 0 ]∏ i ̸=l Σi V Tl , l = 2, · · · , L− 1 , ẆL = VL−1 ΣX (ΣY − ΣX L∏ i=1 Σi )[ Im 0 0 0 ]∏ i̸=L Σi Q̃ . Overall, this suggests that the singular space of {Wl}Ll=1 do not change under the gradient flow, and their singular values satisfies, for i = 1, · · · ,m, σ̇i,l = ( σi,y − σi,x L∏ k=1 σi,k ) σi,x L∏ k ̸=l σi,k , l = 1, · · · , L . Each dynamic equation is equivalent to the one from gradient flow on Li({σi,l}Ll=1) = 1 2 ∣∣∣σi,y − σi,x∏Ll=1 σi,l∣∣∣2 . Therefore, under spectral initialization, the dynamics of the weights are decoupled into at most m dynamics discussed in Section 4.2. D.2 DIAGONAL LINEAR NETWORKS The loss function of diagonal linear networks Gunasekar et al. (2017); Yun et al. (2020) is of the form f(w1 ⊙ · · · ⊙ wL), we write L({wl}Ll=1) = f(w1 ⊙ · · · ⊙ wL) = f(w(1), · · · , w(n)) = f ( L∏ l=1 wl,1 , · · · , L∏ l=1 wl,n ) , i.e. f takes n variables w(1), · · · , w(n) and each variable w(i) is overparametrized into ∏L l=1 wl,i. Then we can show that L̇ = −∥∇{wl}Ll=1L∥ 2 F = n∑ i=1 L∑ l=1 ∣∣∣∣ ∂L∂wl,i ∣∣∣∣2 = n∑ i=1 L∑ l=1 ∣∣∣∣ ∂f∂w(i) ∣∣∣∣2 ∣∣∣∣∂w(i)∂wl,i ∣∣∣∣2 = n∑ i=1 ∣∣∣∣ ∂f∂w(i) ∣∣∣∣2 L∑ l=1 ∣∣∣∣∂w(i)∂wl,i ∣∣∣∣2 = n∑ i=1 ∣∣∣∣ ∂f∂w(i) ∣∣∣∣2 τ{wl,i}Ll=1 ≤ − ( min 1≤i≤n τ{wl,i}Ll=1 ) n∑ i=1 ∣∣∣∣ ∂f∂w(i) ∣∣∣∣2 (f satisfies A1) ≤ − ( min 1≤i≤n τ{wl,i}Ll=1 ) γ(f − f∗) = − ( min 1≤i≤n τ{wl,i}Ll=1 ) γ(L − L∗) . Moreover, the imbalances {d(i)l := w2l,i − w2l+1,i} L−1 l=1 are time-invariant for each i = 1, · · · , n by Lemma 2. Therefore, we can lower bound each τ{wl,i}Ll=1 using the imbalance {d (i) l } L−1 l=1 as in Proposition 3, from which one obtain the exponential convergence of L. E PROOF FOR TWO-LAYER MODEL Using Lemma 3, we can prove Theorem 1 Theorem 1 (Restated). Let D be the imbalance matrix for L = 2. The continuous dynamics in (3) satisfy L(t)− L∗ ≤ exp (−α2γt) (L(0)− L∗),∀t ≥ 0 , (E.2) where 1. If f satisfies only A1, then α2 = ∆ ; 2. If f satisfies both A1 and A2, then α2 = −∆+ + √ (∆+ +∆)2 + 4 ( [ σn (W ∗)− √ K/µ∥W (0)−W ∗∥F ] + )2 −∆− + √ (∆− +∆)2 + 4 ( [ σm (W ∗)− √ K/µ∥W (0)−W ∗∥F ] + )2 , (E.3) with W (0) = ∏L l=1 Wl(0) and W ∗ equal to the unique optimizer of f . Proof. As shown in (5) in Section 2. We have d dt (L(t)− L∗) ≤ −λminT{W1(t),W2(t)}γ(L(t)− L ∗) . Consider any {W1(t),W2(t)} on the trajectory, we have, by Lemma 3, λminT{W1(t),W2(t)} Lemma 3 ≥ 1 2 ( −∆+ + √ (∆+ +∆)2 + 4σ2n (W (t)) −∆− + √ (∆− +∆)2 + 4σ2m (W (t)) ) ≥ 1 2 ( −∆+ + √ (∆+ +∆)2 −∆− + √ (∆− +∆)2 ) = ∆ := α2 . When f also satisfies A2: we need to prove σn (W (t)) ≥ [ σn (W ∗)− √ K/µ∥W (0)−W ∗∥F ] + , (E.4) σm (W (t)) ≥ [ σm (W ∗)− √ K/µ∥W (0)−W ∗∥F ] + . (E.5) When n = m, both inequalities are equivalent to σmin(W (t)) ≥ [ σmin(W ∗)− √ K/µ∥W (0)−W ∗∥F ] + , which is true by Lemma A.1. When n ̸= m, one of the two inequalities become trivial. For example, if n > m, then (E.4) is trivially 0 ≥ 0, and (E.5) is equivalent to σmin(W (t)) ≥ [ σmin(W ∗)− √ K/µ∥W (0)−W ∗∥F ] + , which is true by Lemma A.1. Overall, we have λminT{W1(t),W2(t)} Lemma 3 ≥ 1 2 ( −∆+ + √ (∆+ +∆)2 + 4σ2n (W (t)) −∆− + √ (∆− +∆)2 + 4σ2m (W (t)) ) ≥ 1 2 −∆+ + √ (∆+ +∆)2 + 4 ([ σn (W ∗)− √ K/µ∥W (0)−W ∗∥F ] + )2 −∆− + √ (∆− +∆)2 + 4 ([ σm (W ∗)− √ K/µ∥W (0)−W ∗∥F ] + )2 := α2 . Either case, we have ddt (L(t)− L ∗) ≤ −α2γ(L(t)− L∗), and by Grönwall’s inequality, we have L(t)− L∗ ≤ exp(−α2γt)(L(0)− L∗) . F PROOFS FOR THREE-LAYER MODEL In Section F.1, we discuss the proof idea for Theorem 2, then present the proof afterwards. 
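As a quick numerical companion to the two-layer rate just established (Theorem 1, case 1), one can discretize the gradient flow with a small step size and compare the loss against the predicted envelope exp(−∆γt)(L(0) − L∗). The sketch below uses f(W) = 0.5∥W − Y∥2F, for which f∗ = 0 and the PL constant is γ = 2; the initialization scale, step size, and names are our choices, and forward-Euler iterates only approximate the flow, so the comparison is indicative rather than exact.

import numpy as np

rng = np.random.default_rng(1)
n = m = 3
h = 4
Y = rng.normal(size=(n, m))
W1 = 2.0 * rng.normal(size=(n, h))      # larger first layer and smaller second layer,
W2 = 0.1 * rng.normal(size=(h, m))      # so the imbalance D is bounded away from zero

D = W1.T @ W1 - W2 @ W2.T               # imbalance matrix, invariant along the flow
lamD = np.sort(np.linalg.eigvalsh(D))[::-1]
lam_negD = np.sort(np.linalg.eigvalsh(-D))[::-1]
Delta = max(lamD[n - 1], 0.0) + max(lam_negD[m - 1], 0.0)   # [lam_n(D)]_+ + [lam_m(-D)]_+

gamma, dt, steps = 2.0, 1e-3, 4000      # gamma = 2 for f(W) = 0.5 * ||W - Y||_F^2
loss0 = 0.5 * np.linalg.norm(W1 @ W2 - Y) ** 2
for _ in range(steps):
    R = W1 @ W2 - Y                     # gradient of f at the product
    G1, G2 = R @ W2.T, W1.T @ R         # gradients w.r.t. W1 and W2
    W1, W2 = W1 - dt * G1, W2 - dt * G2  # simultaneous forward-Euler step
lossT = 0.5 * np.linalg.norm(W1 @ W2 - Y) ** 2
envelope = np.exp(-Delta * gamma * dt * steps) * loss0
print(f"loss at time T: {lossT:.3e}, predicted envelope exp(-Delta*gamma*T)*loss(0): {envelope:.3e}")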
In Section G, we show a simplified bound when the weights can be ordered w.r.t. positive-semidefiniteness. F.1 PROOF IDEA We first discuss the proof idea behind Theorem 2, then provide the complete proof. Consider the case when n = m = 1, we use the following notations for the weights {wT1 ,W2, w3} ∈ R1×h1 × Rh1×h2 × Rh2×1. The quantity we need to lower bound is λminT{wT1 ,W2,w3} = w T 1 W2W T 2 w1 + w T 1 w1 · wT3 w3 + wT3 WT2 W2w3 = ∥WT2 w1∥2 + ∥w1∥2∥w3∥2 + ∥W2w3∥2 , where our linear operator T{wT1 ,W2,w3} reduces to a scalar. The remaining thing to do is to find min wT1 ,W2,w3 ∥WT2 w1∥2 + ∥w1∥2∥w3∥2 + ∥W2w3∥2 (F.6) s.t. W2W T 2 − w1wT1 = D21 WT2 W2 − w3wT3 = D23 i.e., we try to find the best lower bound on λminT{wT1 ,W2,w3} given the fact that the weights have to satisfies the imbalance constraints from D21, D23, and λminT{wT1 ,W2,w3} is related to the norm of some weights ∥w1∥, ∥w3∥ and the “alignment” between weights ∥WT2 w1∥, ∥W2w3∥. The general idea of the proof is to lower bound each term ∥WT2 w1∥2, ∥w1∥2, ∥w3∥2, ∥W2w3∥2 individually given the imbalance constraints, then show the existence of some {wT1 ,W2, w3} that attains the lower bound simultaneously. The following discussion is most for lower bounding ∥w1∥, ∥WT2 w1∥ but the same argument holds for lower bounding other quantities. Understanding what can be chosen to be the spectrum of W2WT2 (W T 2 W2) is the key to derive an lower bound, and the imbalance constraints implicitly limit such choices. To see this, notice that W2WT2 − w1wT1 = D21 suggests an eigenvalue interlacing relation (Horn & Johnson, 2012, Corollary 4.39) between W2WT2 and D21, i.e. λh1(D21) ≤ λh1(W2WT2 ) ≤ λh1−1(D21) ≤ · · · ≤ λ2(W2WT2 ) ≤ λ1(D21) ≤ λ1(W2WT2 ) . Therefore, any choice of {λi(W2WT2 )} h1 i=1 must satisfy the interlacing relation with {λi(D21)} h1 i=1. Similarly, {λi(WT2 W2)} h2 i=1 must satisfy the interlacing relation with {λi(D23)} h2 i=1. Moreover, {λi(W2WT2 )} h1 i=1 and {λi(WT2 W2)} h2 i=1 agree on non-zero eigenvalues. In short, an appropriate choice of the spectrum of W2WT2 (W T 2 W2) needs to respect the interlacing relation with the eigenvalues of D21 and D23. The following matrix is defined D̄h1 := diag{max{λi(D21), λi(D23), 0}} h1 i=1 to be the “minimum” choice of the spectrum of W2WT2 (W T 2 W2) in the sense that any valid choice of {λi(W2WT2 )} h1 i=1 must satisfies λi(W2W T 2 ) ≥ λi(D̄h1) ≥ λi(D21) , i = 1, · · · , h1 . That is, the spectrum of D̄h1 “lies between” the one of W2W T 2 and of D21. Now we check the imbalance constraint again W2WT2 − w1wT1 = D21, it shows that: using a rank-one update w1wT1 , one obtain the spectrum of D21 starting from the spectrum of W2WT2 , and more importantly, we require the norm ∥w1∥2 to be (taking the trace on the imbalance equation) tr(W2W T 2 )− ∥w1∥2 = tr(D21) ⇒ ∥w1∥2 = tr(W2WT2 )− tr(D21) . Now since D̄h1 “lies inbetween”, we have ∥w1∥2 = tr(W2WT2 )− tr(D21) = (changes from λi(W2WT2 ) to λi(D21)) = (changes from λi(W2WT2 ) to λi(D̄h1)) + (changes from λi(D̄h1) to λi(D21)) ≥ (changes from λi(D̄h1) to λi(D21)) = tr(D̄h1)− tr(D21) , which is a lower bound on ∥w1∥2. It is exactly the ∆21 in Theorem 2 (It takes more complicated form when n > 1). A lower bound on ∥WT2 w1∥2 requires carefully exam the changes from the spectrum of D̄h1 to the one of D21. If λh1(D21) < 0, then “changes from λi(D̄) to λi(D21)” has two parts 1. (changes from λi(D̄) to [λi(D21)]+) through the part where w1 is “aligned" with WT2 , 2. (changes from 0 to λh1(D21)) through the part where w1 is “orthogonal" to W T 2 . 
Only the former contributes to ∥WT2 w1∥2 hence we need the expression ∆ (2) 21 +∆ 2 21, which excludes the latter part. Using similar argument we can lower bound ∥w3∥2, ∥W2w3∥2. Lastly, the existence of {wT1 ,W2, w3} that attains the lower bound is from the fact that D̄h1 (D̄h2 ) is a valid choice for the spectrum of W2WT2 (W T 2 W2). The complete proof of the Theorem 2 follows the same idea but with a generalized notion of eigenvalue interlacing, and some related novel eigenvalue bounds. F.2 PROOF OF THEOREM 2 Theorem 2 is the direct consequence of the following two results. Lemma F.1. Given any set of weights {W1,W2,W3} ∈ Rn×h1 × Rh1×h2 × Rh2×m, we have λminT{W1,W2,W3} ≥ λn(W1W2W T 2 W T 1 ) + λn(W1W T 1 )λm(W T 3 W3) + λm(W T 3 W T 2 W2W3) . (Note that λminT{W1,W2,W3} does not have a closed-form expression. One can only work with its lower bound λn(W1W2WT2 W T 1 ) + λn(W1W T 1 )λm(W T 3 W3) + λm(W T 3 W T 2 W2W3).) Theorem F.2. Given imbalance matrices pair (D21, D23) ∈ Rh1×h1 × Rh2×h2 , then the optimal value of min W1,W2,W3 2 ( λn(W1W2W T 2 W T 1 ) + λn(W1W T 1 )λm(W T 3 W3) + λm(W T 3 W T 2 W2W3) ) s.t. W2W T 2 −WT1 W1 = D21 WT2 W2 −W3WT3 = D23 is ∆∗(D21, D23) = ∆ (2) 21 +∆ 2 21 + 2∆21∆23 +∆ (2) 23 +∆ 2 23 . Combining those two results gets λminT{W1,W2,W3} ≥ ∆∗(D21, D23)/2, as stated in Theorem 2. The Lemma F.1 is intuitive and easy to prove: Proof of Lemma F.1. Notice that T{W1,W2,W3} is the summation of three positive semi-definite linear operators on Rn×m, i.e. T{W1,W2,W3} = T12 + T13 + T23 , where T12E = W1W2WT2 WT1 E, T13E = W1WT1 EWT3 W3, T23E = EWT3 WT2 W2W3 , and λminT12 = λn(W1W2WT2 WT1 ), λminT13 = λn(W1WT1 )λm(WT3 W3), λminT23 = λm(W T 3 W T 2 W2W3). Therefore, let Emin with ∥Emin∥F = 1 be the eigenmatrix associated with λminT{W1,W2,W3}, we have λminT{W1,W2,W3} = 〈 T{W1,W2,W3}, Emin 〉 F = ⟨T12, Emin⟩F + ⟨T13, Emin⟩F + ⟨T23, Emin⟩F ≥ λminT12 + λminT13 + λminT23 . The rest of this section is dedicated to prove Theorem F.2 We will first state a few Lemmas that will be used in the proof, then show the proof for Theorem F.2, and present the long proofs for the auxiliary Lemmas in the end. F.3 AUXILIARY LEMMAS The main ingredient used in proving Theorem F.2 is the notion of r-interlacing relation between the spectrum of two matrices, which is a natural generalization of the interlacing relation as seen in classical Cauchy Interlacing Theorem (Horn & Johnson, 2012, Theorem 4.3.17). Definition 4. Given real symmetric matrices A,B of order n, write A ⪰r B, if λi+r(A) ≤ λi(B) ≤ λi(A) ,∀i where λj(·) = +∞, j ≤ 0 and λj(·) = −∞
1. What is the focus of the paper regarding gradient flow and multi-layer linear models?
2. What are the strengths of the paper, particularly in its convergence results and applicability?
3. What are the limitations of the paper, especially regarding its scope and novelty compared to prior works?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper
This paper proves the exponential convergence of gradient flow on multi-layer linear models in which the loss function f satisfies the gradient dominance property. It also provides a lower bound on the convergence rate that depends on the imbalance matrices and the least singular value of the weight product.
Strengths And Weaknesses
Strength: The paper is well written and clearly motivated. The obtained convergence result is quite general and can be applied to a wide range of imbalanced initializations.
Weakness: Limited to linear networks.
Clarity, Quality, Novelty And Reproducibility
Clarity: High
Quality: High
Novelty: Fair. It seems that the work is an extension of Min et al. 2022 to the multi-layer setting.
ICLR
Title On the Convergence of Gradient Flow on Multi-layer Linear Models Abstract In this paper, we analyze the convergence of gradient flow on a multi-layer linear model with a loss function of the form f(W1W2 · · ·WL). We show that when f satisfies the gradient dominance property, proper weight initialization leads to exponential convergence of the gradient flow to a global minimum of the loss. Moreover, the convergence rate depends on two trajectory-specific quantities that are controlled by the weight initialization: the imbalance matrices, which measure the difference between the weights of adjacent layers, and the least singular value of the weight product W = W1W2 · · ·WL. Our analysis provides improved rate bounds for several multi-layer network models studied in the literature, leading to novel characterizations of the effect of weight imbalance on the rate of convergence. Our results apply to most regression losses and extend to classification ones. 1 INTRODUCTION The mysterious ability of gradient-based optimization algorithms to solve the non-convex neural network training problem is one of the many unexplained puzzles behind the success of deep learning in various applications (Krizhevsky et al., 2012; Hinton et al., 2012; Silver et al., 2016). A vast body of work has tried to theoretically understand this phenomenon by analyzing either the loss landscape or the dynamics of the training parameters. The landscape-based analysis is motivated by the empirical observation that deep neural networks used in practice often have a benign landscape (Li et al., 2018a), which can facilitate convergence. Existing theoretical analysis (Lee et al., 2016; Sun et al., 2015; Jin et al., 2017) shows that gradient descent converges when the loss function satisfies the following properties: 1) all of its local minimums are global minima; and 2) every saddle point has a Hessian with at least one strict negative eigenvalue. Prior work suggests that the matrix factorization model (Ge et al., 2017), shallow networks (Kawaguchi, 2016), and certain positively homogeneous networks (Haeffele & Vidal, 2015; 2017) have such a landscape property, but unfortunately condition 2) does not hold for networks with multiple hidden layers (Kawaguchi, 2016). Moreover, the landscape-based analysis generally fails to provide a good characterization of the convergence rate, except for a local rate around the equilibrium (Lee et al., 2016; Ge et al., 2017). In fact, during early stages of training, gradient descent could take exponential time to escape some saddle points if not initialized properly (Du et al., 2017). The trajectory-based analyses study the training dynamics of the weights given a specific initialization. For example, the case of small initialization has been studied for various models (Arora et al., 2019a; Gidel et al., 2019; Li et al., 2018b; Stöger & Soltanolkotabi, 2021; Li et al., 2021b;a). Under this type of initialization, the trained model is implicitly biased towards low-rank (Arora et al., 2019a; Gidel et al., 2019; Li et al., 2018b; Stöger & Soltanolkotabi, 2021; Li et al., 2021b), and sparse (Li et al., 2021a) models. While the analysis for small initialization gives rich insights on the generalization of neural networks, the number of iterations required for gradient descent to find a good model often increases as the initialization scale decreases. 
Such dependence proves to be logarithmic on the scale for symmetric matrix factorization model (Li et al., 2018b; Stöger & Soltanolkotabi, 2021; Li et al., 2021b), but for deep networks, existing analysis at best shows a polynomial dependency (Li et al., 2021a). Therefore, the analysis for small initialization, while insightful in understanding the implicit bias of neural network training, is not suitable for understanding the training efficiency in practice since small initialization is rarely implemented due to its slow convergence. Another line of work studies the initialization in the kernel regime, where a randomly initialized sufficiently wide neural network can be well approximated by its linearization at initialization Jacot et al. (2018); Chizat et al. (2019); Arora et al. (2019b). In this regime, gradient descent enjoys a linear rate of convergence toward the global minimum (Du et al., 2019; Allen-Zhu et al., 2019; Du & Hu, 2019). However, the width requirement in the analysis is often unrealistic, and empirical evidence has shown that practical neural networks generally do not operate in the kernel regime (Chizat et al., 2019). The study of non-small, non-kernel-regime initialization has been mostly centered around linear models. For matrix factorization models, spectral initialization (Saxe et al., 2014; Gidel et al., 2019; Tarmoun et al., 2021) allows for decoupling the training dynamics into several scalar dynamics. For non-spectral initialization, the notion of weight imbalance, a quantity that depends on the differences between the weights matrices of adjacent layers, is crucial in most analyses. When the initialization is balanced, i.e., when the imbalance matrices are zero, the convergence relies on the initial end-to-end linear model being close to its optimum (Arora et al., 2018a;b). It has been shown that having a non-zero imbalance potentially improves the convergence rate (Tarmoun et al., 2021; Min et al., 2021), but the analysis only works for two-layer models. For deep linear networks, the effect of weight imbalance on the convergence has been only studied in the case when all imbalance matrices are positive semi-definite (Yun et al., 2020), which is often unrealistic in practice. Lastly, most of the aforementioned analyses study the l2 loss for regression tasks, and it remains unknown whether they can be generalized to other types of losses commonly used in classification tasks. Our contribution: This paper aims to provide a general framework for analyzing the convergence of gradient flow on multi-layer linear models. We consider the gradient flow on a loss function of the form L = f(W1W2 · · ·WL), where f satisfies the gradient dominance property. We show that with proper initialization, the loss converges to its global minimum exponentially. More specifically: • Our analysis shows that the convergence rate depends on two trajectory-specific quantities: 1) the imbalance matrices, which measure the difference between the weights of adjacent layers, and 2) a lower bound on the least singular values of weight product W = W1W2 · · ·WL. The former is time-invariant under gradient flow, thus it is fully determined by the initialization, while the latter can be controlled by initializing the product sufficiently close to its optimum. 
• Our analysis covers most initialization schemes used in prior work (Saxe et al., 2014; Tarmoun et al., 2021; Arora et al., 2018a;b; Min et al., 2021; Yun et al., 2020) for both multi-layer linear networks and diagonal linear networks while providing convergence guarantees for a wider range of initializations. Furthermore, our rate bounds characterize the general effect of weight imbalance on convergence. • Our convergence results directly apply to loss functions commonly used in regression tasks, and can be extended to loss functions used in classification tasks with an alternative assumption on f , under which we show O(1/t) convergence of the loss. Notations: For an n×m matrix A, we let AT denote the matrix transpose of A, σi(A) denote its i-th singular value in decreasing order and we conveniently write σmin(A) = σmin{n,m}(A) and let σk(A) = 0 if k > min{n,m}. We also let ∥A∥2 = σ1(A) and ∥A∥F = √ tr(ATA). For a square matrix of size n, we let tr(A) denote its trace and we let diag{ai}ni=1 be a diagonal matrix with ai specifying its i-th diagonal entry. For a Hermitian matrix A of size n, we let λi(A) denote its i-th eigenvalue and we write A ⪰ 0 (A ⪯ 0) when A is positive semi-definite (negative semi-definite). For two square matrices A,B of the same size, we let ⟨A,B⟩F = tr(ATB). For a scalar-valued or matrix-valued function of time, F (t), we write Ḟ , Ḟ (t) or ddtF (t) for its time derivative. Additionally, we use In to denote the identity matrix of order n and O(n) to denote the set of n× n orthogonal matrices. Lastly, we use [·]+ := max{·, 0}. 2 OVERVIEW OF THE ANALYSIS This paper considers the problem of finding a matrix W that solves min W∈Rn×m f(W ) , (1) with the following assumption on f . Assumption 1. The function f is differentiable and satisfies1: A1: f satisfies the Polyak-Łojasiewicz (PL) condition, i.e. ∥∇f(W )∥2F ≥ γ(f(W ) − f∗),∀W . This condition is also known as gradient dominance. A2: f is K-smooth, i.e., ∥∇f(W ) − ∇f(V )∥F ≤ K∥W − V ∥F ,∀W,V , and f is µ-strongly convex, i.e., f(W ) ≥ f(V ) + ⟨∇f(V ),W − V ⟩F + µ 2 ∥W − V ∥ 2 F ,∀W,V . While classic work (Polyak, 1987) has shown that the gradient descent update on W with proper step size ensures a linear rate of convergence of f(W ) towards its optimal value f∗, the recent surge of research on the convergence and implicit bias of gradient-based methods for deep neural networks has led to a great amount of work on the overparametrized problem: min {Wl}Ll=1 L ( {Wl}Ll=1 ) = f(W1W2 · · ·WL) , (2) where L ≥ 2, Wl ∈ Rhl−1×hl , i = 1, · · · , L, with h0 = n, hL = m and min{h1, · · · , hL−1} ≥ min{n,m}. This assumption on min{h1, · · · , hL−1} is necessary to ensure that the optimal value of (2) is also f∗, and in this case, the product ∏L l=1 Wl can represent an overparametrized linear network/model (Arora et al., 2018b; Tarmoun et al., 2021; Min et al., 2021) 2.1 CONVERGENCE VIA GRADIENT DOMINANCE For problem (2), consider the gradient flow dynamics on the loss function L ( {Wl}Ll=1 ) : Ẇl = − ∂ ∂Wl L ( {Wl}Ll=1 ) , l = 1, · · · , L . (3) The gradient flow dynamics can be viewed as gradient descent with “infinitesimal” step size and convergence results for gradient flow can be used to understand the corresponding gradient descent algorithm with sufficiently small step size (Elkabetz & Cohen, 2021). We have the following result regarding the time-derivative of L under gradient flow (3). Lemma 1. 
Under continuous dynamics in (3), we have L̇ = −∥∇L ( {Wl}Ll=1 ) ∥2F = − 〈 T{Wl}Ll=1∇f(W ),∇f(W ) 〉 F , (4) where W = ∏L l=1 Wl, and T{Wl}Ll=1 is the following positive semi-definite linear operator on R n×m T{Wl}Ll=1E = L∑ l=1 ( l−1∏ i=0 Wi )( l−1∏ i=0 Wi )T E ( L+1∏ i=l+1 Wi )T ( L+1∏ i=l+1 Wi ) ,W0 = In,WL+1 = Im . Such an expression of ∥∇L∥2F has been studied in Arora et al. (2018b), and we include a proof in Appendix C for completeness. Our convergence analysis is as follows. For this overparameterized problem, the minimum L∗ of (2) is f∗. Then from Lemma 1 and Assumption A1, we have L̇ = − 〈 T{Wl}Ll=1∇f(W ),∇f(W ) 〉 F ≤ −λmin(T{Wl}Ll=1)∥∇f(W )∥ 2 F (min-max theorem (Teschl, 2014)) (5) (A1) ≤ −λmin(T{Wl}Ll=1)γ(f(W )− f ∗) = −λmin(T{Wl}Ll=1)γ(L − L ∗). If we can find a lower bound α > 0 such that λmin(T{Wl(t)}Ll=1) ≥ α,∀t ≥ 0, then the following inequality holds on the entire training trajectory ddt (L − L ∗) ≤ −αγ (L − L∗). Therefore, by using Grönwall’s inequality (Grönwall, 1919), we can show that the loss function L converges exponential to its minimum, i.e., L(t)− L∗ ≤ exp (−αγt) (L(0)− L∗) ,∀t ≥ 0 . (6) 1Note that A2 assumes µ-strong convexity, which implies A1 with γ = 2µ. However, we list A1 and A2 separately since they have different roles in our analysis. Therefore, to show exponential convergence of the loss, we need to lower bound λmin(T{Wl(t)}Ll=1). Most existing work on the convergence of gradient flow/descent on linear networks implicitly provides such a lower bound, given additional assumptions on the initialization {Wl(0)}Ll=1, though not presented with such generality. We revisit previous analyses to see how such a problem can be solved for two-layer linear networks, then present our new results regarding deep linear networks. 3 LESSONS FROM TWO-LAYER LINEAR MODELS In this section, we revisit prior work through the lens of our general convergence analysis in Section 2.1. A lower bound on λmin(T{Wl(t)}Ll=1) can be obtained from the training invariance of the gradient flow. We first consider the following imbalance matrices: Dl := W T l Wl −Wl+1WTl+1, l = 1, · · · , L− 1 . (7) For such imbalance matrices, we have Lemma 2. Under the continuous dynamics (3), we have Ḋl(t) = 0,∀t ≥ 0, l = 1, · · · , L− 1. Such invariance of weight imbalance has been studied in most work on linear networks (Arora et al., 2018a; Du et al., 2018; Yun et al., 2020). We include the proof in Appendix C for completeness. Since the imbalance matrices {Dl}L−1l=1 are fixed at its initial value, any point {Wl(t)}Ll=1 on the training trajectory must satisfy the imbalance constraints Wl(t)TWl(t)−Wl+1WTl+1 = Dl(0), l = 1, · · · , L− 1. Previous work has shown that enforcing certain non-zero imbalance at initialization leads to exponential convergence of the loss for two-layer networks (Tarmoun et al., 2021; Min et al., 2021), and for deep networks (Yun et al., 2020). Another line of work (Arora et al., 2018a;b) has shown that balanced initialization (Dl = 0,∀l) haves exactly λmin(T{Wl(t)}Ll=1) = Lσ 2−2/L min (W (t)), where W (t) = ∏L l=1 Wl(t). This suggests that the bound on λmin(T{Wl(t)}Ll=1) we are looking for should potentially depend on both the weight imbalance matrices and weight product matrix. Indeed, for two-layer models, a re-statement2 of the results in (Min et al., 2022) provides a lower bound on λmin(T{W1,W2}) with the knowledge of the imbalance and the product. Lemma 3 (re-stated from Min et al. (2022)). 
When L = 2, given weights {W1,W2} with imbalance matrix D = WT1 W1 −W2WT2 and product W = W1W2, define ∆+=[λ1(D)]+−[λn(D)]+ ,∆−=[λ1(−D)]+−[λm(−D)]+ ,∆=[λn(D)]++[λm(−D)]+ . (8) Then for the linear operator T{W1,W2} defined in Lemma 1, we have λmin ( T{W1,W2} ) ≥ 1 2 ( −∆+ + √ (∆+ +∆)2 + 4σ2n (W )−∆− + √ (∆− +∆)2 + 4σ2m (W ) ) . (9) Min et al. (2022) include a detailed discussion on the bound, including tightness. For our purpose, we note the following: Effect of imbalance: It follows from (9) that λmin ( T{W1,W2} ) ≥ ∆ since σmin(W ) ≥ 0. Therefore, ∆ is always a lower bound on the convergence rate. This means that, for most initializations, the fact that the imbalance matrices are bounded away from zero (characterized by ∆ > 0) is already sufficient for exponential convergence. Effect of product: The role of the product in (9) is more nuanced: Assume n = m for simplicity so that σn(WWT ) = σm(WTW ) = σ2min(W ). We see that the non-negative quantities ∆+,∆− control how much the product affects the convergence. More precisely, the lower bound in (9) is a decreasing function of both ∆+ and ∆−. When ∆+ = ∆− = 0, the lower bound reduces to√ ∆2 + 4σ2min(W ), showing a joint contribution to convergence from both imbalance and product. However, as ∆+,∆− increases, the bound decreases towards ∆, which means that the effect of 2In Min et al. (2022), there is no general idea of lower bounding λmin ( T{W1,W2} ) , but their analyses essentially provide such a bound. imbalance always exists, but the effect of the product diminishes for large ∆+,∆−. We note that ∆+,∆− measure how the eigenvalues of the imbalance matrix D are different in magnitude, i.e., how “ill-conditioned" the imbalance matrix is. Implication on convergence: Note that (9) is almost a lower bound for λmin ( T{W1(t),W2(t)} ) , t ≥ 0, as the imbalance matrix D is time-invariant (so are ∆+,∆−,∆), except the right-hand side of (9) also depends on σmin(W (t)). If f satisfies A2, then f has a unique minimizer W ∗. Moreover, one can show that given a initial product W (0), W (t) is constrained to lie within a closed ball{ W : ∥W −W ∗∥F ≤ √ K µ ∥W (0)−W ∗∥F } . That is, the product W (t) does not get too far away from W ∗ during training. We can use this to derive the following lower bound on σmin(W (t)): σmin(W (t)) ≥ [ σmin(W ∗)− √ K µ ∥W (0)−W ∗∥F ] + := margin (See Appendix A). (10) This margin term being positive guarantees that the closed ball excludes any W with σmin(W ) = 0. With this observation, we find a lower bound λmin ( T{W1(t),W2(t)} ) , t ≥ 0 that depends on both the weight imbalance and margin, and the exponential convergence of loss L follows: Theorem 1. Let D be the imbalance matrix for L = 2. The continuous dynamics in (3) satisfy L(t)− L∗ ≤ exp (−α2γt) (L(0)− L∗),∀t ≥ 0 , (11) where 1. If f satisfies only A1, then α2 = ∆ ; 2. If f satisfies both A1 and A2, then α2 = −∆+ + √ (∆+ +∆)2 + 4 ( [ σn (W ∗)− √ K/µ∥W (0)−W ∗∥F ] + )2 −∆− + √ (∆− +∆)2 + 4 ( [ σm (W ∗)− √ K/µ∥W (0)−W ∗∥F ] + )2 , (12) with W (0) = ∏L l=1 Wl(0) and W ∗ equal to the unique optimizer of f . Please see Appendix E for the proof. Theorem 1 is new as it generalizes the convergence result in Min et al. (2022) for two-layer linear networks, which is only for l2 loss in linear regression. Our result considers a general loss function defined by f , including the losses for matrix factorization (Arora et al., 2018a), linear regression (Min et al., 2022), and matrix sensing (Arora et al., 2019a). Additionally, Arora et al. 
(2018a) first introduced the notion of margin for f in matrix factorization problems (K = 1, µ = 1), and we extend it to any f that is smooth and strongly convex. Towards deep models: So far, we revisited prior results on two-layer networks, showing how λmin(TW1,W2) can be lower bounded by weight imbalance and product, from which the convergence result is derived. Can we generalize the analysis to deep networks? The main challenge is that even computing λmin(T{Wl}Ll=1) given the weights {Wl} L l=1 is complicated: For L = 2, λmin(TW1,W2) = λn(W1W T 1 ) + λm(W T 2 W2), but such nice relation does not exist for L > 3, which makes the search for a tight lower bound as in (9) potentially difficult. On the other hand, the findings in (9) shed light on what can be potentially shown for the deep layer case: 1. For two-layer networks, we always have the bound λmin ( T{W1,W2} ) ≥ ∆, which depends only on the imbalance. Can we find a lower bound on the convergence rate of a deep network that depends only on an imbalance quantity analogous to ∆? If yes, how does such a quantity depend on network depth? 2. For two-layer networks, the bound reduces to √ ∆2 + 4σ2min(W ) when the imbalance is “well- conditioned" (∆+,∆− are small). For deep networks, can we characterize such joint contribution from the imbalance and product, given a similar assumption? We will answer these questions as we present our convergence results for deep networks. 4 CONVERGENCE RESULTS FOR DEEP LINEAR MODELS 4.1 THREE-LAYER MODEL Beyond two-layer models, the convergence analysis for imbalanced networks not in the kernel regime has only been studied for specific initializations (Yun et al., 2020). In this section, we derive a novel rate bound for three-layer models that applies to a wide range of imbalanced initializations. For ease of presentation, we denote the two imbalance matrices for three-layer models, D1 and D2, as −D1 = W2WT2 −WT1 W1 := D21 , D2 = WT2 W2 −W3WT3 := D23. (13) Our lower bound on λmin ( T{W1,W2,W3} ) comes after a few definitions. Definition 1. Given two real symmetric matrices A,B of order n, we define the non-commutative binary operation ∧r as A∧rB := diag{min{λi(A), λi+1−r(B)}}ni=1 , where λj(·) = +∞,∀j ≤ 0. Definition 2. Given imbalance matrices (D21, D23) ∈ Rh1×h1 × Rh2×h2 , define D̄h1 =diag{max{λi(D21), λi(D23), 0}} h1 i=1, D̄h2 =diag{max{λi(D21), λi(D23), 0}} h2 i=1, (14) ∆21=tr(D̄h1)− tr(D̄h1 ∧n D21), ∆ (2) 21 =tr(D̄ 2 h1)− tr ( (D̄h1 ∧n D21 )2 ), (15) ∆23=tr(D̄h2)− tr(D̄h2 ∧m D23), ∆ (2) 23 =tr(D̄ 2 h2)− tr ( (D̄h2 ∧m D23 )2 ). (16) Theorem 2. When L = 3, given weights {W1,W2,W3} with imbalance matrices (D21, D23), then for the linear operator T{W1,W2,W3} defined in Lemma 1, we have λmin ( T{W1,W2,W3} ) ≥ 1 2 (∆ (2) 21 +∆ 2 21) + ∆21∆23 + 1 2 (∆ (2) 23 +∆ 2 23) (17) Proof Sketch. Generally, it is difficult to directly work on λmin ( T{W1,W2,W3} ) , and we use the lower bound λmin ( T{W1,W2,W3} ) ≥ λn(W1W2WT2 WT1 ) + λn(W1WT1 )λm(WT3 W3) + λm(W T 3 W T 2 W2W3). We show that given D21, D23, the optimal value of min W1,W2,W3 λn(W1W2W T 2 W T 1 ) + λn(W1W T 1 )λm(W T 3 W3) + λm(W T 3 W T 2 W2W3) (18) s.t. W2W T 2 −WT1 W1 = D21, WT2 W2 −W3WT3 = D23 is ∆∗(D21, D23) = 12 (∆ (2) 21 +∆ 2 21) + ∆21∆23 + 1 2 (∆ (2) 23 +∆ 2 23), the bound shown in (17). Please see Appendix F for the complete proof and a detailed discussion on the proof idea. With the theorem we immediately have the following corollary. Corollary 1. 
When L = 3, given initialization with imbalance matrices (D21, D23) and f satisfying A1, the continuous dynamics in (3) satisfy L(t)− L∗ ≤ exp (−α3γt) (L(0)− L∗),∀t ≥ 0 , (19) where α3 = 12 (∆ (2) 21 +∆ 2 21) + ∆21∆23 + 1 2 (∆ (2) 23 +∆ 2 23). We make the following remarks regarding the contribution. Optimal bound via imbalance: First of all, as shown in the proof sketch, our bound should be considered as the best lower bound on λmin(T{W1(t),W2(t),W3(t)}) one can obtain given knowledge of the imbalance matrices D21 and D23 only. More importantly, this lower bound works for ANY initialization and has the same role as ∆ does in two-layer linear networks, i.e., (17) quantifies the general effect imbalance on the convergence. Finding an improved bound that takes the effect of product σmin(W ) into account is an interesting future research direction. Implication on convergence: Corollary 2 shows exponential convergence of the loss L(t) if α3 > 0. While it is challenging to characterize all initialization such that α3 > 0, the case n = m = 1 is rather simpler: In this case, D̄h1 ∧1 D21 = D21 and D̄h2 ∧1 D23 = D23. Then we have ∆21 = tr(D̄h1)− tr(D21) = h1∑ i=1 (λi(D̄h1)− λi(D21)) + λh1(D̄h1)− λh1(D21) ≥ −λh1(D21) , and similarly we have ∆23 ≥ −λh2(D23). Therefore, α3 ≥ ∆21∆23 ≥ λh1(D21)λh2(D23) > 0 when both D21 and D23 have negative eigenvalues, which is easy to satisfy as both D21 and D23 are given by the difference between two positive semi-definite matrices. Such observation can be generalized to show that α3 > 0 when D21 has at least n negative eigenvalues and D23 has at least m negative eigenvalues. Moreover, we show that α3 > 0 under certain definiteness assumptions on D21 and D23, please refer to the remark after Theorem 3 in Section 4.2. A better characterization of the initialization that has α3 > 0 is an interesting future research topic. Technical contribution: The way we find the lower bound in (17) is by studying the generalized eigenvalue interlacing relation imposed by the imbalance constraints. Specifically, W2WT2 −WT1 W1 = D21 suggests that λi+n(W2WT2 ) ≤ λi(D21) ≤ λi(W2WT2 ),∀i because W2WT2 −D21 is a matrix of at most rank-n. We derive, from such interlacing relation, novel eigenvalue bounds (See Lemma F.6) on λn(WT1 W1) and λn(W1W2W T 2 W1) that depends on eigenvalues of both W2W T 2 and D21. Then the eigenvalues of W2WT2 can also be controlled by the fact that W2 must satisfy both imbalance equations in (13). Since imbalance equations like those in (13) appear in deep networks and certain nonlinear networks Du et al. (2018); Le & Jegelka (2022), we believe our mathematical results are potentially useful for understanding those networks. Comparison with prior work: The convergence of multi-layer linear networks under balanced initialization (Dl = 0,∀l) has been studied in Arora et al. (2018a;b), and our result is complementary as we study the effect of non-zero imbalance on the convergence of three-layer networks. Some settings with imbalanced weights have been studied: Yun et al. (2020) studies a special initialization scheme (Dl ⪰ 0, l = 1, · · · , L − 2, and DL−1 ⪰ λIhL−1) that forces the partial ordering of the weights, and Wu et al. (2019) uses a similar initialization to study the linear residual networks. Our bound works for such initialization and also show such partial ordering is not necessary for convergence. 4.2 DEEP LINEAR MODELS The lower bound we derived for three-layer networks applies to any initialization. 
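To make Definition 2 and the bound (17) concrete, the following is a small numerical sanity check (not from the paper). It assumes eigenvalues are indexed in descending order and, to keep Definition 2 unambiguous, sets h1 = h2. It draws random three-layer weights, forms the imbalance matrices, evaluates the right-hand side of (17), and compares it with the intermediate quantity λn(W1W2W2ᵀW1ᵀ) + λn(W1W1ᵀ)λm(W3ᵀW3) + λm(W3ᵀW2ᵀW2W3) from the proof sketch, which should never fall below the bound if the indexing here matches the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)
n, h1, h2, m = 2, 4, 4, 2          # h1 = h2 assumed so Definition 2 is unambiguous here

def eigs_desc(A):
    """Eigenvalues of a symmetric matrix in descending order (lambda_1 >= ... >= lambda_k)."""
    return np.sort(np.linalg.eigvalsh((A + A.T) / 2))[::-1]

def wedge(a_desc, b_desc, r):
    """Definition 1: (A ^_r B)_ii = min{lambda_i(A), lambda_{i+1-r}(B)}, with lambda_j = +inf for j <= 0."""
    out = np.empty_like(a_desc)
    for i in range(len(a_desc)):            # i is 0-based; the paper's index is i + 1
        j = (i + 1) + 1 - r                 # paper's index i + 1 - r
        out[i] = min(a_desc[i], np.inf if j <= 0 else b_desc[j - 1])
    return out

W1 = rng.standard_normal((n, h1))
W2 = rng.standard_normal((h1, h2))
W3 = rng.standard_normal((h2, m))
D21 = W2 @ W2.T - W1.T @ W1
D23 = W2.T @ W2 - W3 @ W3.T

lam21, lam23 = eigs_desc(D21), eigs_desc(D23)
dbar = np.maximum(np.maximum(lam21, lam23), 0.0)        # spectrum of \bar D in Definition 2 (h1 = h2)

w21, w23 = wedge(dbar, lam21, n), wedge(dbar, lam23, m)
d21, d21_sq = dbar.sum() - w21.sum(), (dbar**2).sum() - (w21**2).sum()
d23, d23_sq = dbar.sum() - w23.sum(), (dbar**2).sum() - (w23**2).sum()
bound = 0.5 * (d21_sq + d21**2) + d21 * d23 + 0.5 * (d23_sq + d23**2)   # right-hand side of (17)

# Intermediate quantity from the proof sketch; it lower-bounds lambda_min(T) and should be >= bound.
q = (eigs_desc(W1 @ W2 @ W2.T @ W1.T)[-1]
     + eigs_desc(W1 @ W1.T)[-1] * eigs_desc(W3.T @ W3)[-1]
     + eigs_desc(W3.T @ W2.T @ W2 @ W3)[-1])

print(f"Theorem 2 bound: {bound:.4f}, proof-sketch quantity: {q:.4f}, bound holds: {q >= bound - 1e-9}")
```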
However, the bound is a fairly complicated function of all the imbalance matrices that is hard to interpret. Searching for such a general bound is even more challenging for models with arbitrary depth (L ≥ 3). Therefore, our results for deep networks will rely on extra assumptions on the weights that simplify the lower bound to facilite interpretability. Specifically, we consider the following properties of the weights: Definition 3. A set of weights {Wl}Ll=1 with imbalance matrices {Dl := WTl Wl −Wl+1WTl+1} L−1 l=1 is said to be unimodal with index l∗ if there exists some l∗ ∈ [L] such that Dl ⪰ 0, for l < l∗ and Dl ⪯ 0, for l ≥ l∗ . We define its cumulative imbalances {d̃(i)}L−1i=1 as d̃(i) = {∑i l=l∗ λm(−Dl), i ≥ l∗∑l∗−1 l=i λn(Dl), i < l ∗ . Furthermore, for weights with unimodality index l∗, if additionally, Dl = dlIhl , l = 1, · · · , L− 1 for dl ≥ 0, for l < l∗ and dl ≤ 0, for l ≥ l∗ , then those weights are said to have homogeneous imbalance. The unimodality assumption enforces an ordering of the weights w.r.t. the positive semi-definite cone. This is more clear when considering scalar weights {wl}Ll=1, in which case unimodality requires w2l to be descending until index l ∗ and ascending afterward. Under this unimodality assumption, we show that imbalance contributes to the convergence of the loss via a product of cumulative imbalanaces. Furthermore, we also show the combined effects of imbalance and weight product when the imbalance matrices are “well-conditioned" (in this case, homogeneous). More formally, we have: Theorem 3. For weights {Wl}Ll=1 with unimodality index l∗, we have λmin ( T{Wl}Ll=1 ) ≥ L−1∏ l=1 d̃(i) . (20) Furthermore, if the weights have homogeneous imbalance, then λmin ( T{Wl}Ll=1 ) ≥ √√√√(L−1∏ l=1 d̃(i) )2 + ( Lσ 2−2/L min (W ) )2 , W = L∏ l=1 Wl . (21) We make the following remarks: Connection to results for three-layer: For three-layer networks, we present an optimal bound λmin(TW1,W2,W3) ≥ 1 2 (∆ (2) 21 +∆ 2 21) + ∆21∆23 + 1 2 (∆ (2) 23 +∆ 2 23) , given knowledge of the imbalance. Interestingly, when comparing it with our bound in (20), we have: Claim. When L = 3, for weights {W1,W2,W3} with unimodality index l∗, 1. If l∗ = 1, then 12 (∆ (2) 23 +∆ 2 23) = ∏L−1 l=1 d̃(i) and 1 2 (∆ (2) 21 +∆ 2 21) = ∆21∆23 = 0; 2. If l∗ = 2, then ∆21∆23 = ∏L−1 l=1 d̃(i) and 1 2 (∆ (2) 21 +∆ 2 21) = 1 2 (∆ (2) 23 +∆ 2 23) = 0; 3. If l∗ = 3, then 12 (∆ (2) 21 +∆ 2 21) = ∏L−1 l=1 d̃(i) and 1 2 (∆ (2) 23 +∆ 2 23) = ∆21∆23 = 0. We refer the readers to Appendix G for the proof. The claim shows that the bound in (20) is optimal for three-layer unimodal weights as it coincides with the one in Theorem 2. We conjecture that (20) is also optimal for multi-layer unimodal weights and leave the proof for future research. Interestingly, while the bound for three-layer models is complicated, the three terms 12 (∆ (2) 23 + ∆ 2 23), ∆21∆23, 1 2 (∆ (2) 21 +∆ 2 21), seem to roughly capture how close the weights are to those with unimodality. This hints at potential generalization of Theorem 2 to the deep case where the bound should have L terms capturing how close the weights are to those with different unimodality (l∗ = 1, · · · , L). Effect of imbalance under unimodality: For simplicity, we assume unimodality index l∗ = L. The bound ∏L−1 i=1 d̃(i), as a product of cumulative imbalances, generally grows exponentially with the depth L. Prior work Yun et al. (2020) studies the case Dl ⪰ 0, l = 1, · · · , L−2, and DL−1 ⪰ λIhL−1 , in which case ∏L−1 i=1 d̃(i) ≥ λL−1. 
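Before turning to how this bound scales with depth, here is a small sketch (not from the paper) of the quantities in Definition 3 and the two bounds of Theorem 3 for the simplest case of scalar weights, chosen with descending magnitudes so that the unimodality index is l* = L; scalar imbalances are trivially homogeneous, so both (20) and (21) apply. The particular weight values are illustrative, and for scalar weights the operator of Lemma 1 reduces to the scalar sum used in Appendix D.2, which gives an exact value to compare against.

```python
import numpy as np

# Scalar weights with descending squares: unimodality index l* = L, homogeneous (scalar) imbalance.
w = np.array([3.0, 2.5, 2.0, 1.5, 1.0])
L = len(w)

d = w[:-1] ** 2 - w[1:] ** 2                              # imbalances d_l = w_l^2 - w_{l+1}^2 >= 0
d_tilde = np.array([d[i:].sum() for i in range(L - 1)])   # cumulative imbalances for l* = L

# For scalar weights the operator in Lemma 1 is the scalar sum_l prod_{k != l} w_k^2 (cf. Appendix D.2).
lam_exact = sum(np.prod(np.delete(w, l) ** 2) for l in range(L))

W = np.prod(w)
bound_20 = np.prod(d_tilde)                                              # bound (20)
bound_21 = np.sqrt(bound_20 ** 2 + (L * abs(W) ** (2 - 2 / L)) ** 2)     # bound (21)

print(f"lambda_min(T) = {lam_exact:.2f};  bound (20) = {bound_20:.2f};  bound (21) = {bound_21:.2f}")
```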
Our bound ∏L−1 i=1 d̃(i) suggests the dependence on L could be super-exponential: When λn(Dl) ≥ ϵ > 0, for l = 1, · · · , L − 1, we have ∏L−1 i=1 d̃(i) =∏L−1 i=1 ∑L−1 l=i λn(Dl) ≥ ∏L−1 l=1 lϵ = ϵ L−1(L − 1)!, which grows faster in L than λL−1 for any λ. Therefore, for gradient flow dynamics, the depth L could greatly improve convergence in the presence of weight imbalance. One should note, however, that such analysis can not be directly translated into fast convergence guarantees of gradient descent algorithm as one requires careful tuning of the step size for the discrete weight updates to follow the trajectory of the continuous dynamics (Elkabetz & Cohen, 2021). With our bound in Theorem 3, we show convergence of deep linear models under various initialization: Convergence under unimodality: The following immediately comes from Theorem 3: Corollary 2. If the initialization weights {Wl(0)}Ll=1 are unimodal, then the continuous dynamics in (3) satisfy L(t)− L∗ ≤ exp (−αLγt) (L(0)− L∗),∀t ≥ 0, (22) where 1. If f satisfies A1 only, then αL = ∏L−1 i=1 d̃(i) ; 2. If f satisfies both A1, A2, and the weights additionally have homogeneous imbalance, then αL = √√√√(L−1∏ i=1 d̃(i) )2 + ( L ( [ σmin (W ∗)− √ K/µ∥W (0)−W ∗∥F ] + )2−2/L)2 , with W (0) = ∏L l=1 Wl(0) and W ∗ equal to the unique optimizer of f . Spectral initialization under l2 loss: Suppose f = 12∥Y −W∥ 2 F and W = ∏L l=1 Wl. We write the SVD of Y ∈ Rn×m as Y = P [ ΣY 0 0 0 ] [ Q 0 ] := P Σ̃Y Q̃, where P ∈ O(n), Q ∈ O(m) . Consider the spectral initialization W1(0) = RΣ1V T1 , Wl(0) = Vl−1ΣlV T l , l = 2, · · · , L − 1, WL(0) = VL−1ΣLQ̃, where Σl, l = 1, · · · , L are diagonal matrices of our choice and Vl ∈ Rn×hl , l = 1, · · · , L− 1 with V Tl Vl = Ihl . It can be shown that (See Appendix D.1 for details) W1(t) = RΣ1(t)V T 1 , Wl(t) = Vl−1Σl(t)V T l , l = 2, · · · , L− 1, WL(t) = VL−1ΣL(t)Q̃. (23) Moreover, only the first m diagonal entries of Σl are changing. Let σi,l, σi,y denote the i-th diagonal entry of Σl, and Σ̃Y respectively, then the dynamics of {σi,l}Ll=1 follow the gradient flow on Li({σi,l}Ll=1) = 12 ∣∣∣σi,y −∏Ll=1 σi,l∣∣∣2 for i = 1, · · · ,m, which is exactly a multi-layer model with scalar weights: f(w) = |σi,y − w|2/2, w = ∏L l=1 wl. Therefore, spectral initialization under l2 loss can be decomposed into m deep linear models with scalar weights, whose convergence is shown by Corollary 2. Note that networks with scalar weights are always unimodal, because the gradient flow dynamics remain the same under any reordering of the weights, and always have homogeneous imbalance, because the imbalances are scalars. The aforementioned analysis also applies to the linear regression loss f = 12∥Y −XW∥ 2 F , provided that {X,Y } is co-diagonalizable (Gidel et al., 2019), we refer the readers to Appendix D.1 for details. Diagonal linear networks: Consider f a function on Rn satisfying A1 and L = f(w1 ⊙ · · · ⊙ wL), where wl ∈ Rn and ⊙ denote the Hadamard (entrywise) product. The gradient flow on L can not be decomposed into several scalar dynamics as in the previous example, but we can show that (See Appendix D.2 for details) L̇ = −∥∇L∥2F ≤ −(min1≤i≤n λmin(T{wl,i}Ll=1))γ(L − L ∗) , where wl,i is the i-th entry of wl. Then Theorem 3 gives lower bound on each λmin(T{wl,i}Ll=1). Again, here the scalar weights {wl,i}Ll always have homogeneous imbalance. Comparison with prior work: Regarding unimodality, Yun et al. 
(2020) studies the initialization scheme Dl ⪰ 0, l = 1, · · · , L − 2 and DL−1 ⪰ λIhL−1 , which is a special case (l∗ = L) of ours. The homogeneous imbalance assumption was first introduced in Tarmoun et al. (2021) for two-layer networks, and we generalize it to the deep case. We compare, in Table 1, our bound to the existing work (Arora et al., 2018a; Yun et al., 2020) on convergence of deep linear networks outside the kernel regime. Note that Yun et al. (2020) only studies a special case of unimodal weights (l∗ = L with d̃(i) ≥ λ > 0,∀i). For homogeneous imbalance, Yun et al. (2020) studied spectral initialization and diagonal linear networks, whose initialization necessarily has homogeneous imbalance, but the result does not generalize to the case of matrix weights. Our results for homogeneous imbalance works also for deep networks with matrix weights, and our rate also shown the effect of the product Lσ 2−2/L min (W ), thus covers the balanced initialization (Arora et al., 2018a) as well. Remark 1. Note that the loss functions used in Gunasekar et al. (2018); Yun et al. (2020) are classification losses, such as the exponential loss, which do not satisfy A1. However, they do satisfy Polyak-Łojasiewicz-inequality-like condition ∥∇f(W )∥F ≥ γ(f(W )− f∗),∀W ∈ Rn×m, which allows us to show O ( 1 t ) convergence of the loss function. We refer readers to Section 4.3 for details. 4.3 CONVERGENCE RESULTS FOR CLASSIFICATION TASKS As we discussed in Remark 1, the loss functions used in classification tasks generally do not satisfy our assumption A1 for f . Suppose instead we have the following assumption for f . Assumption 2. f satisfies (A1’) ∥∇f(W )∥F ≥ γ(f(W )− f∗),∀W ∈ Rn×m. Then we can show O ( 1 t ) convergence of the loss function, as stated below. Theorem 4. Given initialization {Wl(0)}Ll=1 such that λmin(T{Wl(t)}Ll=1) ≥ α, ∀t ≥ 0 , and f satisfying (A1´), then L(t)− L∗ ≤ L(0)− L ∗ (L(0)− L∗)αγ2t+ 1 . (24) We refer readers to Appendix B for the proof. The lower bound on λmin(T{Wl(t)}Ll=1) can be obtained for different networks by our results in previous sections. The exponential loss satisfies A1´ (see Appendix D.2)and is studied in Gunasekar et al. (2017); Yun et al. (2020) for diagonal linear networks. 5 CONCLUSION AND DISCUSSION In this paper, we study the convergence of gradient flow on multi-layer linear models with a loss of the form f(W1W2 · · ·WL), where f satisfies the gradient dominance property. We show that with proper initialization, the loss converges to its global minimum exponentially. Moreover, we derive a lower bound on the convergence rate that depends on two trajectory-specific quantities: the imbalance matrices, which measure the difference between the weights of adjacent layers, and the least singular value of the weight product W = W1W2 · · ·WL. Our analysis applies to various types of multi-layer linear networks, and our assumptions on f are general enough to include loss functions used for both regression and classification tasks. Future directions include extending our results to analyzing gradient descent algorithms as well as to nonlinear networks. Convergence of gradient descent: Exponential convergence of the gradient flow often suggests a linear rate of convergence of gradient descent when the step size is sufficiently small, and Elkabetz & Cohen (2021) formally establishe such a relation. Indeed, Arora et al. (2018a) shows linear rate of convergence of gradient descent on multi-layer linear networks under balanced initialization. 
A natural future direction is to translate the convergence results under imbalanced initialization for gradient flow to the convergence of gradient descent with a small step size. Nonlinear networks: While the crucial ingredient of our analysis, invariance of weight imbalance, no longer holds in the presence of nonlinearities such as ReLU activations, Du et al. (2018) shows the diagonal entries of the imbalance are preserved, and Le & Jegelka (2022) shows a stronger version of such invariance given additional assumptions on the training trajectory. Therefore, the weight imbalance could still be used to understand the training of nonlinear networks. A CONTROLLING PRODUCT WITH MARGIN Most of our results regarding the lower bound on λminT{Wl}Ll=1 are given as a value that depends on 1) the imbalance of the weights; 2) the minimum singular value of the product W = ∏L l=1. The former is time-invariant, thus is determined at initialization. As we discussed in Section 3, we require the notion of margin to lower bound σmin(W (t)) for the entire training trajectory. The following Lemma that will be used in subsequent proofs. Lemma A.1. If f satisfies A2, then the gradient flow dynamics (3) satisfies σmin (W (t)) ≥ σmin (W ∗)− √ K µ ∥W (0)−W ∗∥F ,∀t ≥ 0 where W (t) = ∏L l=1 Wl(t) and W ∗ is the unique minimizer of f . Proof. From Polyak (1987), we know if f is µ-strongly convex, then it has unique minimizer W ∗ and f(W )− f∗ ≥ µ 2 ∥W −W ∗∥2F . Additionally, if f is K-smooth, then f(W )− f∗ ≤ K 2 ∥W −W ∗∥2F . This suggests that for any t ≥ 0, K 2 ∥W (t)−W ∗∥2F ≥ L(t)− L∗ ≥ µ 2 ∥W −W ∗∥2F . Therefore we have the following σmin (W (t)) = σmin (W (t)−W ∗ +W ∗) (Weyl’s inequality (Horn & Johnson, 2012, 7.3.P16)) ≥ σmin(W ∗)− ∥W (t)−W ∗∥2 ≥ σmin(W ∗)− ∥W (t)−W ∗∥F (f is µ-strongly convex) ≥ σmin(W ∗)− √ 2 µ (L(t)− L∗) (L(t) non-decreasing under (3)) ≥ σmin(W ∗)− √ 2 µ (L(0)− L∗) (f is K-smooth) ≥ σmin(W ∗)− √ K µ ∥W (0)−W ∗∥2F = σmin (W ∗)− √ K µ ∥W (0)−W ∗∥F . Lemma A.1 directly suggests σmin(W (t)) ≥ [ σmin (W ∗)− √ K µ ∥W (0)−W ∗∥F ] + := margin , and the margin is positive when the initial product W (0) is sufficiently close to the optimal W ∗. B CONVERGENCE ANALYSIS FOR CLASSIFICATION LOSSES In this section, we consider f that satisfies, instead of A1, the following Assumption 3. f satisfies (A1´) the Łojasiewicz inequality-like condition ∥∇f(W )∥F ≥ γ(f(W )− f∗),∀W ∈ Rn×m . Theorem 4 (Restated). Given initialization {Wl(0)}Ll=1 such that λminT{Wl(t)}Ll=1 ≥ α, ∀t ≥ 0 , and f satisfying (A1´), then L(t)− L∗ ≤ L(0)− L ∗ (L(0)− L∗)αγ2t+ 1 . Proof. When f satisfies (A1´), then (5) becomes L̇ = − 〈 T{Wl}Ll=1∇f(W ),∇f(W ) 〉 F ≤ −λmin ( T{Wl}Ll=1 ) ∥∇f(W )∥2F (A1′) ≤ −λmin ( T{Wl}Ll=1 ) γ2(f(W )− f∗)2 = −λmin ( T{Wl}Ll=1 ) γ2(L − L∗)2 . This shows − 1 (L − L∗)2 d dt (L − L∗) ≥ λmin ( T{Wl}Ll=1 ) γ2 ≥ αγ2 . Take integral ∫ dt on both sides, we have for any t ≥ 0, 1 L − L∗ ∣∣∣∣t 0 ≥ αγ2t , which is L(t)− L∗ ≤ L(0)− L ∗ (L(0)− L∗)αγ2t+ 1 . Following similar argument as in Yun et al. (2020), we can show that exponential loss on linearly separable data satisfies A1´. Claim. Let f(w) = ∑N i=1 exp ( −yi · (xTi w) ) , if there exists z ∈ Sn−1 and γ > 0 such that yi(x T i z) ≥ γ , ∀i = 1, · · · , N , then ∥∇f(w)∥F ≥ γf(w) ,∀w ∈ Rn . Proof. Using the linear separability, we have ∥∇f(w)∥2F = ∥∥∥∥∥ N∑ i=1 exp ( −yi · (xTi w) ) yixi ∥∥∥∥∥ 2 F (Cauchy-Schwarz inequality) ≥ ∣∣∣∣∣ 〈 z, N∑ i=1 exp ( −yi · (xTi w) ) yixi 〉∣∣∣∣∣ 2 ≥ ∣∣∣∣∣ N∑ i=1 exp ( −yi · (xTi w) ) γ ∣∣∣∣∣ 2 = |f(w)γ|2 , as desired. 
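The claim just proved is easy to check numerically. Below is a small sketch (not from the paper; the data generation and the separator direction z are made up for illustration) that builds linearly separable data with a known margin γ and verifies ‖∇f(w)‖ ≥ γ f(w) at a few random points w.

```python
import numpy as np

rng = np.random.default_rng(1)
N, n = 200, 5
z = rng.standard_normal(n); z /= np.linalg.norm(z)        # unit separator direction

X = rng.standard_normal((N, n))
y = np.sign(X @ z)
X += 0.5 * y[:, None] * z                                  # shift points to enforce a positive margin
gamma = np.min(y * (X @ z))                                # margin gamma = min_i y_i <x_i, z> > 0

def f_and_grad(w):
    e = np.exp(-y * (X @ w))                               # exp(-y_i <x_i, w>)
    return e.sum(), -(e * y) @ X                           # f(w) and grad f(w)

for _ in range(5):
    w = rng.standard_normal(n)
    fw, gw = f_and_grad(w)
    print(f"||grad f|| = {np.linalg.norm(gw):10.3f} >= gamma * f = {gamma * fw:10.3f}")
```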
Therefore, our convergence results applies to classification tasks with exponential loss. C PROOFS IN SECTION 2 First we prove the expression for L̇ in Lemma 1 Lemma 1 (Restated). Under continuous dynamics in (3), we have L̇ = −∥∇L ( {Wl}Ll=1 ) ∥2F = − 〈 T{Wl}Ll=1∇f(W ),∇f(W ) 〉 F , where W = ∏L l=1 Wi, and T{Wl}Ll=1 is a positive semi-definite linear operator on R n×m with T{Wl}Ll=1E = L∑ l=1 ( l−1∏ i=1 Wi )( l−1∏ i=1 Wi )T E ( L+1∏ i=l+1 Wi )T ( L+1∏ i=l+1 Wi ) ,W0 = In,WL+1 = Im . Proof. The gradient flow dynamics (3) satisfies d dt Wl = − ∂ ∂Wl L ( {Wl}Ll=1 ) = − ( l−1∏ i=1 Wi )T ∇f(W ) ( L+1∏ i=l+1 Wi )T , (C.1) where W = ∏L l=1 Wi and W0 = In,WL+1 = Im. Therefore L̇ = L∑ l=1 〈 ∂ ∂Wl L ( {Wl}Ll=1 ) , d dt Wl 〉 F = − L∑ l=1 ∥∥∥∥ ∂∂WlL ({Wl}Ll=1) ∥∥∥∥2 F = − L∑ l=1 〈( l−1∏ i=1 Wi )T ∇f(W ) ( L+1∏ i=l+1 Wi )T , ( l−1∏ i=1 Wi )T ∇f(W ) ( L+1∏ i=l+1 Wi )T〉 F = − L∑ l=1 〈( l−1∏ i=1 Wi )( l−1∏ i=1 Wi )T ∇f(W ) ( L+1∏ i=l+1 Wi )T ( L+1∏ i=l+1 Wi ) ,∇f(W ) 〉 F = − 〈 L∑ l=1 ( l−1∏ i=1 Wi )( l−1∏ i=1 Wi )T ∇f(W ) ( L+1∏ i=l+1 Wi )T ( L+1∏ i=l+1 Wi ) ,∇f(W ) 〉 F = − 〈 T{Wl}Ll=1∇f(W ),∇f(W ) 〉 F . Next, we prove that the imbalance matrices are time-invariant Lemma 2 (Restated). Under continuous dynamics (3), we have Ḋl(t) = 0,∀t ≥ 0, l = 1, · · · , L−1. Proof. Each imbalance matrix is defined as Dl = W T l Wl −Wl+1WTl+1, l = 1, · · · , L− 1 We only need to check that ddt ( WTl Wl ) and ddt ( Wl+1W T l+1 ) are identical. From the following derivation, for l = 1, · · · , L− 1, d dt ( WTl Wl ) = ẆTl Wl +W T l Ẇl = − ( L+1∏ i=l+1 Wi ) ∇T f(W ) ( l−1∏ i=1 Wi ) Wl −WTl ( l−1∏ i=1 Wi )T ∇f(W ) ( L+1∏ i=l+1 Wi )T = − ( L+1∏ i=l+1 Wi ) ∇T f(W ) ( l∏ i=1 Wi ) − ( l∏ i=1 Wi )T ∇f(W ) ( L+1∏ i=l+1 Wi )T , d dt ( Wl+1W T l+1 ) = Ẇl+1W T l+1 +Wl+1Ẇ T l+1 = − ( l∏ i=1 Wi )T ∇f(W ) ( L+1∏ i=l+2 Wi )T WTl+1 −Wl+1 ( L+1∏ i=l+2 Wi ) ∇T f(W ) ( l∏ i=1 Wi ) = − ( l∏ i=1 Wi )T ∇f(W ) ( L+1∏ i=l+1 Wi )T − ( L+1∏ i=l+1 Wi ) ∇T f(W ) ( l∏ i=1 Wi ) we know ddt ( WTl Wl ) = ddt ( Wl+1W T l+1 ) , therefore Ḋl(t) = 0, l = 1, · · · , L− 1 D LINEAR MODELS RELATED TO SCALAR DYNAMICS D.1 SPECTRAL INITIALIZATION UNDER l2 LOSS The spectral initialization Saxe et al. (2014); Gidel et al. (2019); Tarmoun et al. (2021) considers the following: Suppose f = 12∥Y −XW∥ 2 F and we have overparametrized model W = ∏L l=1 Wl. Additionally, we assume Y ∈ RN×m, X ∈ RN×n (n ≥ m) are co-diagonalizable, i.e. there exist P ∈ RN×n with PTP = In and Q ∈ O(m), R ∈ O(n) such that we can write the SVDs of Y,X as Y = P [ ΣY 0 0 0 ] [ Q 0 ] := P Σ̃Y Q̃ and X = PΣXRT . Remark 2. In Section 4, we discussed the case f = 12∥Y −W∥ 2 F , which is essentially considering the aforementioned setting with N = n and X = In. Given any set of weights {Wl}Ll=1 such that W1 = RΣ1V T 1 , Wl = Vl−1ΣlV T l , l = 2, · · · , L− 1, WL = VL−1ΣLQ̃ , where Σl, l = 1, · · · , L are diagonal matrices and Vl ∈ Rn×hl , l = 1, · · · , L− 1 with V Tl Vl = Ihl . The gradient flow dynamics requires Ẇ1 = − ∂L ∂W1 = −XT (Y −XW )WTL WTL−1 · · ·WT2 = −RΣXPT · (P Σ̃Y Q̃− PΣXRT ·R L∏ l=1 ΣLQ̃) · Q̃TΣLVL−1 · VL−1ΣL−1V TL−2 · · ·V2Σ2V T1 = −R ( ΣX ( ΣY − ΣX L∏ l=1 Σl ) Q̃Q̃T L∏ l=2 Σl ) V T1 = −R ( ΣX ( ΣY − ΣX L∏ l=1 Σl )[ Im 0 0 0 ] L∏ l=2 Σl ) V T1 , which shows that the singular space R, V1 for W1 do not change under the gradient flow, and the singular values σi,1of W1 satisfies σ̇i,1 = ( σi,y − σi,x L∏ l=1 σi,l ) σi,x L∏ l=2 σi,l , i = 1, · · · ,m , and σ̇i,1 = 0, i = m+ 1, · · · , n. 
Similarly, we can show that Ẇl = Vl−1 ΣX (ΣY − ΣX L∏ i=1 Σi )[ Im 0 0 0 ]∏ i ̸=l Σi V Tl , l = 2, · · · , L− 1 , ẆL = VL−1 ΣX (ΣY − ΣX L∏ i=1 Σi )[ Im 0 0 0 ]∏ i̸=L Σi Q̃ . Overall, this suggests that the singular space of {Wl}Ll=1 do not change under the gradient flow, and their singular values satisfies, for i = 1, · · · ,m, σ̇i,l = ( σi,y − σi,x L∏ k=1 σi,k ) σi,x L∏ k ̸=l σi,k , l = 1, · · · , L . Each dynamic equation is equivalent to the one from gradient flow on Li({σi,l}Ll=1) = 1 2 ∣∣∣σi,y − σi,x∏Ll=1 σi,l∣∣∣2 . Therefore, under spectral initialization, the dynamics of the weights are decoupled into at most m dynamics discussed in Section 4.2. D.2 DIAGONAL LINEAR NETWORKS The loss function of diagonal linear networks Gunasekar et al. (2017); Yun et al. (2020) is of the form f(w1 ⊙ · · · ⊙ wL), we write L({wl}Ll=1) = f(w1 ⊙ · · · ⊙ wL) = f(w(1), · · · , w(n)) = f ( L∏ l=1 wl,1 , · · · , L∏ l=1 wl,n ) , i.e. f takes n variables w(1), · · · , w(n) and each variable w(i) is overparametrized into ∏L l=1 wl,i. Then we can show that L̇ = −∥∇{wl}Ll=1L∥ 2 F = n∑ i=1 L∑ l=1 ∣∣∣∣ ∂L∂wl,i ∣∣∣∣2 = n∑ i=1 L∑ l=1 ∣∣∣∣ ∂f∂w(i) ∣∣∣∣2 ∣∣∣∣∂w(i)∂wl,i ∣∣∣∣2 = n∑ i=1 ∣∣∣∣ ∂f∂w(i) ∣∣∣∣2 L∑ l=1 ∣∣∣∣∂w(i)∂wl,i ∣∣∣∣2 = n∑ i=1 ∣∣∣∣ ∂f∂w(i) ∣∣∣∣2 τ{wl,i}Ll=1 ≤ − ( min 1≤i≤n τ{wl,i}Ll=1 ) n∑ i=1 ∣∣∣∣ ∂f∂w(i) ∣∣∣∣2 (f satisfies A1) ≤ − ( min 1≤i≤n τ{wl,i}Ll=1 ) γ(f − f∗) = − ( min 1≤i≤n τ{wl,i}Ll=1 ) γ(L − L∗) . Moreover, the imbalances {d(i)l := w2l,i − w2l+1,i} L−1 l=1 are time-invariant for each i = 1, · · · , n by Lemma 2. Therefore, we can lower bound each τ{wl,i}Ll=1 using the imbalance {d (i) l } L−1 l=1 as in Proposition 3, from which one obtain the exponential convergence of L. E PROOF FOR TWO-LAYER MODEL Using Lemma 3, we can prove Theorem 1 Theorem 1 (Restated). Let D be the imbalance matrix for L = 2. The continuous dynamics in (3) satisfy L(t)− L∗ ≤ exp (−α2γt) (L(0)− L∗),∀t ≥ 0 , (E.2) where 1. If f satisfies only A1, then α2 = ∆ ; 2. If f satisfies both A1 and A2, then α2 = −∆+ + √ (∆+ +∆)2 + 4 ( [ σn (W ∗)− √ K/µ∥W (0)−W ∗∥F ] + )2 −∆− + √ (∆− +∆)2 + 4 ( [ σm (W ∗)− √ K/µ∥W (0)−W ∗∥F ] + )2 , (E.3) with W (0) = ∏L l=1 Wl(0) and W ∗ equal to the unique optimizer of f . Proof. As shown in (5) in Section 2. We have d dt (L(t)− L∗) ≤ −λminT{W1(t),W2(t)}γ(L(t)− L ∗) . Consider any {W1(t),W2(t)} on the trajectory, we have, by Lemma 3, λminT{W1(t),W2(t)} Lemma 3 ≥ 1 2 ( −∆+ + √ (∆+ +∆)2 + 4σ2n (W (t)) −∆− + √ (∆− +∆)2 + 4σ2m (W (t)) ) ≥ 1 2 ( −∆+ + √ (∆+ +∆)2 −∆− + √ (∆− +∆)2 ) = ∆ := α2 . When f also satisfies A2: we need to prove σn (W (t)) ≥ [ σn (W ∗)− √ K/µ∥W (0)−W ∗∥F ] + , (E.4) σm (W (t)) ≥ [ σm (W ∗)− √ K/µ∥W (0)−W ∗∥F ] + . (E.5) When n = m, both inequalities are equivalent to σmin(W (t)) ≥ [ σmin(W ∗)− √ K/µ∥W (0)−W ∗∥F ] + , which is true by Lemma A.1. When n ̸= m, one of the two inequalities become trivial. For example, if n > m, then (E.4) is trivially 0 ≥ 0, and (E.5) is equivalent to σmin(W (t)) ≥ [ σmin(W ∗)− √ K/µ∥W (0)−W ∗∥F ] + , which is true by Lemma A.1. Overall, we have λminT{W1(t),W2(t)} Lemma 3 ≥ 1 2 ( −∆+ + √ (∆+ +∆)2 + 4σ2n (W (t)) −∆− + √ (∆− +∆)2 + 4σ2m (W (t)) ) ≥ 1 2 −∆+ + √ (∆+ +∆)2 + 4 ([ σn (W ∗)− √ K/µ∥W (0)−W ∗∥F ] + )2 −∆− + √ (∆− +∆)2 + 4 ([ σm (W ∗)− √ K/µ∥W (0)−W ∗∥F ] + )2 := α2 . Either case, we have ddt (L(t)− L ∗) ≤ −α2γ(L(t)− L∗), and by Grönwall’s inequality, we have L(t)− L∗ ≤ exp(−α2γt)(L(0)− L∗) . F PROOFS FOR THREE-LAYER MODEL In Section F.1, we discuss the proof idea for Theorem 2, then present the proof afterwards. 
In Section G, we show a simplified bound when the weights can be ordered w.r.t. positive-semidefiniteness. F.1 PROOF IDEA We first discuss the proof idea behind Theorem 2, then provide the complete proof. Consider the case when n = m = 1, we use the following notations for the weights {wT1 ,W2, w3} ∈ R1×h1 × Rh1×h2 × Rh2×1. The quantity we need to lower bound is λminT{wT1 ,W2,w3} = w T 1 W2W T 2 w1 + w T 1 w1 · wT3 w3 + wT3 WT2 W2w3 = ∥WT2 w1∥2 + ∥w1∥2∥w3∥2 + ∥W2w3∥2 , where our linear operator T{wT1 ,W2,w3} reduces to a scalar. The remaining thing to do is to find min wT1 ,W2,w3 ∥WT2 w1∥2 + ∥w1∥2∥w3∥2 + ∥W2w3∥2 (F.6) s.t. W2W T 2 − w1wT1 = D21 WT2 W2 − w3wT3 = D23 i.e., we try to find the best lower bound on λminT{wT1 ,W2,w3} given the fact that the weights have to satisfies the imbalance constraints from D21, D23, and λminT{wT1 ,W2,w3} is related to the norm of some weights ∥w1∥, ∥w3∥ and the “alignment” between weights ∥WT2 w1∥, ∥W2w3∥. The general idea of the proof is to lower bound each term ∥WT2 w1∥2, ∥w1∥2, ∥w3∥2, ∥W2w3∥2 individually given the imbalance constraints, then show the existence of some {wT1 ,W2, w3} that attains the lower bound simultaneously. The following discussion is most for lower bounding ∥w1∥, ∥WT2 w1∥ but the same argument holds for lower bounding other quantities. Understanding what can be chosen to be the spectrum of W2WT2 (W T 2 W2) is the key to derive an lower bound, and the imbalance constraints implicitly limit such choices. To see this, notice that W2WT2 − w1wT1 = D21 suggests an eigenvalue interlacing relation (Horn & Johnson, 2012, Corollary 4.39) between W2WT2 and D21, i.e. λh1(D21) ≤ λh1(W2WT2 ) ≤ λh1−1(D21) ≤ · · · ≤ λ2(W2WT2 ) ≤ λ1(D21) ≤ λ1(W2WT2 ) . Therefore, any choice of {λi(W2WT2 )} h1 i=1 must satisfy the interlacing relation with {λi(D21)} h1 i=1. Similarly, {λi(WT2 W2)} h2 i=1 must satisfy the interlacing relation with {λi(D23)} h2 i=1. Moreover, {λi(W2WT2 )} h1 i=1 and {λi(WT2 W2)} h2 i=1 agree on non-zero eigenvalues. In short, an appropriate choice of the spectrum of W2WT2 (W T 2 W2) needs to respect the interlacing relation with the eigenvalues of D21 and D23. The following matrix is defined D̄h1 := diag{max{λi(D21), λi(D23), 0}} h1 i=1 to be the “minimum” choice of the spectrum of W2WT2 (W T 2 W2) in the sense that any valid choice of {λi(W2WT2 )} h1 i=1 must satisfies λi(W2W T 2 ) ≥ λi(D̄h1) ≥ λi(D21) , i = 1, · · · , h1 . That is, the spectrum of D̄h1 “lies between” the one of W2W T 2 and of D21. Now we check the imbalance constraint again W2WT2 − w1wT1 = D21, it shows that: using a rank-one update w1wT1 , one obtain the spectrum of D21 starting from the spectrum of W2WT2 , and more importantly, we require the norm ∥w1∥2 to be (taking the trace on the imbalance equation) tr(W2W T 2 )− ∥w1∥2 = tr(D21) ⇒ ∥w1∥2 = tr(W2WT2 )− tr(D21) . Now since D̄h1 “lies inbetween”, we have ∥w1∥2 = tr(W2WT2 )− tr(D21) = (changes from λi(W2WT2 ) to λi(D21)) = (changes from λi(W2WT2 ) to λi(D̄h1)) + (changes from λi(D̄h1) to λi(D21)) ≥ (changes from λi(D̄h1) to λi(D21)) = tr(D̄h1)− tr(D21) , which is a lower bound on ∥w1∥2. It is exactly the ∆21 in Theorem 2 (It takes more complicated form when n > 1). A lower bound on ∥WT2 w1∥2 requires carefully exam the changes from the spectrum of D̄h1 to the one of D21. If λh1(D21) < 0, then “changes from λi(D̄) to λi(D21)” has two parts 1. (changes from λi(D̄) to [λi(D21)]+) through the part where w1 is “aligned" with WT2 , 2. (changes from 0 to λh1(D21)) through the part where w1 is “orthogonal" to W T 2 . 
Only the former contributes to ∥WT2 w1∥2 hence we need the expression ∆ (2) 21 +∆ 2 21, which excludes the latter part. Using similar argument we can lower bound ∥w3∥2, ∥W2w3∥2. Lastly, the existence of {wT1 ,W2, w3} that attains the lower bound is from the fact that D̄h1 (D̄h2 ) is a valid choice for the spectrum of W2WT2 (W T 2 W2). The complete proof of the Theorem 2 follows the same idea but with a generalized notion of eigenvalue interlacing, and some related novel eigenvalue bounds. F.2 PROOF OF THEOREM 2 Theorem 2 is the direct consequence of the following two results. Lemma F.1. Given any set of weights {W1,W2,W3} ∈ Rn×h1 × Rh1×h2 × Rh2×m, we have λminT{W1,W2,W3} ≥ λn(W1W2W T 2 W T 1 ) + λn(W1W T 1 )λm(W T 3 W3) + λm(W T 3 W T 2 W2W3) . (Note that λminT{W1,W2,W3} does not have a closed-form expression. One can only work with its lower bound λn(W1W2WT2 W T 1 ) + λn(W1W T 1 )λm(W T 3 W3) + λm(W T 3 W T 2 W2W3).) Theorem F.2. Given imbalance matrices pair (D21, D23) ∈ Rh1×h1 × Rh2×h2 , then the optimal value of min W1,W2,W3 2 ( λn(W1W2W T 2 W T 1 ) + λn(W1W T 1 )λm(W T 3 W3) + λm(W T 3 W T 2 W2W3) ) s.t. W2W T 2 −WT1 W1 = D21 WT2 W2 −W3WT3 = D23 is ∆∗(D21, D23) = ∆ (2) 21 +∆ 2 21 + 2∆21∆23 +∆ (2) 23 +∆ 2 23 . Combining those two results gets λminT{W1,W2,W3} ≥ ∆∗(D21, D23)/2, as stated in Theorem 2. The Lemma F.1 is intuitive and easy to prove: Proof of Lemma F.1. Notice that T{W1,W2,W3} is the summation of three positive semi-definite linear operators on Rn×m, i.e. T{W1,W2,W3} = T12 + T13 + T23 , where T12E = W1W2WT2 WT1 E, T13E = W1WT1 EWT3 W3, T23E = EWT3 WT2 W2W3 , and λminT12 = λn(W1W2WT2 WT1 ), λminT13 = λn(W1WT1 )λm(WT3 W3), λminT23 = λm(W T 3 W T 2 W2W3). Therefore, let Emin with ∥Emin∥F = 1 be the eigenmatrix associated with λminT{W1,W2,W3}, we have λminT{W1,W2,W3} = 〈 T{W1,W2,W3}, Emin 〉 F = ⟨T12, Emin⟩F + ⟨T13, Emin⟩F + ⟨T23, Emin⟩F ≥ λminT12 + λminT13 + λminT23 . The rest of this section is dedicated to prove Theorem F.2 We will first state a few Lemmas that will be used in the proof, then show the proof for Theorem F.2, and present the long proofs for the auxiliary Lemmas in the end. F.3 AUXILIARY LEMMAS The main ingredient used in proving Theorem F.2 is the notion of r-interlacing relation between the spectrum of two matrices, which is a natural generalization of the interlacing relation as seen in classical Cauchy Interlacing Theorem (Horn & Johnson, 2012, Theorem 4.3.17). Definition 4. Given real symmetric matrices A,B of order n, write A ⪰r B, if λi+r(A) ≤ λi(B) ≤ λi(A) ,∀i where λj(·) = +∞, j ≤ 0 and λj(·) = −∞
1. What is the focus of the paper regarding gradient flow and multi-layer linear models?
2. What are the strengths of the proposed approach, particularly in terms of its broad applicability and technical improvements?
3. Are there any weaknesses or limitations in the paper's analysis or experiments?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Can the reviewer identify any specific questions or concerns they have regarding the paper without being too specific?
Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper
The paper aims to analyze the convergence of gradient flow on multi-layer linear models. The paper posits that, in general, the convergence rate depends on two trajectory-specific quantities: the imbalance matrices (which measure the difference between the weights of adjacent layers) and the least singular value of the weight product. The framework of the paper is designed to cover several existing initialization schemes and applies to both regression and classification.

Strengths And Weaknesses
Strengths
- The paper is well-written with a sufficient literature review.
- The main ideas and techniques of the work were developed based on well-founded rationales.
- The mathematical analyses of the work are rigorous and seem correct.
- The framework of the paper is general and broadly applicable (covers several existing initialization schemes; general loss functions).
- The paper obtains several new technical results/improvements:
  - The derived bounds (Theorems 2, 3) characterize the general effect of weight imbalance on convergence. Previous works on this aspect focused on two-layer models or on the case where all imbalance matrices are positive semi-definite.
  - The analysis applies to general loss functions and thus can be used to study classification tasks (Theorem 4). Existing works mostly focused on the l2 loss for regression tasks.
  - Three-layer model: Theorem 2 is stronger than previous work (Yun et al. (2020)) in that it doesn't force the partial ordering of the weights for convergence. The paper also proves that the bound in this case is optimal.
  - Deep linear models: The bound of Theorem 3 dominates and unifies two existing bounds on deep networks (Arora et al. (2018a), Yun et al. (2020)) and characterizes the general effect of weight imbalance on convergence.
  - The unimodality condition seems to be novel and contains that of Yun et al. (2020) as a special case.
  - Some technical constructions in the proof of Theorem 2 (interlacing of spectra of two matrices whose difference is positive definite and low-rank; explicit construction of the optimal solution to the lower bound) might be of general interest.

Weaknesses
- None noted.

Clarity, Quality, Novelty And Reproducibility
The paper is well-written with a sufficient literature review. The main ideas and techniques of the work were developed based on well-founded rationales. The mathematical analyses of the work are rigorous and seem correct. The paper obtains several new technical results/improvements.
ICLR
Title Recursive Regression with Neural Networks: Approximating the HJI PDE Solution Abstract Most machine learning applications using neural networks seek to approximate some function g(x) by minimizing some cost criterion. In the simplest case, if one has access to pairs of the form (x, y) where y = g(x), the problem can be framed as a regression problem. Beyond this family of problems, we find many cases where the unavailability of data pairs makes this approach unfeasible. However, similar to what we find in the reinforcement learning literature, if we have some known properties of the function we are seeking to approximate, there is still hope to frame the problem as a regression problem. In this context, we present an algorithm that approximates the solution to a partial differential equation known as the Hamilton-Jacobi-Isaacs partial differential equation (HJI PDE) and compare it to current state of the art tools. This PDE, which is found in the fields of control theory and robotics, is of particular importance in safety critical systems where guarantees of performance are a must. N/A Most machine learning applications using neural networks seek to approximate some function g(x) by minimizing some cost criterion. In the simplest case, if one has access to pairs of the form (x, y) where y = g(x), the problem can be framed as a regression problem. Beyond this family of problems, we find many cases where the unavailability of data pairs makes this approach unfeasible. However, similar to what we find in the reinforcement learning literature, if we have some known properties of the function we are seeking to approximate, there is still hope to frame the problem as a regression problem. In this context, we present an algorithm that approximates the solution to a partial differential equation known as the Hamilton-Jacobi-Isaacs partial differential equation (HJI PDE) and compare it to current state of the art tools. This PDE, which is found in the fields of control theory and robotics, is of particular importance in safety critical systems where guarantees of performance are a must. 1 INTRODUCTION Artificial neural networks are remarkable function approximators used in a myriad of applications ranging from complex controllers for robotic actuation (Levine et al., 2016) (Schulman et al., 2015) to simple image classifiers for digit recognition (LeCun et al., 1989) . They even find uses in physics to find approximations to solutions of PDEs and systems of coupled ordinary differential equations (ODEs) (Lagaris et al., 1998). Their success is in part achieved by their property of being universal function approximators (Hornik et al., 1989). In order to train a neural network one usually defines a cost function which captures the ”goodness” of the choice of parameters in our model, and uses gradient descent/ascent algorithms to improve them. In supervised learning, for example, input output data pairs are used to define a cost function such as the mean squared error or the mean absolute error; unfortunately, in many cases the function we want to approximate is unkown. For instance, in many reinforcement learning settings one wants to find the optimal policy, a function from state variables to actions1, which maximizes the expected sum of discounted rewards of an agent in some environment. This function is usually unkown a priori, so this problem can’t readily be framed as a regression problem using input-output pairs. 
This assertion becomes blurred, however, when looking at the work of Mnih et al. (2013), where a deep Q-network learns by generating targets and minimizing a cost of the form Li(θi) = Es,a∼ρ[(yi −Q(s, a; θi))2]. (1) Here, the targets yi are generated from the same Q-network that is being used to approximate the Q-function, hence the neural network has two purposes: approximation and data generation. In this work, we show that this same idea can be extended to the domain of approximating solutions to partial differential equations, and in particular the solution to the Hamiton-Jacobi-Isaacs PDE. 1or states to probabilities over actions 2 THE HAMILTON-JACOBI-ISAACS PDE In control theory and robotics we often want to know how a system evolves in time given some input signal. In particular, one would like to know whether there exists an (optimal) input signal that drives our system to a particular region of interest in our state space and what that input is. For a deterministic system with continuous states and inputs, this problem can be succinctly expressed as a partial differential equation known as the Hamilton-Jacobi-Isaacs (HJI) PDE. Let V : Rn × R− → R. Then, given a time invariant system of the form dxdt = f(x, a, b) and boundary condition V (x, 0) = l(x), where x ∈ Rn is the state vector and a ∈ A ⊆ Rma and b ∈ B ⊆ Rmb are inputs to the system2, we wish to find the solution to the minimum-payoff HJI PDE, associated to the reachability problem: ∂V (x, t) ∂t = −min{0, H(x,∇xV )}, (2) where H(x,∇xV ) := max a∈A min b∈B ∇xV T f(x, a, b) (3) is known as the Hamiltonian. The boundary condition V (x, 0) = l(x) encodes in its zero sub-level set (i.e. l(x) ≤ 0) the region of interest in our state space known as the target set T . Lastly, the solution V (x, t) to (2) encodes the information about all the starting states whose induced trajectories will enter (and possibly leave) T within |t|, given the dynamics and input signals. More precisely, for some starting state x0 and t ≤ 0, V (x0, t) < 0 if and only if the trajectory starting from x0 enters T within |t|. To give some intuition as to why V (x, t) encodes the starting states whose trajectories enter T within t, let us consider the simpler problem where dxdt = f(x) is an autonomous system without any inputs. Further, let us write (2) as a finite difference in t. With some rearranging, and absorbing the gradient into V (i.e. ∇xV T f(x)∆t+V (x, t) ≈ V (x+f(x)∆t, t)), one can obtain the following approximation V (x, t−∆t) ≈ min{ V (x, t) , V (x+ f(x)∆t, t) }. (4) It is straightforward to see from (4) that at time t = 0 all the states outside of T (i.e. V (x, 0) ≥ 0) but near its boundary, whose induced trajectories enter the target (i.e. V (x+f(x)∆t, 0) < 0) within ∆t, will become negative in V (x,−∆t). Thinking of this update recursively one can intuitively see how the zero sub-level set of V grows backward in time to include more and more states. For the case of one input trying to drive our system into T , the approximation becomes V (x, t−∆t) ≈ min{ V (x, t) , min b V (x+ f(x, b)∆t, t) }, (5) and for two competing inputs, V (x, t−∆t) ≈ min{ V (x, t) , max a min b V (x+ f(x, a, b)∆t, t) }. 
(6) Using the previous analogy of the autonomous system, one can see how (5) and (6) are essentially different ways to expand the zero sub-level set backward in time: (5) can be seen as an input trying to expand the set as fast as possible; (6) can be seen as two inputs with competing goals, where one input tries to expand the set and the other seeks to prevent its growth. Moreover, this last setting shows the relevance of the HJI PDE in safety critical systems. By treating input b as a bounded worse case disturbance and T as some unsafe region, one can establish safety guarantees about the system and claim which states won’t be driven into T within some time horizon. 2a is usually taken to be the input and b is taken to be some bounded input disturbance Lastly, it is important to note that V (x, t) contains useful information in its gradient ∇xV (x, t). In the case where dxdt = f(x, b) has a single input, the argument minimizing the following optimization problem b∗ = argmin b∈B ∇xV (xo, t)T f(xo, b) (7) yields the instantaneous optimal input for state x0 at time t to guide the trajectory into T as fast as possible. Using this fact one can generate an optimal control policy based on the gradient of V . This idea can then be easily extended to the case of two competing inputs to obtain competing control policies. Finally, even though (7) need not be a convex problem, in this work we will only deal with simple dynamical systems, making the optimization problem easy to solve. 3 APPROXIMATING SOLUTIONS OF PDES The problem presented in section 2 (as in many other cases with PDEs) is general not straightforward to solve. For this reason, trying to find a good approximation instead of the actual solution can be a reasonable approach. Many current state-of-the-art tools used to approximate solutions of PDEs, including (2), use gridding techniques (Mitchell, 2007) whereby finite differences are used to iteratively update values on a grid. Another approach (Lagaris et al., 1998) is to train a feedforward neural network by minimizing the following loss Lθ := N∑ i=1 G(xi, ψθ(xi),∇ψθ(xi),∇2ψθ(xi))2 (8) where G(x, ψ(x),∇ψ(x),∇2ψ(x)) = 0 is the PDE whose solution ψ(x) we are trying to approximate and xi are points taken from the discretization of our domain. In (8), the function ψθ(x) := A(x) + F (x,Nθ(x)) is a candidate approximation which by construction satisfies the boundary condition, where Nθ(x) is a feedforward neural network. In order to ensure that the conditions at the boundary are satisfied, F (x,Nθ(x)) = 0 at the boundary and A(x) is a fixed function which satisfies them. Although this approach is well suited for some problems, special care must be taken when computing the gradient of the loss with respect to the parameters. For instance, following the previous procedure, the loss for HJI PDE would be written as Lθ := N∑ i=1 ( ∂V (xi, ti) ∂t +min{0, H(xi,∇xV )})2, (9) but themin in the function makes this expression not differentiable everywhere. There exist ways to circumvent this problem (Djeridane and Lygeros, 2006), however, but they require the cumbersome definition of many intermediary functions which can become hard to find for complicated dynamical models. In this work, we try to tackle the problem of finding an approximate solution to (2) from a different perspective. We show that a poor approximation to our solution is enough to generate “good enough” new data for regression, which can in turn be used to improve our model. 
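Before the learning-based approach of the next section, it may help to see the grid-based recursion that update (4) suggests. The following is a minimal sketch (not the LevelSet Toolbox and not the authors' code); the dynamics f(x) = −x, the grid resolution, and the unit-ball target are illustrative choices, and off-grid lookups are handled by bilinear interpolation. Updates (5) and (6) would additionally minimize (and maximize) over a discretized set of inputs inside the loop.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

def f(x):                                      # illustrative stable autonomous dynamics dx/dt = -x
    return -x

dt, T, lim, n = 1e-2, 1.0, 5.0, 101
xs = np.linspace(-lim, lim, n)
X1, X2 = np.meshgrid(xs, xs, indexing="ij")
V = X1**2 + X2**2 - 1.0                        # V(x, 0) = ||x||^2 - 1: the zero sub-level set is the target T

pts = np.stack([X1, X2], axis=-1)              # all grid points, shape (n, n, 2)
next_pts = pts + f(pts) * dt                   # one Euler step x + f(x) * dt from every grid point

for _ in range(int(T / dt)):                   # march from t = 0 backward to t = -T
    interp = RegularGridInterpolator((xs, xs), V, bounds_error=False, fill_value=None)
    V = np.minimum(V, interp(next_pts))        # update (4): V(x, t - dt) = min{ V(x, t), V(x + f(x) dt, t) }

print("grid cells with V < 0 at t = -T:", int((V < 0).sum()))
```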
4 SELF-GENERATED DATA 4.1 ALGORITHM In this section we present a simple method for approximating the solution to (2) by utilizing a feedforward neural network in two ways: as a function approximator and a data generator. We believe that this parametric approach is better suited for finding good approximations by avoiding some of the limitations found in gridding/tabular techniques due to the curse of dimesionality. To that end, we start by defining our candidate approximation V̂θ(x) to be of the same form as in (Lagaris et al., 1998); that is, a sum of two terms which help satisfy our boundary condition V (x, 0) V̂θ(x, t) = V (x, 0) + tNθ(x, t), (10) where Nθ(x, t) is a neural network mapping from our states and time variables to the real numbers. Next, we sampleN points in the state variable x chosen uniformly at random over some set S which includes T (the target set), and similarly, sampleN points in the time variable t uniformly at random over the set [−T, 0], where T ≥ 0 is the desired time horizon. By sampling from these distributions, we seek to find a good approximation to V (x, t) over the set S × [−T, 0]. Once these points have been gathered, we make use of the update (4),(5) or (6) (depending on our problem) and use V̂θ(x, t), the approximation itself, to generate the new regression points. The complete algorithm 4.1 is shown using update equation (6), but it should be clear how to modify it for the other cases. Algorithm 1 Recursive Regression via SGD with Momentum 1: Input: N , interval, V (x, 0), f(x, a, b), A, B, S, T, K(batch size), γ (momentum decay), η (learning rate) 2: Random initialization of the weights and biases of the neural network Nθ(x, t) 3: Define V̂θ(x, t) := V (x, 0) + tNθ(x, t) 4: Define Lθ := ∑K k=0 |yk − V̂θ(xk, tk)| 5: i← 0, νi ← 0, ∆t← 10−2 6: while True do (or stopping criterion) 7: if mod(i,interval) == 0 then 8: R← empty array of size N 9: Sample N pairs (x, t) ∼ Uniform(S × [−T, 0]) 10: for j = 0 to N do 11: (a∗j , b ∗ j )← argmax a∈A argmin b∈B ∇xV̂ Tθ f(x, a, b) 12: yj ← min{V̂θ(xj , tj), V̂θ(xj + f(xj , a∗, b∗)∆t, tj) } 13: Rj ← ((xj , tj), yj) 14: b← K elements from R picked at random 15: νi+1 ← γνi + η∇θL(b) 16: θi+1 ← θi − νi+1 17: i← i+ 1 18: Output: V̂θ(x, t) 4.2 COMMENTS Algorithm 4.1 is a type of bootstrapping method in that lines 12 and 13 make use of V̂θ(x, t) to generate points for regression to train Nθ(x, t) which in turn modify V̂θ(x, t) itself. At first glance, it is unclear whether the generated pairs ((xj , tj), yj) will result in a good approximation to the solution of our PDE after regression; however, given the form of our candidate function (10) we expect that points sampled near t = 0 will in fact be reasonable approximations of V (x, t) for small t. Given this assumption, we hypothesize that despite the presence of misleading data, our network will be able to do a good job at regressing over all points, thus improving our initial model and allowing the generation of improved data. By repeating this procedure, we expect the accuracy of the boundary condition to ”propagate” backward in time (possibly with some minor error) in the form of better and better points for regression. Another important aspect from line 13 is that we are simulating our dynamics forward in time using the Euler approximation step xj + f(xj , a∗, b∗)∆t. In practice, depending on the variability and complexity of the dynamics, one might use a Runge-Kutta method or a more involved integration procedure. 
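The following is a condensed, runnable sketch of Algorithm 1 for the simplest setting, an autonomous system with update (4), so the inner optimization over inputs on line 11 disappears. It is not the authors' implementation: the library (PyTorch), network size, dynamics, and hyperparameters are illustrative, and torch.optim.SGD with momentum plays the role of the hand-written momentum update in lines 15-16.

```python
import torch, torch.nn as nn

torch.manual_seed(0)
A = torch.tensor([[-1.0, -2.0], [2.0, -1.0]])           # illustrative autonomous dynamics dx/dt = A x
T_h, dt, N, K, interval = 1.0, 1e-2, 500, 10, 1000

net = nn.Sequential(nn.Linear(3, 10), nn.Sigmoid(), nn.Linear(10, 1))   # N_theta(x, t)
opt = torch.optim.SGD(net.parameters(), lr=0.1, momentum=0.95)

def l(x):                                                # boundary condition V(x, 0) = ||x||^2 - 1
    return (x ** 2).sum(dim=1, keepdim=True) - 1.0

def V_hat(x, t):                                         # candidate V_theta(x, t) = V(x, 0) + t * N_theta(x, t)
    return l(x) + t * net(torch.cat([x, t], dim=1))

for i in range(20000):
    if i % interval == 0:                                # regenerate regression targets with the current model
        x = 10.0 * torch.rand(N, 2) - 5.0                # x ~ Uniform([-5, 5]^2)
        t = -T_h * torch.rand(N, 1)                      # t ~ Uniform([-T, 0])
        with torch.no_grad():
            x_next = x + x @ A.T * dt                    # Euler step x + f(x) * dt
            y = torch.minimum(V_hat(x, t), V_hat(x_next, t))   # update (4) evaluated with V_hat itself
    idx = torch.randint(N, (K,))                         # mini-batch of size K
    loss = (y[idx] - V_hat(x[idx], t[idx])).abs().mean()
    opt.zero_grad(); loss.backward(); opt.step()

print("final batch loss:", float(loss))
```

The single- and two-input versions only change the target computation: the plain Euler lookup is replaced by a min (or max-min) over the admissible inputs, as in lines 11-12 of Algorithm 1.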
For the experiments in the next sections a Runge-Kutta method with 4 stages (RK4) was used. 5 EXPERIMENTS In this section we present a few 2-dimensional experiments to demonstrate the validity of our claim and the effectiveness of the algorithm. To measure the performance of the algorithm, we compare the difference between our computed approximation and the true analytical solution. In case it is not straightforward to obtain the solution, a very accurate approximation taken from state-of-the-art tools is used instead. In particular, we make use of the LevelSet Toolbox from Mitchell (2007), a powerful computational tool for obtaining good approximations to Hamilton-Jacobi (HJ) PDEs. The first error metric to be used will be E1(V̂θ(x, t)) := 1 M M∑ i=1 |V (xi, ti)− V̂θ(xi, ti)| (11) where M are the number of points chosen from our domain to compute the average absolute error and V (x, t) can denote either the true solution or an accurate approximation. In the case where the analytical solution is known, the points are taken uniformly at random over S; otherwise, they are taken over some grid in S and [−T, 0]. Lastly, we also use a second error metric E2(V̂θ(x, t)) := 1 M M∑ i=1 |∂V (xi, ti) ∂t +min{0, H(xi,∇xV )}| (12) similar to the one defined in (9), which denotes the extent by which (on average) the approximation is violating the PDE equality. For all experiments M = 3000, all chosen uniformly at random over S × [−T, 0]. In section 5.4 we also show a visual representation of the approximations. 5.1 A LINEAR SYSTEM In this experiment we study the performance of the algorithm on an autonomous system of the form ẋ = f(x) = [ −1 −2 2 −1 ] x (13) with V (x, 0) = ||x||2 − 1 and T = 1.0. For this simple system, the solution to the HJI PDE can be found analytically to be V (x, t) = e−t||x||2 − 1. One can easily verify this by checking it satisfies the boundary condition and (2). For this experiment, a feedforward neural network with a single hidden layer of 10 units and sigmoid activation functions was used. The number of points sampled was chosen to be N = 500, uniformly picked over the set S := {(x1, x2)|x1, x2 ∈ [−5, 5]} and over t ∈ [−T, 0]. The batches were picked to be of size K = 10, momentum decay γ = 0.95 and learning rate η = 0.1. The interval to renew the regression points was chosen to be 1000 iterations and the program was halted at 500,000 iterations. The results shown in Fig. 1 where taken over 10 runs of the algorithm concurrently executed over multiple threads. The overall time to run the 500,000 iterations for all threads was 1521 seconds. The average E1 error at halting time was in the order of 7× 10−2, whereas the E2 error was in the order of 3× 10−1. The sharp jumps appearing in the loss figure in the majority of cases correspond to the error after new points are generated and used for regression. 5.2 PURSUIT-EVASION GAME: SINGLE INPUT In this experiment we explore a pursuit-evasion game where a pursuer has to intercept an evader. In a first simplified approach, we assume the evader has a fixed heading and speed, whereas the pursuer has the same speed as the evader but has the liberty to change the direction of its heading. Fixing the evader at the origin with its heading aligned with the x-axis we frame the problem in relative coordinates between the evader and pursuer, that is x = [xr yr]T , where xr and yr represent the x and y position of the pursuer relative to the evader. 
This system’s dynamics are readily encoded in the following equation [ ẋr ẏr ] = f(x, b) = [ vpcos(b)− ve vpsin(b) ] (14) where vp = ve = 2.0 represent the speed of the pursuer and evader respectively, b ∈ [0, 2π] represents the input available to the pursuer, which is the angle with respect to the x-axis. In this simplified pursuit-evasion game we say the pursuer has captured the evader if they are within 1 unit of distance from each other. Thus, we define our capture condition by defining V (x, 0) = ||x||2−1, which will ensure that our approximation captures all the states from which the pursuer can capture the evader in within T = 1.0. As in the previous example, we choose the same network architecture and the same values for the halting time, renewal interval, N ,K,γ and η. The results shown in Fig. 2 where also taken over 10 runs of the algorithm like in section 5.2. The overall time to run the 500,000 iterations was 1952 seconds. The average E1 error at halting time was also in the order of 7× 10−2, whereas the E2 error was in the order of 1.5× 10−1. The points used to compute E1 were taken from a 51 × 51 grid at t = −0.5 (half of the time horizon), using a previously computed approximation from the LevelSet Toolbox. The reason why a single time instance was used to compute E1 was purely to reduce the amount of computation of the error at run-time. 5.3 PURSUIT-EVASION GAME: TWO INPUTS The last experimental example also consists of a pursuit-evasion game, but in this case the evader has access to a range of speeds through an input a ∈ [−2, 2]. The system dynamics thus become[ ẋr ẏr ] = f(x, a, b) = [ vpcos(b)− a vpsin(b) ] (15) and, similarly, V (x, 0) = ||x||2 − 1 and T = 1.0. As before, vp = 2.0. The interesting behavior we expect to see from this experiment, in comparison to the single input counterpart, is that this new available action to the evader will make it more difficult for the pursuer to intercept. This should then be evident by looking at our approximation V̂θ and its zero sub-level sets at different times. For this experiment we also chose the same architecture for the network as in the previous experiments and the same parameters, except for the halting time which was 300,000 iterations. The results shown in Fig. 3 where also taken over 10 runs of the algorithm. The overall time to run the 300,000 iterations over the all threads was 1028 seconds. The average E1 error at halting time was in the order of 6 × 10−2, whereas the E2 error was in the order of 1.5 × 10−1. Like in the single input case, the points used to compute E1 were taken from a 51 × 51 grid at t = −0.5 of a pre-computed approximation. 5.4 CONTOUR VISUALIZATION In this section we briefly display some of the contours for a neural network picked at random from those computed in the experimental section. Each line corresponds to the set of states where V̂θ(x, t) = 0 for t = 0,−0.25,−0.5,−0.75,−1.0. These contours enclose within them the states from which our system can reach the target set T within the absolute value of its associated time. As expected, the linear system’s contours expand radially in all directions since the origin is a stable equilibrium point3 where all trajectories converge. For the pursuit-evasion game of one input, we also see that the contours grow toward the right, which is a sensible outcome given that the pursuer can’t catch up with the evader if it starts somewhere where xr < −1.0. 
Finally, the last set of contours associated with the pursuer-evader game of two competing inputs also make sense, since starting states xr < −1.0 or xr > 1.0 should not permit the pursuer to intercept the evader, and so 3with the same negative real part for the eigenvalues the contours should not expand in those directions. As a last comparison, in Fig. 5 we display the actual contours that would be obtained using the LevelSet Toolbox. By comparing Fig. 5 and 4 one can qualitatively see that the neural network has learned an accurate approximation of V (x, t). 6 ADVANTAGES AND DISADVANTAGES The first advantage of using this method over gridding techniques is a dramatic improvement in memory requirements. For instance, using a standard grid with [51, 51, 10] discretization points per axis (i.e. 51 in xr, 51 in yr and 10 in t) each of the three previous experiments requires the storage of 26, 010 numbers, as opposed to 51 weights for our neural network. For the gridding approach this memory requirement must increase exponentially with the number of dimensions, whereas this need not be the case for our method. Furthermore, points that do not fall exactly on the grid have to be interpolated, whereas the neural network is an approximation that assigns values to all points in the domain. To this we can also add that fact that the neural network can yield the gradient at any point directly with backpropagation, whereas the gradient must once again be approximated for gridding techniques. The main disadvantage of this method, for small dimensional systems in particular, is the time requirement. Computing values over a grid with the LevelSet Toolbox for the previous systems took less than 10 seconds. This advantage of gridding/tabular procedures, however, quickly disappears in higher dimensions (4D, 5D...) due to the curse of dimensionality. Finally, another disadvantage of using this method is the necessity to tune hyper parameters. 7 CONCLUSION AND FUTURE WORK In this work we focus our attention on the idea that recursive/bootstrapped regression can be used in some problems where the function we wish to approximate has some known characteristics. In particular, we show that accurate approximations to the HJI PDE solution can be found by assigning a neural network two roles, one of them being function approximation, and the other data generation.To validate our hypothesis three different experiments with three distinct dynamical systems were performed with satisfactory results. In this work we did not focus on the architecture of the neural network, but rather on its ability to perform well on three distinct tasks using the same algorithm. In future work we will try to find whether one can construct wider or deeper neural networks and obtain better results. We also want to investigate how well this method scales with the number of state and input dimensions. Positive results in that front could suppose an important step to further alleviate the effects of the curse of dimensionality, which are pervasive in griding methods. ACKNOWLEDGMENTS Special thanks to Carlos Florensa for his implementation tips and to Jaime F. Fisac for helping in the process of writing this work. 8 EXTRA EXPERIMENT This experiment was designed to test the applicability of the method to problems beyond those presented in the previous sections. In particular, we show that with small changes we can also compute an accurate approximation to a pursuit-evasion problem in 3 dimensions. 
Similar to the previous examples, we frame the problem in relative coordinates with the x-axis aligned with the evader’s heading, and give both the pursuer and the evader control over their rate of rotation. This can be written as follows:

\[
\begin{bmatrix} \dot{x}_r \\ \dot{y}_r \\ \dot{\theta}_r \end{bmatrix} = f(x, a, b) = \begin{bmatrix} -v_e + v_p \cos(\theta_r) + a\, y_r \\ v_p \sin(\theta_r) - a\, x_r \\ b - a \end{bmatrix} \tag{16}
\]

For this problem the capture condition is encoded in the boundary condition V(x, 0) = ||[xr yr]ᵀ||2 − 1 (where we ignore θr, since capture only depends on the distance), and we consider a time horizon T = 1.0 s. We give both pursuer and evader the same speed vp = ve = 1.0 and the same range of turning rates a, b ∈ [−1, 1]. Unlike the previous experiments, we used a neural network with two hidden layers of 10 and 5 units, respectively, with sigmoid activations. The number of sampled points was chosen to be N = 2000, picked uniformly over the set S := {(xr, yr, θr) | xr, yr ∈ [−5, 5], θr ∈ [−π, π]} and over t ∈ [−T, 0]. The batches were of size K = 25, with momentum decay γ = 0.999 and learning rate η = 0.001. The interval to renew the regression points was chosen to be 1000 iterations, and the program was halted at 500,000 iterations.

As shown in Fig. 6, both error metrics decrease as the algorithm progresses, reaching an average E1 error on the order of 5.0 × 10⁻² and an average E2 error on the order of 1.0 × 10⁻¹. The points used to compute E1 were taken from a 51 × 51 × 50 approximation grid at t = −0.5 s. This set of experiments was run on a different machine (due to heavy usage of the first machine we had to switch to a different one) using 8 threads, and the total time for all threads to finish was 1000 seconds. Finally, Fig. 7 shows the zero level-set contour at t = −0.5, which is now a 3D surface, from side and top perspectives. The first row shows the output of the LevelSet Toolbox from each perspective, and the second row shows a 3D scatter plot of points on the zero level-set obtained from one of the 8 neural networks that were trained.

For this experiment, only 111 numbers were needed to store the approximation, as opposed to 51 × 51 × 50 × 10 = 1,300,500 numbers (i.e., 51 in xr, 51 in yr, 50 in θr and 10 in t) for a [51 × 51 × 50 × 10] grid approximation.
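To make the setup above more concrete, the sketch below writes out the relative dynamics of Eq. (16) together with one self-generated regression target in the spirit of Algorithm 1 (lines 11–12), using an RK4 integration step as mentioned in Section 4.2. It is a hedged sketch: `V_hat` again stands in for the trained approximation (only the boundary term is used so the snippet runs on its own), the turn rates are coarsely discretized, and the max–min is taken directly over the one-step lookahead values rather than through the Hamiltonian, a small simplification of the algorithm as stated.

```python
import numpy as np

V_E = V_P = 1.0  # evader and pursuer speeds used in this 3D experiment

def f(x, a, b):
    # Relative dynamics of Eq. (16); x = (x_r, y_r, theta_r), a: evader turn rate, b: pursuer turn rate.
    xr, yr, th = x
    return np.array([-V_E + V_P * np.cos(th) + a * yr,
                     V_P * np.sin(th) - a * xr,
                     b - a])

def rk4_step(x, a, b, dt=1e-2):
    # One RK4 step of the dynamics, as suggested in Section 4.2 instead of a plain Euler step.
    k1 = f(x, a, b)
    k2 = f(x + 0.5 * dt * k1, a, b)
    k3 = f(x + 0.5 * dt * k2, a, b)
    k4 = f(x + dt * k3, a, b)
    return x + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

def V_hat(x, t):
    # Stand-in for the trained approximation; here only the boundary term V(x, 0) = ||[x_r y_r]||_2 - 1.
    return np.hypot(x[0], x[1]) - 1.0

def bootstrap_target(x, t, dt=1e-2, n_grid=9):
    # Self-generated target in the spirit of Eq. (6): min{ V_hat(x,t), max_a min_b V_hat(x + f(x,a,b) dt, t) }.
    grid = np.linspace(-1.0, 1.0, n_grid)   # discretize the turn rates a, b in [-1, 1]
    lookahead = max(min(V_hat(rk4_step(x, a, b, dt), t) for b in grid) for a in grid)
    return min(V_hat(x, t), lookahead)

x0 = np.array([0.8, 0.3, np.pi / 4])
print(bootstrap_target(x0, t=-0.5))
```

As a quick check of the storage comparison, the two-hidden-layer network above (with the time variable as a fourth input) has 4·10 + 10 + 10·5 + 5 + 5·1 + 1 = 111 parameters, matching the 111 numbers quoted in the text.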
1. What is the main contribution of the paper regarding time-evolution PDEs?
2. What are the strengths of the proposed algorithm compared to traditional grid-based simulations?
3. How does the reviewer assess the novelty and potential usefulness of the proposed approach?
4. Are there any concerns or suggestions regarding the experimental evaluation of the algorithm?
5. How does the reviewer view the relevance of the paper's content to the machine learning and deep learning communities?
Review
This paper presents an algorithm for approximating the solution of certain time-evolution PDEs. The paper presents an interesting learning-based approach to solve such PDEs. The idea is to alternate between:
1. sampling points in space-time,
2. generating solutions to the PDE at "those" sampled points,
3. regressing a space-time function to satisfy the latter solutions at the sampled points (and hopefully generalize beyond those points).
I actually find the proposed algorithm interesting, and potentially useful in practice. The classic grid-based simulation of PDEs is often too expensive to be practical, due to the curse of dimensionality. Hence, learning the solution of PDEs makes a lot of sense for practical settings. On the other hand, as the authors point out, simply running gradient descent on the regression loss function does not work, because of the non-differentiability of the "min" that shows up in the studied PDEs. Therefore, I think the proposed idea is actually a very interesting approach to learning the PDE solution in the presence of non-differentiability, which is indeed a "challenging" setup for numerically solving PDEs. The paper motivates the problem (time-evolution PDE with a "min" operator applied to the spatial derivatives) by applications in control theory, but I think there is more direct interest in such problems for the machine learning community, and even the deep learning community. For example, http://link.springer.com/chapter/10.1007/978-3-319-14612-6_4 studies approximate solutions to PDEs with very similar properties (evolution + "min") to develop new optimization algorithms. The latter is indeed used to train deep networks: https://arxiv.org/abs/1601.04114
I think this work would catch even more attention if the authors could show some experiments with higher-dimensional problems (where grid-based methods are absolutely inapplicable).
1. What are the challenges associated with approximating solutions to PDEs using neural network approximators?
2. How does the lack of direct correlation between small PDE residuals and well-performing policies impact the approach taken by the paper?
3. Why are the 2D toy examples considered inadequate, and how might the approach be scaled up to achieve more practical applications?
4. What are some specific errors or typos present in the paper, such as "Range-Kutta"?
5. Considering the focus of the paper on utilizing neural networks, is the submission to ICLR appropriate, or would it be better suited for publication in a different journal such as ACC, ADPRL, or CDC?
Review
Approximating solutions to PDEs with NN approximators is very hard. In particular, the HJB and HJI equations have in general discontinuous and non-differentiable solutions, making them particularly tricky (unless the underlying process is a diffusion, in which case the Ito term makes everything smooth, but this paper doesn't do that). What's worse, there is no direct correlation between a small PDE residual and a well-performing policy [tsitsiklis? beard? todorov?, I forget]. There's been lots of work on this which is not properly cited. The 2D toy examples are inadequate. What reason is there to think this will scale to do anything useful? There are a bunch of typos ("Range-Kutta"?). More than anything, this paper is submitted to the wrong venue. There are no learned representations here. You're just using a NN. That's not what ICLR is about. Resubmit to ACC, ADPRL or CDC. Sorry for terseness. Despite the rough review, I absolutely love this direction of research. More than anything, you have to solve harder control problems for people to take notice...
1. What are the limitations of the proposed approach in terms of adaptability to new domains, systems, or boundary conditions?
2. How does the method perform when dealing with domains of various sizes, and are there any potential issues with scalability?
3. Can the approach be applied to other types of PDEs, such as diffusion, and what might be the challenges or opportunities in doing so?
Review
I have no familiarity with the HJI PDE (I've only dealt with parabolic PDEs such as diffusion in the past), so the details of transforming this problem into a supervised loss escape me. Therefore, as indicated below, my review should be taken as an "educated guess". I imagine that many readers of ICLR will face a similar problem to mine, and so, if this paper is accepted, at the least the authors should prepare an appendix that provides an introduction to the HJI PDE. At a high level, my comments are:
1. It seems that another disadvantage of this approach is that a new network must be trained for each new domain (including domain size), system function f(x), or boundary condition. If that is correct, I wonder if it's worth the trouble when existing tools already solve these PDEs. Can the authors shed light on a more "unifying approach" that would require minimal changes to generalize across PDEs?
2. How sensitive is the network's result to domains of different sizes? It seems only a single size 51 x 51 was tested. Do errors increase with domain size?
3. How general is this approach to PDEs of other types, e.g. diffusion?
ICLR
Title Can We Faithfully Represent Absence States to Compute Shapley Values on a DNN? Abstract Masking some input variables of a deep neural network (DNN) and computing output changes on the masked input sample represent a typical way to compute attributions of input variables in the sample. People usually mask an input variable using its baseline value. However, there is no theory to examine whether baseline value faithfully represents the absence of an input variable, i.e., removing all signals from the input variable. Fortunately, recent studies (Ren et al., 2023a; Deng et al., 2022a) show that the inference score of a DNN can be strictly disentangled into a set of causal patterns (or concepts) encoded by the DNN. Therefore, we propose to use causal patterns to examine the faithfulness of baseline values. More crucially, it is proven that causal patterns can be explained as the elementary rationale of the Shapley value. Furthermore, we propose a method to learn optimal baseline values, and experimental results have demonstrated its effectiveness. 1 INTRODUCTION Many attribution methods (Zhou et al., 2016; Selvaraju et al., 2017; Lundberg and Lee, 2017; Shrikumar et al., 2017) have been proposed to estimate the attribution (importance) of input variables to the model output, which represents an important direction in explainable AI. In this direction, many studies (Lundberg and Lee, 2017; Ancona et al., 2019; Fong et al., 2019) masked some input variables of a deep neural network (DNN), and they used the change of network outputs on the masked samples to estimate attributions of input variables. As Fig. 1 shows, there are different types of baseline values to represent the absence of input variables. Theoretically, the trustworthiness of attributions highly depends on whether the current baseline value can really remove the signal of the input variable without bringing in new out-of-distribution (OOD) features. However, there is no criterion to evaluate the signal removal of masking methods. To this end, we need to first break the blind faith that seemingly reasonable baseline values can faithfully represent the absence of input variables, and the blind faith that seemingly OOD baseline values definitely cause abnormal features. In fact, because a DNN may have complex inference logic, seemingly OOD baseline values do not necessarily generate OOD features. Concept/causality-emerging phenomenon. The core challenge of theoretically guaranteeing or examining whether the baseline value removes all or partial signals of an input variable is to explicitly define the signal/concept/knowledge encoded by a DNN in a countable manner. To this end, Ren et al. (2023a) have discovered a counter-intuitive concept-emerging phenomenon in a trained DNN. Although the DNN does not have a physical unit to encode explicit causality or concepts, Ren et al. (2023a); Deng et al. (2022a) have surprisingly discovered that when the DNN is sufficiently trained, the sparse and symbolic concepts emerge. Thus, we use such concepts as a new perspective to define the optimal baseline value for the absence of input variables. As Fig. 1 shows, each concept represents an AND relationship between a specific set S of input variables. The co-appearance of these input variables makes a numerical contribution US to the network output. Thus, we can consider such a concept as a causal pattern1 of the network output, ∗Quanshi Zhang is the corresponding author. 
He is with the Department of Computer Science and Engineering, the John Hopcroft Center, at the Shanghai Jiao Tong University, China. [email protected]. 1Note that in this paper, the causal pattern means the extracted causal relationship between input variables and the output encoded by the DNN, rather than the true intrinsic causal relationship hidden in data. and US is termed the causal effect. For example, the concept of a rooster’s head consists of the forehead, eyes, beak, and crown, i.e., S= {forehead, eyes, beak, crown} = {f, e, b, c} for short. Only if input variables f , e, b, and c co-appear, the causal pattern S is triggered and makes an effect US on the confidence of the head classification. Otherwise, the absence of any input variables in the causal pattern S will remove the effect. Ren et al. (2023a) have extracted a set of sparse causal patterns (concepts) encoded by the DNN. More importantly, the following finding has proven that such causal patterns1 can be considered as elementary inference logic used by the DNN. Specifically, given an input sample with n variables, we can generate 2n different masked samples. We can use a relatively small number of causal patterns to accurately mimic network outputs on all 2n masked samples, which guarantees the faithfulness of causal patterns. Defining optimal baseline values based on causal patterns. From the above perspective of causal patterns, whether baseline values look reasonable and fit human’s intuition is no longer the key factor to determine the trustworthiness of baseline values. Instead, we evaluate the faithfulness of baseline values by using causal patterns. Because the baseline value is supposed to represent the absence of an input variable, we find that setting an optimal baseline value usually generates the most simplified explanation of the DNN, i.e., we may extract a minimum number of causal patterns to explain the DNN. Such an explanation is the most reliable according to Occam’s Razor. • We prove that using incorrect baseline values makes a single causal pattern be explained as an exponential number of redundant causal patterns. Let us consider the following toy example, where the DNN contains a causal pattern S={f, e, b, c} with a considerable causal effect E on the output. If an incorrect baseline value bf of the variable f (forehead) just blurs the image patch, rather than fully remove its appearance, then masking the variable f cannot remove all score E. The remaining score E−U{f,e,b,c} will be explained as redundant causal patterns U{e,b}, U{e,c}, U{e,b,c}, etc. • Furthermore, incorrect baseline values may also generate new patterns. For example, if baseline values of {f, e, b, c} are set as black regions, then masking all four regions may generate a new pattern of a black square, which is a new causal pattern that influences the network output. Therefore, we consider that the optimal baseline value, which faithfully reflects the true inference logic, usually simplifies the set of causal patterns. I.e., it usually reduces the overall strength of existing causal effects most without introducing new causal effects. However, we find that most existing masking methods are not satisfactory from this perspective (see Section 3.2 and Table 1), although the masking method based on conditional distribution of input variables (Covert et al., 2020b; Frye et al., 2021) performs a bit better. 
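As a concrete toy illustration of the AND semantics described above, the following snippet (our own sketch, with made-up pattern names and effect values) evaluates which causal patterns are triggered on a masked sample: a pattern S contributes its effect US only when every variable in S is present.

```python
def triggered_output(present, patterns):
    """Sum the effects of all AND patterns whose variables are all present.

    present: set of variable names that are not masked.
    patterns: dict mapping a frozenset of variable names to its causal effect U_S.
    """
    return sum(effect for S, effect in patterns.items() if S <= present)

# Hypothetical "rooster head" pattern over forehead (f), eyes (e), beak (b), crown (c).
patterns = {frozenset("febc"): 3.0, frozenset("eb"): 0.4}
print(triggered_output(set("febc"), patterns))  # 3.4: both patterns are triggered
print(triggered_output(set("ebc"), patterns))   # 0.4: masking f removes the head pattern's effect
```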
In particular, we notice that Shapley values can also be derived from causal patterns in theory, i.e., the causal patterns are proven to be elementary effects of Shapley values. Therefore, we propose a new method to learn optimal baseline values for Shapley values, which removes the causal effects of the masked input variables and avoids introducing new causal effects. Contributions of this paper can be summarized as follows. (1) We propose a metric to examine whether the masking approach in attribution methods could faithfully represent the absence state of input variables. Based on this metric, we find that most previous masking methods are not reliable. (2) We define and develop an approach to estimating optimal baseline values for Shapley values, which ensures the trustworthiness of the attribution. 2 EXPLAINABLE AI THEORIES BASED ON GAME-THEORETIC INTERACTIONS This paper is a typical achievement on the theoretical system of game-theoretic interactions. In fact, our research group has developed and used the game-theoretical interaction as a new perspective to solve two challenges in explainable AI, i.e., (1) how to define and represent implicit knowledge encoded by a DNN as explicit and countable concepts, (2) how to use concepts encoded by the DNN to explain its representation power or performance. More importantly, we find that the gametheoretic interaction is also a good perspective to analyze the common mechanism shared by previous empirical findings and explanations of DNNs. • Explaining the knowledge/concepts encoded by a DNN. Defining interactions between input variables of a DNN in game theory is a typical research direction (Grabisch and Roubens, 1999; Sundararajan et al., 2020). To this end, we further defined the multi-variate interaction (Zhang et al., 2021a;d) and multi-order interaction (Zhang et al., 2021b) to represent interactions of different complexities. Ren et al. (2023a) and Li and Zhang (2023) first discovered that we could consider game-theoretic interactions as the concepts encoded by a DNN, considering the following three terms. (1) We found that a trained DNN usually only encoded very sparse and salient interactions, and each interaction made a certain effect on the network output. (2) We proved that we could just use the effects of such a small number of salient interactions to well mimic/predict network outputs on an exponential number of arbitrarily masked input samples. (3) We found that salient interactions usually exhibited strong transferability across different samples, strong transferability across different DNNs, and strong discrimination power. Thus, the above three perspectives comprised the solid foundation of considering salient interactions as the concepts encoded by a DNN. Furthermore, Cheng et al. (2021b) found that such interactions usually represented the most reliable and prototypical concepts encoded by a DNN. Cheng et al. (2021a) further analyzed the different signal-processing behaviors of a DNN in encoding shapes and textures. • The game-theoretic interaction is also a new perspective to investigate the representation power of a DNN. Deng et al. (2022a) proved a counter-intuitive bottleneck/difficulty of a DNN in representing interactions of the intermediate complexity. Zhang et al. (2021b) explored the effects of the dropout operation on interactions to explain the generalization power of a DNN. Wang et al. (2021a;b); Ren et al. 
(2021) used interactions between input variables to explain the adversarial robustness and adversarial transferability of a DNN. Zhou et al. (2023) found that complex (high-order) interactions were more likely to be over-fitted, and they used the generalization power of different interaction concepts to explain the generalization power of the entire DNN. Ren et al. (2023b) proved that a Bayesian neural network (BNN) was less likely to encode complex (high-order) interactions, which avoided over-fitting.
• Game-theoretic interactions are also used to analyze the common mechanism shared by many empirical findings. Deng et al. (2022b) discovered that almost all (fourteen) attribution methods could be re-formulated as a reallocation of interactions in mathematics. This enabled the fair comparison between different attribution methods. Zhang et al. (2022) proved that twelve previous empirical methods of boosting adversarial transferability could be explained as reducing interactions between pixel-wise adversarial perturbations.

3 PROBLEMS WITH THE REPRESENTATION OF THE MASKED STATES

The Shapley value (Shapley, 1953) was first introduced in game theory to measure the contribution of each player in a game. People usually use Shapley values to estimate attributions of input variables of a DNN. Let the input sample x of the DNN contain n input variables, i.e., x = [x1, . . . , xn]. The Shapley value of the i-th input variable ϕi is defined as follows.

ϕi = ∑_{S⊆N\{i}} [ |S|! (n − |S| − 1)! / n! ] · [ v(xS∪{i}) − v(xS) ]   (1)

where v(xS) ∈ R denotes the model output when variables in S are present, and variables in N \ S are masked. Specifically, v(x∅) represents the model output when all input variables are masked. The Shapley value of the variable i is computed as the weighted marginal contribution of i when the variable i is present w.r.t. the case when the variable i is masked, i.e. v(xS∪{i}) − v(xS).

Figure 2: Causal patterns that explain the inference on a sample in the income dataset (causal effects US of seven patterns over the attributes relationship, marital status, age, education, and sex for the output "the income is less than 50k").

Table 1: The ratio R of the remaining and newly introduced causal effects in the masked inputs. A small value of R meant that baseline values removed most original causal effects and did not introduce many new effects.

            R(zero)   R(mean)   R(blur)   R(conditional)   R(ours)
MNIST       1.1736    0.3043    0.4159    0.3780           0.2185
CIFAR-10    0.6630    0.8042    0.7288    0.4027           0.1211

The Shapley value is widely considered a fair attribution method, because it satisfies the linearity, dummy, symmetry, and efficiency axioms (Weber, 1988) (please refer to Appendix D). However, when we explain a DNN, a typical challenge is how to faithfully define the absence of an input variable. The most classical way is to use baseline values (also called reference values) b = [b1, b2, . . . , bn] to mask variables to represent their absence. Specifically, given an input sample x, xS denotes a masked sample, which is generated by masking variables in the set N \ S:

(xS)i = xi if i ∈ S; otherwise, (xS)i = bi   (2)

We aim to learn optimal baseline values b to faithfully represent absent states of input variables.

Decomposing a DNN's output into sparse interactions. Given a trained DNN v and an input x with n input variables, Ren et al. (2023a) have proven that the DNN output v(x) can be decomposed into effects of interactions between input variables. Specifically, let S ⊆ N denote a subset of input variables.
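Before turning to that decomposition, note that Eq. (1) can be evaluated exactly by brute force when the number of variables is tiny. The sketch below is our own illustration (the toy function f and the zero baselines are hypothetical, not the authors' code) of computing Shapley values for a masked-input game.

```python
import math
from itertools import combinations

def shapley_values(v, n):
    """Exact Shapley values, Eq. (1), for a set function v over players {0, ..., n-1}.

    v takes a frozenset S and returns v(x_S); the cost is O(2^n), so this is
    only meant for small toy examples.
    """
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(len(others) + 1):
            for S in combinations(others, k):
                S = frozenset(S)
                weight = math.factorial(len(S)) * math.factorial(n - len(S) - 1) / math.factorial(n)
                phi[i] += weight * (v(S | {i}) - v(S))
    return phi

# Toy model: v(x_S) evaluates f on the sample with masked entries set to baseline values.
x = [1.0, 1.0, 1.0]
baseline = [0.0, 0.0, 0.0]
f = lambda z: 3 * z[0] * z[1] + z[2]   # hypothetical function, for illustration only

def v(S):
    z = [x[i] if i in S else baseline[i] for i in range(len(x))]
    return f(z)

print(shapley_values(v, 3))  # roughly [1.5, 1.5, 1.0]
```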
The interaction effect between variables in S is defined as the following Harsanyi dividend (Harsanyi, 1982). US def = ∑ S′⊆S (−1)|S|−|S ′| · v(xS′) (3) Based on this definition, we have v(x) = ∑ S⊆N US . Sparse salient interactions can be considered as causal patterns1 (or concepts) encoded by the DNN. Theorem 1 and Remark 1 prove that most interactions have ignorable effects US ≈ 0, and people can use a few salient interactions with non-ignorable effects to well approximate the inference scores on 2n different masked samples. Thus, we can consider such interactions as causal patterns1 or concepts encoded by the DNN. Accordingly, we can consider the interaction effect US as the causal effect. Besides, Remark 1 has been verified on different DNNs learned for various tasks by experiments in both Appendix G.1 and (Ren et al., 2023a). Theorem 1 (Faithfulness, proven by Ren et al. (2023a) and Appendix E.1) Let us consider a DNN v and an input sample x with n input variables. We can generate 2n different masked samples, i.e., {xS |S ⊆ N}. The DNN’s outputs on all masked samples can always be well mimicked as the sum of the triggered interaction effects in Eq. (3), i.e., ∀S⊆N, v(xS) = ∑ S′⊆S US′ . Remark 1 (Sparsity) Interaction effects in most DNNs are usually very sparse. Most interaction effects are almost zero, i.e., US ≈ 0. A few most salient interaction effects in Ω (less than 100 interaction effects in most cases) are already enough to approximate the DNN, i.e., ∀S ⊆ N, v(xS)≈∑ S′∈Ω,S′⊆S US′ , where |Ω| ≪ 2 n. Each causal pattern (concept) S represents an AND relationship between input variables in S. For example, the head pattern of a rooster consists of {forehead, eyes, beak, crown}. If the forehead, eyes, beak, and crown of the rooster co-appear, then the head pattern S={forehead, eyes, beak, crown} is triggered and makes a causal effect US on the output. Otherwise, if any part is masked, the causal pattern S will not be triggered, and the DNN’s inference score v(x) will not receive the causal effect US . In sum, when we mask an input variable i, it is supposed to remove all causal effects of all AND relationships that contain the variable i. Please see Appendix F for the proof. 3.1 EXAMINING THE FAITHFULNESS OF BASELINE VALUES USING CAUSAL PATTERNS We use salient causal patterns (or concepts) to evaluate the faithfulness of masking methods. Specifically, we examine whether baseline values remove most causal effects depending on xi, and whether baseline values generate new causal effects. The evaluation of the masking methods based on salient causal effects is theoretically supported from the following three perspectives. First, Theorem 1 and Remark 1 prove that the inference score of a DNN can be faithfully disentangled into a relatively small number of causal patterns. Second, Theorem 2 shows that Shapley values can be explained as a re-allocation of causal effects to input variables. Therefore, reducing effects of salient patterns means the removal of elementary factors that determine Shapley values. Besides, in order to verify that the reduction of causal patterns can really represent the absence of input variables, we have conducted experiments to find that salient patterns triggered by white noise inputs were much less than those triggered by normal images. Please see Appendix G.2 for details. Theorem 2 (proven by Harsanyi (1982) and Appendix E.2) We can directly derive Shapley values from the effects US of causal patterns. 
The Shapley value can be considered as uniformly allocating each causal pattern S’s effect US to all its variables, i.e. ϕi = ∑ S⊆N\{i} 1 |S|+1US∪{i}. Third, an incorrect baseline value bi will make partial effects of the AND relationship of the variable i be mistakenly explained as an exponential number of additional redundant causal patterns, which significantly complicates the explanation. Therefore, the optimal baseline value is supposed to generate the most sparse causal patterns as the simplest explanation of the DNN. Compared to dense causal patterns generated by sub-optimal baseline values, the simplest explanation removes as many as existing causal effects as possible without introducing additional causal effects. Remark 2 (proof in Appendix E.3) Let us consider a function with a single causal pattern f(xS)= wS ∏ j∈S(xj−δj). Accordingly, ground-truth baseline values of variables are obviously {δj}, because setting any variable ∀j ∈ S, xj = δj will deactivate this pattern. Given the correct baseline values b∗j =δj , we can use a single causal pattern to regress f(xS), i.e., US = f(xS), ∀ S′ ̸= S,US′ = 0. Theorem 3 (proof in Appendix E.3) For the function f(xS)=wS ∏ j∈S(xj−δj), if we use m ′ incorrect baseline values {b′j |b′j ̸=δj} to replace correct ones to compute causal effects, then the function will be explained to contain at most 2m ′ causal patterns. Theorem 4 (proof in Appendix E.3) If we use m′ incorrect baseline values to compute causal effects in the function f(xS) = wS ∏ j∈S(xj −δj), a total of ( m′ k−|S|+m′ ) causal patterns of the k-th order emerge, k ≥ |S|−m′. A causal pattern of the k-th order means that this causal pattern represents the AND relationship between k variables. Specifically, Remark 2, Theorems 3 and 4 provide a new perspective to understand how incorrect baseline values generate new causal patterns. Remark 2 shows how correct baseline values explain a toy model that contains a single causal pattern. Theorems 3 and 4 show that incorrect baseline values will use an exponential number of redundant low-order patterns to explain a single high-order causal pattern. For example, we are given the function f(x) =w(xp−δp)(xq−δq) s.t. xp = 3, xq = 4, δp = 2, δq = 3. If we use ground-truth baseline values {δp, δq}, then the function is explained as simple as a single causal pattern Ω={{p, q}}, which yields correct Shapley values ϕp = ϕq = 0.5 ·w, according to Theorem 2. Otherwise, if we use incorrect baseline values {b′p = 1, b′q = 1}, then this function will be explained as four causal patterns Ω= {∅, {p}, {q}, {p, q}}, i.e., f(x) = U∅C∅ +U{p}C{p} +U{q}C{q} +U{p,q}C{p,q}, where U∅ = 2w, U{p} = −4w, U{q} = −3w, and U{p,q} = 6w are computed using incorrect baseline values. Incorrect baseline values increase complicated causal patterns and lead to incorrect Shapley values ϕp = −w, ϕq = 0. In fact, the existence of most newly introduced causal patterns is due to that the effects of a high-order causal pattern are not fully removed, and that OOD causal patterns (new OOD edges or shapes) may be caused by incorrect baseline values. 3.2 PROBLEMS WITH PREVIOUS MASKING METHODS In this subsection, we compare causal patterns in the masked sample with causal patterns in the original sample to evaluate the following baseline values. (1) Mean baseline values. As Fig. 1 shows, the baseline value of each input variable is set to the mean value of this variable over all samples (Dabkowski and Gal, 2017), i.e. bi = Ex[xi]. 
However, empirically, this method actually introduces additional signals to the input. For example, mean values introduce massive grey dots to images and may form new edges as abnormal causal patterns. This has been verified by experiments in Table 1. Experimental details will be introduced later. (2) Zero baseline values. Baseline values of all input variables are set to zero (Ancona et al., 2019; Sundararajan et al., 2017), i.e. ∀i ∈ N, bi = 0. As Fig. 1 shows, just like mean baseline values, zero baseline values also introduce additional signals (black dots) to the input (verified in Table 1). (3) Blurring input samples. Fong and Vedaldi (2017) and Fong et al. (2019) blur image pixels xi using a Gaussian kernel as its masked state. Covert et al. (2020a); Sturmfels et al. (2020) mentioned that this approach only removed high-frequency signals, but failed to remove low-frequency signals. (4) For each input variable, determining a different baseline value for each specific context S. Instead of fixing baseline values as constants, some studies use varying baseline values to compute v(xS) given x, which are determined temporarily by the context S in x. Some methods (Frye et al., 2021; Covert et al., 2020b) define v(xS) by modeling the conditional distribution of variable values in N \S given the context S, i.e. v(xS) = Ep(x′|xS)[model(xS ⊔ x ′ N\S)]. The operation ⊔ means the concatenation of x’s dimensions in S and x′’s dimensions in N \ S. By assuming the independence between input variables, the above conditional baseline values can be simplified to marginal baseline values (Lundberg and Lee, 2017), i.e. v(xS)=Ep(x′)[model(xS ⊔ x′N\S)]. We conducted experiments to examine whether the above baseline values remove all causal patterns in the original input and whether baseline values introduce new causal patterns. We used the metric R=Ex [ ( ∑ S⊆N |U ′ S |− ∑ S⊆N |U (noise) S |)/( ∑ S⊆N |US |) ] to evaluate the quality of masking. We generated a set of samples based on x, where a set of input variables were masked, and U ′S denote the causal effect in such masked samples. US denote the causal effect in the original sample x, which was used for normalization. U (noise)S denotes the causal effect in a white noise input, and it represents the unavoidable effect of huge amounts of noise patterns. Thus, we considered the U (noise)S term as an inevitable anchor value and removed it from R for a more convincing evaluation. The masking method would have two kinds of effects on causal patterns. (1) We hoped to remove all existing salient patterns in the original sample. (2) We did not expect the masking method to introduce new salient patterns. Interestingly, the removal of existing salient patterns decreased the R value, while the triggering of new patterns increased the R value. Thus, the R metric reflected both effects. A small value of R indicated a good setting of baseline values. We used 20 images in the MNIST dataset (LeCun et al., 1998) and 20 images in the CIFAR-10 dataset (Krizhevsky et al., 2009) to compute R, respectively. We split each MNIST image into 7× 7 grids and split each CIFAR-10 image into 8× 8 grids. For each image, we masked the central 4× 3 grids using the zero baseline, mean baseline, blur baseline, and the baseline based on the conditional distribution, and computed the metric of R(zero), R(mean), R(blur), and R(conditional), respectively. Table 1 shows that the ratio R by using previous baseline values were all large. 
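Returning to the single-pattern example of Remark 2 and Theorems 3-4 above (f(xS) = wS ∏ (xj − δj) with xp = 3, xq = 4, δp = 2, δq = 3), the numbers quoted there can be checked numerically. The sketch below (our own illustration, with w = 1) computes Harsanyi dividends, Eq. (3), by brute force: the correct baselines δ = (2, 3) yield a single causal pattern, whereas the incorrect baselines b′ = (1, 1) yield four patterns and the wrong Shapley values.

```python
from itertools import chain, combinations

def powerset(items):
    items = list(items)
    return chain.from_iterable(combinations(items, r) for r in range(len(items) + 1))

def dividends(f, x, b):
    """Harsanyi dividends U_S of f, Eq. (3), computed with baseline values b."""
    n = len(x)
    v = lambda S: f([x[i] if i in S else b[i] for i in range(n)])
    U = {}
    for S in map(frozenset, powerset(range(n))):
        U[S] = sum((-1) ** (len(S) - len(Sp)) * v(frozenset(Sp)) for Sp in powerset(S))
    return U

# Single-pattern function from the paper's example: f = w (x_p - 2)(x_q - 3), with w = 1.
w = 1.0
f = lambda z: w * (z[0] - 2.0) * (z[1] - 3.0)
x = [3.0, 4.0]

correct = dividends(f, x, b=[2.0, 3.0])     # ground-truth baselines delta = (2, 3)
incorrect = dividends(f, x, b=[1.0, 1.0])   # incorrect baselines b' = (1, 1)

print({tuple(sorted(S)): u for S, u in correct.items() if abs(u) > 1e-9})
# -> {(0, 1): 1.0}: one causal pattern, so phi_p = phi_q = 0.5 w
print({tuple(sorted(S)): u for S, u in incorrect.items() if abs(u) > 1e-9})
# -> {(): 2.0, (0,): -4.0, (1,): -3.0, (0, 1): 6.0}: four patterns, phi_p = -w, phi_q = 0
```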
Although the masking method based on the conditional distribution performed better than some other baseline values, our method exhibited the best performance. This indicates that previous masking methods did not remove most existing patterns and/or triggered new patterns.

3.3 ABSENCE STATES AND OPTIMAL BASELINE VALUES

In the original scenario of game theory, the Shapley value was proposed without the need to define the absence of players. When people explain a DNN, we consider that the true absence state of variables should generate the most simplified causal explanation. Remark 2 and Theorem 3 show that correct baseline values usually generate the simplest causal explanation, i.e., using the least number of causal patterns to explain the DNN. In comparison, if an incorrect baseline value bi does not fully remove all effects of AND relationships of the variable i, then the remaining effects will be mistakenly explained as a large number of other redundant patterns. The above proof well fits Occam's Razor, i.e., the simplest causality with the minimum number of causal patterns is more likely to represent the essence of the DNN's inference logic. This also lets us consider the baseline values that minimize the number of salient causal patterns (i.e., achieving the simplest causality) as the optimal baseline values. Therefore, the learning of the baseline value b∗i of the i-th variable can be formulated to sparsify causal patterns in the deep model. Particularly, such baseline values are supposed to remove existing causal effects without introducing many new effects.

b∗ = argmin_b ∑_x |Ω(x)|,  subject to  Ω(x) = {S ⊆ N : |US(x|b)| > τ}   (4)

where US(x|b) denotes the causal effect computed on the sample x by setting baseline values to b.

4 ESTIMATING BASELINE VALUES

Based on Theorem 3, we derive Eq. (4) to learn optimal baseline values, but the computational cost of enumerating all causal patterns is exponential. Thus, we explore an approximate solution to learning baseline values. According to Theorem 4, incorrect baseline values usually mistakenly explain high-order causal patterns as an unnecessarily large number of low-order causal patterns, where the order m of the causal effect US is defined as the cardinality of S, m = |S|. Thus, the objective of learning baseline values is roughly equivalent to penalizing effects of low-order causal patterns, in order to prevent learning incorrect baseline values that mistakenly represent a high-order pattern as an exponential number of low-order patterns.

min_b L(b),  subject to  L(b) = ∑_x ∑_{S⊆N, |S|≤k} |US(x|b)|   (5)

An approximate-yet-efficient solution. When each input sample contains a huge number of variables, e.g., an image sample, directly optimizing Eq. (5) is NP-hard. Fortunately, we find that the multi-order Shapley value and the multi-order marginal benefit in the following equation have strong connections with multi-order causal patterns (proven in Appendix H), as follows.

ϕ(m)i(x|b) := E_{S⊆N\{i}, |S|=m} [ v(xS∪{i}, b) − v(xS, b) ] = E_{S⊆N\{i}, |S|=m} [ ∑_{L⊆S} UL∪{i}(x|b) ]
∆vi(S|x, b) := v(xS∪{i}, b) − v(xS, b) = ∑_{L⊆S} UL∪{i}(x|b)   (6)

where ϕ(m)i(x|b) and ∆vi(S|x, b) denote the m-order Shapley value and the m-order marginal benefit computed using baseline values b, respectively, where the order m is given as m = |S|. According to the above equation, high-order causal patterns US are only contained in high-order Shapley values ϕ(m)i and high-order marginal benefits ∆vi.
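The m-order marginal benefit in Eq. (6) only requires forward passes on pairs of masked samples, so it can be estimated by sampling contexts S of size m. The sketch below is our own illustration of such a Monte-Carlo estimate; the toy function and the zero baselines are hypothetical.

```python
import random

def multi_order_marginal_benefit(v, n, i, m, num_samples=100, rng=None):
    """Monte-Carlo estimate of the m-order marginal benefit of variable i, Eq. (6).

    Delta v_i(S) = v(x_{S u {i}}) - v(x_S), averaged over random contexts S with
    |S| = m drawn from N \\ {i}; v maps a frozenset of present variables to the
    model output computed with the current baseline values.
    """
    rng = rng or random.Random(0)
    others = [j for j in range(n) if j != i]
    total = 0.0
    for _ in range(num_samples):
        S = frozenset(rng.sample(others, m))
        total += v(S | {i}) - v(S)
    return total / num_samples

# Toy usage with a hypothetical function and zero baselines.
x, b = [1.0] * 5, [0.0] * 5
f = lambda z: z[0] * z[1] * z[2] + 0.5 * z[3]
v = lambda S: f([x[j] if j in S else b[j] for j in range(len(x))])
print(multi_order_marginal_benefit(v, n=5, i=0, m=2))
```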
Therefore, in order to penalize the effects of low-order causal patterns, we penalize the strength of low-order Shapley values and low-order marginal benefits, respectively, as an engineering solution to boost computational efficiency. In experiments, these loss functions were optimized via SGD.

LShapley(b) = ∑_{m∼Unif(0,λ)} ∑_{x∈X} ∑_{i∈N} |ϕ(m)i(x|b)|,   Lmarginal(b) = ∑_{m∼Unif(0,λ)} ∑_{x∈X} ∑_{i∈N} E_{S⊆N, |S|=m} |∆vi(S|x, b)|   (7)

where λ ≥ m denotes the maximum order to be penalized. We have conducted experiments to verify that baseline values b learned by the loss functions in Eq. (7) could effectively sparsify causal effects of low-order causal patterns in Eq. (5). Please see Appendix G.3 for results. Most importantly, we still used the metric R in Section 3.2 to check whether the learned baseline values removed original causal patterns in the input while not introducing new patterns. The low value of R(ours) in Table 1 shows that baseline values learned by our method successfully removed existing salient causal patterns without introducing many new salient patterns.

5 EXPERIMENTS

5.1 VERIFICATION OF CORRECTNESS OF BASELINE VALUES AND SHAPLEY VALUES

Correctness of baseline values on synthetic functions. People usually cannot determine the ground truth of baseline values for real images, such as the MNIST dataset. Therefore, we conducted experiments on synthetic functions with ground-truth baseline values, in order to verify the correctness of the learned baseline values. We randomly generated 100 functions, whose causal patterns and ground truth of baseline values could be easily determined. This dataset has been released at https://github.com/zzp1012/faithful-baseline-value. The generated functions were composed of addition, subtraction, multiplication, exponentiation, and sigmoid operations (see Table 3). For example, for the function y = sigmoid(3x1x2 − 3x3 − 1.5) − x4x5 + 0.25(x6 + x7)^2, xi ∈ {0, 1}, there were three causal patterns (i.e. {x1, x2, x3}, {x4, x5}, {x6, x7}), which were activated only if xi = 1 for i ∈ {1, 2, 4, 5, 6, 7} and x3 = 0. In this case, the ground truth of baseline values was b∗i = 0 for i ∈ {1, 2, 4, 5, 6, 7} and b∗3 = 1. Please see Appendix G.4 for more discussions about the setting of ground-truth baseline values. We used our method to learn baseline values on these functions and tested the accuracy. Note that |bi − b∗i| ∈ [0, 1] and b∗i ∈ {0, 1}. If |bi − b∗i| < 0.5, we considered the learned baseline value correct. We set λ = 0.5n in both LShapley and Lmarginal. The results are reported in Table 4 and are discussed later.

Table 5: Accuracy of Shapley values on the extended Addition-Multiplication dataset using different settings of baseline values.

            Zero      Mean      Baseline values in SHAP   Kernel SHAP   Frye et al. (2021)   Ours
Accuracy    82.88%    72.63%    81.25%                    33.88%        66.00%               100%

Table 6: An example of Shapley values computed on different baseline values. The function is model(x) = −2.62x1 − 5x3 − 1.98x6(x4 − 0.94) + 1.15(x5 − 0.91) − 4.23x7, and the input is x = [0, 1, 1, 1, 1, 1, 1].

Baseline values    The computed Shapley values {ϕi}
Truth/Ours         {0, 0, −5, −0.06, 0.10, −0.06, −4.23}
Zero baseline      {0, 0, −5, −0.99, 1.15, 0.87, −4.23}
Mean baseline      {1.31, 0, −2.50, −0.74, 0.58, 0.19, −2.11}
Setting in SHAP    {0.014, 0, −0.011, −0.003, 0.003, 0.001, −0.010}

Correctness of baseline values on functions in (Tsang et al., 2018). Besides, we also evaluated the correctness of the learned baseline values using functions in Tsang et al. (2018).
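Stepping back to the optimization of Eq. (7): as a rough illustration of how baseline values might be learned with the Lmarginal penalty via SGD, the PyTorch sketch below treats b as a learnable parameter and penalizes low-order marginal benefits measured on the network output. This is a simplified, hypothetical implementation of ours (single-sample updates, penalty on the scalar output rather than on the penultimate-layer feature), not the authors' released code.

```python
import torch

def learn_baselines(model, samples, n, lam, steps=1000, lr=0.01, device="cpu"):
    """Sketch of learning baseline values b with the L_marginal penalty of Eq. (7).

    model: a differentiable network taking a (batch, n) tensor.
    samples: (num_samples, n) tensor of training inputs.
    lam: maximum order to penalize (lambda in the paper).
    """
    b = torch.zeros(n, device=device, requires_grad=True)      # zero-init baselines
    opt = torch.optim.SGD([b], lr=lr, momentum=0.9)
    for _ in range(steps):
        x = samples[torch.randint(len(samples), (1,))].squeeze(0)
        i = torch.randint(n, (1,)).item()                       # variable whose benefit is penalized
        m = torch.randint(0, lam + 1, (1,)).item()              # order sampled from {0, ..., lambda}
        others = [j for j in range(n) if j != i]
        idx = torch.randperm(len(others))[:m].tolist()
        S = {others[j] for j in idx}                            # random context of size m
        mask_S = torch.tensor([1.0 if j in S else 0.0 for j in range(n)], device=device)
        mask_Si = mask_S.clone(); mask_Si[i] = 1.0
        x_S = mask_S * x + (1 - mask_S) * b                     # masked samples built from b
        x_Si = mask_Si * x + (1 - mask_Si) * b
        loss = (model(x_Si.unsqueeze(0)) - model(x_S.unsqueeze(0))).abs().sum()
        opt.zero_grad(); loss.backward(); opt.step()
    return b.detach()

# Hypothetical usage: a small MLP on 12 tabular features, lambda = 0.5 n.
net = torch.nn.Sequential(torch.nn.Linear(12, 32), torch.nn.ReLU(), torch.nn.Linear(32, 1))
data = torch.rand(64, 12)
print(learn_baselines(net, data, n=12, lam=6, steps=200))
```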
Among all the 92 input variables in these functions, the ground truth of 61 variables could be determined (see Appendix G.4). Thus, we used these annotated baseline values to test the accuracy. Table 4 reports the accuracy of the learned baseline values on the above functions. In most cases, the accuracy was above 90%, showing that our method could effectively learn correct baseline values. A few functions in (Tsang et al., 2018) did not have salient causal patterns, which caused errors in the learning. Besides, in experiments, we tested our method under three different initializations of baseline values (i.e., 0, 0.5, and 1). Table 4 shows that baseline values learned with different initialization settings all converged to similar and high accuracy. Correctness of the computed Shapley values. Incorrect baseline values lead to incorrect Shapley values. We verified the correctness of the computed Shapley values on the extended AdditionMultiplication dataset (Zhang et al., 2021c). We added the subtraction operation to avoid all baseline values being zero. Theorem 2 considers the Shapley value as a uniform assignment of effects of each causal pattern to its compositional variables. This enabled us to determine the ground-truth Shapley value of variables without baseline values based on causal patterns. For example, the function f(x) = 3x1x2 + 5x3x4 + x5 s.t. x = [1, 1, 1, 1, 1] contained three causal patterns, according to the principle of the most simplified causality. Accordingly, the ground-truth Shapley values were ϕ̂1= ϕ̂2=3/2, ϕ̂3= ϕ̂4=5/2, and ϕ̂5=1. See Appendix G.5 for more details. The estimated Shapley value ϕi was considered correct if |ϕi− ϕ̂i| ≤ 0.01; otherwise, incorrect. Then, we computed the accuracy of the estimated Shapley values as the ratio of input variables with correct Shapley values. Discussion on why the learned baseline values generated correct Shapley values. We computed Shapley values of variables in the extended Addition-Multiplication dataset using different baseline values, and compared their accuracy in Table 5. The result shows that our method exhibited the highest accuracy. Table 6 shows an example of incorrect Shapley values computed by using other baseline values. Our method generated correct Shapley values in this example. For the variable x6, due to its negative coefficient −1.98, its contribution should be negative. However, all other baseline values generated positive Shapley values for x6. The term −4.23x7 showed the significant effect of the variable x7 on the output, but its Shapley value computed using baseline values in SHAP was just −0.010, which was obviously incorrect. 
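Per Theorem 2, the ground-truth Shapley values used above follow from uniformly splitting each causal pattern's effect among its variables. A small sketch of that bookkeeping (our own illustration) is shown below for the example f(x) = 3x1x2 + 5x3x4 + x5 at x = [1, 1, 1, 1, 1].

```python
def shapley_from_patterns(patterns, n):
    """Theorem 2: split each causal pattern's effect U_S uniformly among its variables."""
    phi = [0.0] * n
    for S, effect in patterns.items():
        for i in S:
            phi[i] += effect / len(S)
    return phi

# f(x) = 3 x1 x2 + 5 x3 x4 + x5 at x = [1, 1, 1, 1, 1]: three causal patterns.
patterns = {(0, 1): 3.0, (2, 3): 5.0, (4,): 1.0}
print(shapley_from_patterns(patterns, n=5))  # [1.5, 1.5, 2.5, 2.5, 1.0]
```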
Figure 3: The learned baseline values (left) and Shapley values computed with different baseline values (right) on the income dataset. The panels compare, for each attribute (age, workclass, education, marital-status, occupation, relationship, race, sex, capital-gain, capital-loss, hours-per-week, native-country), the baseline values learned by our method with LShapley and Lmarginal under zero-init and mean-init, and the Shapley values computed with zero baseline values, mean baseline values, the baselines in SHAP and SAGE, and ours. Results on the MNIST, the CIFAR-10, and the credit datasets are shown in Appendix G.6 and G.7.

5.2 RESULTS AND EVALUATION ON REALISTIC DATASETS AND MODELS

Learning baseline values. We used our method to learn baseline values for MLPs, LeNet (LeCun et al., 1998), and ResNet-20 (He et al., 2016) trained on the UCI South German Credit dataset (namely the credit dataset) (Dua and Graff, 2017), the UCI Census Income dataset (namely the income dataset) (Dua and Graff, 2017), the MNIST dataset (LeCun et al., 1998), and the CIFAR-10 dataset (Krizhevsky et al., 2009), respectively. We learned baseline values by using either LShapley or Lmarginal as the loss function. In the computation of LShapley, we set v(xS) = log [ p(y^truth|xS) / (1 − p(y^truth|xS)) ]. In the computation of Lmarginal, |∆vi(S)| was set to |∆vi(S)| = ∥h(xS∪{i}) − h(xS)∥1, where h(xS) denotes the output feature of the penultimate layer given the masked input xS, in order to boost the efficiency of learning. We set λ = 0.2n for the MNIST and the CIFAR-10 datasets, and set λ = 0.5n for the simpler data in the two UCI datasets. Given baseline values, we used the sampling-based approximation (Castro et al., 2009) to estimate Shapley values. We used two ways to initialize baseline values before learning, i.e. setting baseline values to zero or to mean values over different samples, namely zero-init and mean-init, respectively. Fig. 3 (left) shows that baseline values learned with different initialization settings all converged to similar baseline values, except for very few dimensions having multiple local-minimum solutions (discussed in Appendix G.7), which proved the stability of our method.

Comparison of attributions computed using different baseline values. Fig. 3 shows the learned baseline values and the computed Shapley values on the income dataset. We found that attributions generated by zero/mean baseline values conflicted with the results of all other methods. Our method found that the occupation had more influence than the marital status on the income, which was somewhat consistent with our life experience.
However, baseline values in SHAP and SAGE sometimes generated abnormal explanations. In this top-right example, the attribute capital gain was zero, which was not supposed to support the prediction of “the person made over 50K a year.” However, the SAGE’s baseline values generated a large positive Shapley value for capital gain. In the bottom-right example, both SHAP and SAGE considered the marital status important for the prediction. SHAP did not consider the occupation as an important variable. Therefore, we considered these explanations not reliable. Attribution maps and baseline values generated on the CIFAR-10 and the MNIST datasets are provided in Appendix G.6. Compared to zero/mean/blurring baseline values, our baseline values were more likely to ignore noisy variables in the background, which were far from the foreground in images. Compared to SHAP, our method yielded more informative attributions. Besides, our method generated smoother attributions than SAGE. 6 CONCLUSIONS In this paper, we have defined the absence state of input variables in terms of causality. Then, we have found that most existing masking methods cannot faithfully remove existing causal patterns without triggering new patterns. In this way, we have formulated optimal baseline values for the computation of Shapley values as those that remove most causal patterns. Then, we have proposed an approximate-yet-efficient method to learn optimal baseline values that represent the absence states of input variables. Experimental results have demonstrated the effectiveness of our method. ETHIC STATEMENT This paper aims to examine the masking approach in previous explaining methods. We find that previous settings of the masking approach cannot faithfully represent the absence of input variables, thereby hurting the trustworthiness of the obtained explanations. Therefore, we propose a new method to learn optimal baseline values to represent the absence of input variables. In this way, the trustworthiness of explanations of the DNN is further boosted. There are no ethical issues with this paper. REPRODUCIBILITY STATEMENT We have provided proofs for all theoretical results in Appendix E and Appendix H. We have also provided experimental details in Section 5 and Appendix G. Furthermore, we will release the code when the paper is accepted. ACKNOWLEDGEMENT This work is partially supported by the National Nature Science Foundation of China (62276165), National Key R&D Program of China (2021ZD0111602), Shanghai Natural Science Foundation (21JC1403800,21ZR1434600), National Nature Science Foundation of China (U19B2043). This work is also partially supported by Huawei Technologies Inc. A RELATED WORKS No previous methods directly examined the faithfulness of the masking methods. Instead, we made a survey in a larger scope of attribution methods and other explainable AI studies, and put them in the appendix. Nevertheless, we will put this section back to the main paper if the paper is accepted. In the scope of explainable AI, many methods (Simonyan et al., 2014; Yosinski et al., 2015; Mordvintsev et al., 2015; Dosovitskiy and Brox, 2016; Zhou et al., 2015) have been proposed to explain the DNN. Among all methods, the estimation of attributions for each input variable represents a classical direction (Zhou et al., 2016; Selvaraju et al., 2017; Lundberg and Lee, 2017; Shrikumar et al., 2017). In this paper, we mainly focus on attributions based on Shapley values. Shapley values. 
The Shapley value (Shapley, 1953) in game theory was widely considered as a fair distribution of the overall reward in a game to each player (Weber, 1988). (Sen et al., 1981) and (Grömping, 2007) used the Shapley value to attribute the correlation coefficient of a linear regression to input features. (Štrumbelj et al., 2009; Štrumbelj and Kononenko, 2014) used the Shapley value to attribute the prediction of a model to input features. (Bork et al., 2004) used the Shapley value to measure importances of protein interactions in large, complex biological interaction networks. (Keinan et al., 2004) employed the Shapley value to measure causal effects in neurophysical models. (Sundararajan et al., 2017) proposed Integrated Gradients based on the AumannShapley(Aumann and Shapley, 2015) cost-sharing technique. Besides above local explanations, (Covert et al., 2020b) focused on the global interpretability. In order to compute the Shapley value in deep models efficiently, (Lundberg and Lee, 2017) proposed various approximations for Shapley valus in DNNs. (Lundberg et al., 2018) further computed the Shapley value on tree emsembles. (Aas et al., 2021) generalized the approximation method in (Lundberg and Lee, 2017) to the case when features were related to each other. (Ancona et al., 2019) further formulated a polynomial-time approximation of Shapley values for DNNs. Baseline values. In terms of baseline values of Shapley values, most studies (Covert et al., 2020a; Merrick and Taly, 2020; Sundararajan and Najmi, 2020; Kumar et al., 2020) compared influences of baseline values on explanations, without providing any principles for setting baseline values. Shrikumar et al. (2017) proposed DeepLIFT to estimate attributions of input variables, and also mentioned the choice of baseline values. Besides, Agarwal and Nguyen (2021) and Frye et al. (2021) used generative models to alleviate the out-of-distribution problem caused by baseline values. Unlike previous studies, we rethink and formulate baseline values from the perspective of gametheoretic causality. We define the absent state of input variables, and propose a method to learn optimal baseline values based on the number of causal patterns. B QUANTITATIVE EVALUATION OF ATTRIBUTIONS FOR IMAGE CLASSIFICATION In order to quantitatively evaluate Shapley values computed by different baseline values on the MNIST dataset, we constructed an And-Or decision tree following (Harradon et al., 2018), whose structure directly provided the ground-truth Shapley value for each input variable. Then, we used different attribution methods to explain the decision tree. Table 7 shows that our method generated more accurate Shapley values than other baseline values. We constructed a decision tree (Song et al., 2013) for each category in the MNIST dataset. Specifically, for each category (digit), we first computed the average image over all training samples in this category. Let x̄(c) ∈ Rn denote the average image of the c-th category. Then, we built a decision tree by considering each pixel as an internal node. The splitting rule for the decision tree was designed as follows. Given an input x in the category c, the splitting criterion at the pixel (node) xi was designed as ( (x̄ (c) i > 0.5)&(xi > 0.5) ) 2. If (x̄(c)i > 0.5)&(xi > 0.5) = True, then the pixel value xi was added to the output; otherwise, xi was ignored. 
In this way, the output of the decision tree was f(x) = ∑ i∈V xi, where V = {i ∈ N |(x̄ (c) i > 0.5)&(xi > 0.5) = True} denote the set of all pixels that satisfied the above equation. For inference, the probability of x belonging to the category c was p(c|x) = sigmoid(γ(f(x)− β)), where γ = 40 was a constant and β ∝ ∑ i∈N 1x̄(c)i >0.5 . In this case, we defined v(xN ) = log p(c|x) 1−p(c|x) . Thus, the co-appearing of pixels in V formed a causal pattern to contribute for v(xN ). In other words, because ∀i ∈ N, xi ≥ 0, the absence of any pixel in V might deactivate this pattern by leading to a small probability p(c|x) < 0.5 and a small v. This pattern can also be understood as an AND node in the And-Or decision tree (Song et al., 2013). In the above decision tree, the ground-truth Shapley values of input variables (pixels) were easy to determine. The above decision tree ensured that the absence of any variable in V would deactivate the causal pattern. Therefore, according to Theorem 2 in the paper, the output probability should be fairly assigned to pixels in V , i.e., they shared the same Shapley values ϕ̂i = v(xN ) |V | . For other pixels that were not contained in the output, their ground-truth Shapley values were zero. We estimated Shapley values of input variables in the above decision tree by using zero baseline values, mean baseline values, baseline values in SHAP, and the learned baseline values by our method, respectively. Let ϕi denote the estimated Shapley value of the variable i. If |ϕi−ϕ̂i| ≤ 0.01, we considered the estimated Shapley value ϕi correct; otherwise, incorrect. In this way, we computed the accuracy of the estimated Shapley values, and Table 7 shows that our method achieved the highest accuracy. C REMOVING ADVERSARIAL PERTURBATIONS FROM THE INPUT Let x denote the normal sample, and let xadv = x + δ denote the adversarial example generated by (Madry et al., 2018). According to (Ren et al., 2021), the adversarial example xadv mainly created out-of-distribution bivariate interactions with high-order contexts, which were actually related to the high-order interactions (causal patterns) in this paper. Thus, in the scenario of this study, the adversarial utility was owing to out-of-distribution high-order interactions (causal patterns). The removal of input variables was supposed to remove most high-order causal patterns. Therefore, the baseline value can be considered as the recovery of the original sample. In this way, we used the adversarial example xadv to initialize baseline values before learning, and used Lmarginal to learn baseline values. If the learned baseline values b satisfy ∥b−x∥1≤∥xadv−x∥1, we considered that our method successfully recovered the original sample to some extent. We conducted experiments using LeNet, AlexNet (Krizhevsky et al., 2012), and ResNet-20 on the MNIST dataset (∥δ∥∞ ≤ 32/255) and the CIFAR-10 dataset (∥δ∥∞≤8/255). Table 8 shows that our method recovered original samples from adversarial examples, which demonstrated the effectiveness of our method. D AXIOMS OF THE SHAPLEY VALUE The Shapley value (Shapley, 1953) was first introduced in game theory, which measures the contribution of each player in a game. Actually, given an input x with n input variables, i.e., x = [x1, . . . , xn], we can consider a deep model as a game with n players N = {1, 2, · · · , n}. Each player i is an input variable xi (e.g. an input dimension, a pixel, or a word). 
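A toy reconstruction of the And-Or decision tree of Appendix B and its ground-truth Shapley values is sketched below. The 4-pixel example, the threshold, and the proportionality constant used for β are our own assumptions for illustration; the paper only states that β ∝ Σi 1[x̄(c)i > 0.5].

```python
import numpy as np

def tree_output(x, mean_image, gamma=40.0, threshold=0.5):
    """Toy And-Or tree from Appendix B: sum the pixels that are bright in both the
    sample and the category's average image, then squash with a sigmoid."""
    V = np.flatnonzero((mean_image > threshold) & (x > threshold))
    f = x[V].sum()
    beta = 0.5 * (mean_image > threshold).sum()   # assumed proportionality constant
    p = 1.0 / (1.0 + np.exp(-gamma * (f - beta)))
    return p, V

def ground_truth_shapley(x, mean_image):
    """Assign v(x_N) = log p/(1-p) uniformly to the pixels in V; all other pixels get 0."""
    p, V = tree_output(x, mean_image)
    v = np.log(p / (1.0 - p))
    phi = np.zeros_like(x)
    if len(V) > 0:
        phi[V] = v / len(V)
    return phi

# Hypothetical 4-pixel "image" and category average image, for illustration only.
x = np.array([0.9, 0.8, 0.2, 0.7])
mean_image = np.array([0.9, 0.6, 0.7, 0.1])
print(ground_truth_shapley(x, mean_image))  # pixels 0 and 1 share the output; pixels 2 and 3 get 0
```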
In this way, the problem of fairly estimating attributions of input variables in the DNN is equivalent to the problem of fairly assigning the total reward in the game to each player. The Shapley value is widely considered a fair attribution method, because it satisfies the following four axioms (Weber, 1988). (1) Linearity axiom: If two games can be merged into a new game u(xS) = v(xS) + w(xS), then Shapley values in the two old games also can be merged, i.e. ∀i ∈ N , ϕi,u = ϕi,v + ϕi,w. (2) Dummy axiom and nullity axiom: The dummy player i is defined as a player without any interactions with other players, i.e. satisfying ∀S ⊆ N \ {i}, v(xS∪{i}) = v(xS) + v(x{i}). Then, the dummy player’s Shapley value is computed as ϕi = v(x{i}). The null player i is defined as a player that satisfies ∀S ⊆ N \ {i}, v(xS∪{i}) = v(xS). Then, the null player’s Shapley value is ϕi = 0. 2For Table 7, the splitting criterion was designed as (x̄(c)i > 0.5). (3) Symmetry axiom: If ∀S ⊆ N \ {i, j}, v(xS∪{i}) = v(xS∪{j}), then ϕi = ϕj . (4) Efficiency axiom: The overall reward of the game is equal to the sum of Shapley values of all players, i.e. v(xN )− v(x∅) = ∑ i∈N ϕi. E PROOFS OF THEOREMS This section provides proofs of theorems in the main paper. E.1 PROOF OF THEOREM 1 Theorem 1 (Faithfulness, proven by Ren et al. (2023a)) Let us consider a DNN v and an input sample x with n input variables. We can generate 2n different masked samples, i.e., {xS |S ⊆ N}. The DNN’s outputs on all masked samples can always be well mimicked as the sum of the triggered interaction effects in Eq. (3), i.e., ∀S⊆N, v(xS) = ∑ S′⊆S US′ . Proof: According to the definition of the Harsanyi dividend, we have ∀S ⊆ N ,∑ S′⊆S US′ = ∑ S′⊆S ∑ L⊆S′ (−1)|S ′|−|L|v(xL) = ∑ L⊆S ∑ S′⊆S:S′⊇L (−1)|S ′|−|L|v(xL) = ∑ L⊆S |S|∑ s′=|L| ∑ S′⊆S:S⊇L |S′|=s′ (−1)s ′−|L|v(xL) = ∑ L⊆S v(xL) |S|−|L|∑ m=0 ( |S| − |L| m ) (−1)m = v(xS) E.2 PROOF OF THEOREM 2 Theorem 2 Harsanyi dividends can be considered as causal patterns of the Shapley value. ϕi = ∑ S⊆N\{i} 1 |S|+ 1 US∪{i} (8) In this way, the effect of an causal pattern consisting of m variables can be fairly assigned to the m variables. This connection has been proved in (Harsanyi, 1982). • Proof: right = ∑ S⊆N\{i} 1 |S|+ 1US∪{i} = ∑ S⊆N\{i} 1 |S|+ 1 ∑ L⊆S (−1)|S|+1−|L|v(L) + ∑ L⊆S (−1)|S|−|L|v(L ∪ {i}) = ∑ S⊆N\{i} 1 |S|+ 1 ∑ L⊆S (−1)|S|−|L| [v(L ∪ {i})− v(L)] = ∑ L⊆N\{i} ∑ K⊆N\L\{i} (−1)|K| |K|+ |L|+ 1 [v(L ∪ {i})− v(L)] % Let K = S \ L = ∑ L⊆N\{i} n−1−|L|∑ k=0 (−1)k k + |L|+ 1 ( n− 1− |L| k ) [v(L ∪ {i})− v(L)] % Let k = |K| = ∑ L⊆N\{i} |L|!(n− 1− |L|)! n! [v(L ∪ {i})− v(L)] % by the property of combinitorial number = ϕi = left Table 9: Comparison between ground-truth baseline values and incorrect baseline values. The last column shows ratios of causal patterns of different orders rm = ∑ S⊆N,|S|=m |US |∑ S⊆N,S ̸=∅ |US | . We consider interactions of input samples that activate causal patterns. We find that when models/functions contain a single complex collaborations between multiple variables (i.e. high-order causal patterns), incorrect baseline values usually generate a mixture of many low-order causal patterns. In comparison, ground-truth baseline values lead to sparse and high-order causal patterns. 
Functions (∀i ∈ N, i ∈ {0, 1}) Baseline values b Ratios r f(x) = x1x2x3x4x5 x = [1, 1, 1, 1, 1] Learned baseline values by Learned baseline values by Zero baseline values ∑ | ( ) | ∑ ∑ | ( ) | 0 0.5 1 ∗ ( ) ( ) ( ) 0 0.5 1 c ∗ ( ) ( ) ( ) 0 0.5 1 c ∗ ( ) ( ) ( ) ∗ ( ) ( ) ( ) 1 5ground truth: b∗ = [0, 0, 0, 0, 0] incorrect: b(1) = [0.5, 0.5, 0.5, 0.5, 0.5] incorrect: b(2) = [0.1, 0.2, 0.6, 0.0, 0.1] incorrect: b(3) = [0.7, 0.1, 0.3, 0.5, 0.1] f(x) = sigmoid(5x1x2x3+ 5x4 − 7.5) x = [1, 1, 1, 1] Learned baseline values by Learned baseline values by Zero baseline values ∑ | ( ) | ∑ ∑ | ( ) | 0 0.5 1 ∗ ( ) ( ) ( ) 0 0.5 1 c ∗ ( ) ( ) ( ) 0 0.5 1 c ∗ ( ) ( ) ( ) ∗ ( ) ( ) ( ) 1 5 ground truth: b∗ = [0, 0, 0, 0] incorrect: b(1) = [0.5, 0.5, 0.5, 0.5] incorrect: b(2) = [0.6, 0.4, 0.7, 0.3] incorrect: b(3) = [0.3, 0.6, 0.5, 0.8] f(x) = x1(x2 + x3 − x4)3 x = [1, 1, 1, 0] Learned baseline values by Learned baseline values by Zero baseline values ∑ | ( ) | ∑ ∑ | ( ) | 0 0.5 1 ∗ ( ) ( ) ( ) 0 0.5 1 c ∗ ( ) ( ) ( ) 0 0.5 1 c ∗ ( ) ( ) ( ) ∗ ( ) ( ) ( ) 1 5 ground truth: b∗ = [0, 0, 0, 1] incorrect: b(1) = [0.5, 0.5, 0.5, 0.5] incorrect: b(2) = [0.2, 0.3, 0.6, 0.1] incorrect: b(3) = [1.0, 0.3, 1.0, 0.1] E.3 PROOF OF REMARK2, THEOREM 3, AND THEOREM 4 Remark 2 Let us consider a function with a single causal pattern f(xS) =wS ∏ j∈S(xj−δj). Accordingly, ground-truth baseline values of variables are obviously {δj}, because setting any variable ∀j ∈ S, xj = δj will deactivate this pattern. Given the correct baseline values b∗j = δj , we can use a single causal pattern to regress f(xS), i.e., US = f(xS), ∀ S′ ̸= S,US′ = 0. Theorem 3 For the function f(xS) = wS ∏ j∈S(xj − δj), if we use m ′ incorrect baseline values {b′j |b′j ̸=δj} to replace correct ones to compute causal effects, then the function will be explained to contain at most 2m ′ causal patterns. Theorem 4 If we use m′ incorrect baseline values to compute causal effects in the function f(xS)= wS ∏ j∈S(xj−δj), a total of ( m′ k−|S|+m′ ) causal patterns of the k-th order emerge, k ≥ |S|−m′. A causal pattern of the k-th order means that this causal pattern represents the AND relationship between k variables. • Theoretical proof: Without loss of generality, let us consider an input sample x, with ∀j ∈ S, xj ̸= δj . Based on the ground-truth baseline value {δj}, we have (1) v(xS) = f(xS) = wS ∏ j∈S(xj − δj) ̸= 0, (2) ∀S′ ⊊ S, v(xS′) = wS ∏ j∈S′(xj − δj) ∏ k∈S\S′(δk − δk) = 0, Accordingly, we have US = ∑ S′⊆S(−1) |S|−|S′|v(xS′) = v(xS) ̸= 0. For S′ ⊊ S, we have US′ =∑ L⊆S′(−1) |S′|−|L|v(xL) = ∑ L⊆S′ 0 = 0. (3) ∀S′ ̸= S, let S′ = L ∪M , where L ⊆ S and M ∩ S = ∅. Then, we have US′ = ∑ T⊆S′ (−1)|S ′|−|T |v(xT ) = ∑ L′⊆L L′ ̸=∅ (−1)|S ′|−|L′|v(xL′) + ∑ M′⊆M M′ ̸=∅ (−1)|S ′|−|M′| v(xM′)︸ ︷︷ ︸ =v(x∅)=0 + ∑ L′⊆L,M′⊆M L′ ̸=∅,M′ ̸=∅ (−1)|S|−|L ′|−|M′| v(xL′∪M′)︸ ︷︷ ︸ =v(L′) +(−1)|S ′| v(x∅)︸ ︷︷ ︸ =0 = ∑ L′⊆L L′ ̸=∅ (−1)|S ′|−|L′|v(xL′) + ∑ L′⊆L,M′⊆M L′ ̸=∅,M′ ̸=∅ (−1)|S|−|L ′|−|M′|v(xL′) =(−1)|S ′|−|S|v(xS) + ∑ M′⊆M M′ ̸=∅ (−1)|S ′|−|S|−|M′|v(xS) % v(xL′) ̸= 0 only if L′ = S = ∑ M′⊆M (−1)|S ′|−|S|−|M′|v(xS) = 0 Therefore, there is only one causal pattern with non-zero effect US . In comparison, if we use m′ incorrect baseline values {δ′j}, where ∑ j∈S 1δ′j ̸=δj = m′, then the function will be explained to contain at most 2m ′ causal patterns. For the simplicity of notations, let S = {1, 2, ...,m}, and δ′1 = δ1 + ϵ1, ..., δ′m′ = δm′ + ϵm′ , where ϵ1, ..., ϵm′ ̸= 0. Let T = {1, 2, . . . ,m′}. 
In this case, we have (1) v(xS) = f(xS) ̸= 0 (2) ∀S′ ⊊ S, |S′| < m−m′, v(xS′) = wS ∏ j∈S′(xj − δj) ∏ l∈S\S′(δ ′ l − δl). Because |S| − |S′| > m′, there is at least one variable with ground-truth baseline value in S \ S′. Therefore, v(xS′) = 0. Furthermore, US′ = ∑ L⊆S′(−1) |S′|−|L|v(xL) = 0 (3) ∀S′ ⊊ S, |S′| = k ≥ m − m′, v(xS′) = wS ∏ j∈S′(xj − δj) ∏ l∈S\S′(δ ′ l − δl). If S \ T ⊆ S′, then S \ S′ ⊆ T and v(xS′) ̸= 0. Otherwise, v(xS′) = 0. Then, US′ = ∑ L⊆S′ (−1)|S ′|−|L|v(xL) = ∑ L⊆S′,|L|<m−m′ (−1)|S ′|−|L|v(xL) + ∑ L⊆S′,L≥m−m′ (−1)|S ′|−|L|v(xL) = 0 + ∑ L⊆S′,L≥m−m′,L⊇S\T (−1)|S ′|−|L|v(xL) + ∑ L⊆S′,L≥m−m′,L⊉S\T (−1)|S ′|−|L|v(xL) = ∑ L⊆S′,L≥m−m′,L⊇S\T (−1)|S ′|−|L|v(xL) If the above US′ = 0, it indicates that S\T ⊈ S′. In this case, there is no subset L ⊆ S′ s.t. S\T ⊆ L. In other words, only if S \ T ⊆ S′, US′ ̸= 0. In this way, a total of ( m′ k−(|S|−m′) ) causal patterns of the k-th order emerge, where the order k of a causal pattern means that this causal pattern S′ contains k = |S′| variables. There are totally ∑m k=|S|−m′ ( m′ k−(|S|−m′) ) = 2m ′ causal patterns in x. For example, if the input x is given as follows, xi = { δi + 2ϵi, i ∈ T = {1, . . . ,m′} δi + ϵi, i ∈ S \ T = {m′ + 1, . . . ,m} where ϵi ̸= 0 are arbitrary non-zero scalars. In this case, we have ∀S′ ⊆ T,US′∪{m′+1,...,m} = ϵ1ϵ2...ϵm ̸= 0. Besides, if {m′ + 1, ...,m} ⊈ S′, we have US′ = 0. In this way, there are totally 2m ′ causal patterns in x. • Experimental verification: We further conducted experiments to show that the incorrect setting of baseline values makes a model/function consisting of high-order causal patterns be mistakenly explained as a mixture of low-order and high-order causal patterns. To show this phenomenon, we compare causal patterns computed using ground-truth baseline values and incorrect baseline values in Table 9, and the results verify our conclusion. We find that when models/functions contain complex collaborations between multiple variables (i.e. high-order causal patterns), incorrect baseline values usually generate fewer high-order causal patterns and more low-order causal patterns than ground-truth baseline values. In other words, the model/function is explained as massive low-order causal patterns. In comparison, ground-truth baseline values lead to sparse and high-order salient patterns. F PROVING THAT MASKING INPUT VARIABLES REMOVES CAUSAL EFFECTS In this section, we prove that for the causal pattern S ∋ i, if the input variable i is masked, then the causal effect wS = 0. Proof: let S = S′ ∪ {i}. If i ∈ S is masked, then ∀L s.t. i /∈ L,xL = xL∪{i}. Therefore, v(L ∪ {i}) = v(L). According to the definition of Harsanyi dividend (Harsanyi, 1982), we have US = ∑ L⊆S (−1)|S|−|L|v(L) = ∑ L⊆(S′∪{i}) (−1)|S ′|+1−|L|v(L) = ∑ L⊆S′ (−1)|S ′|+1−|L|v(L) + ∑ L⊆S′ (−1)|S ′|−|L|v(L ∪ {i}) = ∑ L⊆S′ (−1)|S ′|+1−|L|v(L) + ∑ L⊆S′ (−1)|S ′|−|L|v(L) = ∑ L⊆S′ ( (−1)|S ′|+1−|L| + (−1)|S ′|−|L| ) v(L) = ∑ L⊆S′ (−1 + 1)(−1)|S ′|−|L|v(L) = 0 Note that the causal pattern not containing i will not be deactivated by the masking of i. For example, {eyes, beak} is not deactivated by the absence of forehead, because this pattern represents the AND relationship between eyes and beak, and it does not contain forehead. G MORE EXPERIMENTAL DETAILS AND RESULTS G.1 VERIFICATION OF THE SPARSITY OF CAUSAL PATTERNS In this subsection, we conducted experiments to verify the sparsity of causal effects, which is introduced in Remark 1. To this end, we computed causal effects US of all 2n causal patterns encoded by a DNN. 
Specifically, we trained a three-layer MLP on the income dataset and computed causal effects in the model. Figure 4 shows the distribution of absolute causal effects |US| of causal patterns in the first five samples of each category of the income dataset. These results show that most causal patterns had insignificant causal effects, US ≈ 0. Only a few causal patterns had salient causal effects. Moreover, we also conducted experiments to demonstrate the universality of this phenomenon. We trained the five-layer MLP, CNN, LSTM, ResNet-32, and VGG-16 on the UCI census income dataset, the UCI TV news channel commercial detection dataset, the SST-2 dataset, and the MNIST dataset, respectively. Figure 5 shows the absolute causal effects US in descending order. These results show that various DNNs learned on different tasks could be explained by a set of sparse causal patterns.

G.2 VERIFICATION OF USING CAUSAL PATTERNS TO EXAMINE THE STATE OF INPUT VARIABLES

In this subsection, we conducted experiments to verify that causal patterns reflect the states of removing existing patterns. Given causal effects US in the normal input image and causal effects U^(noise)_S in the white noise input, we compared their distributions in Figure 6. Note that we assumed that the white noise input naturally contained less information for classification than the normal input image. We found that most causal effects in the white noise input were close to zero, and there were few salient causal patterns. Besides, we computed the average strength of causal effects in the above two inputs. In the normal input, the average strength of causal effects was E_{S⊆N}|US| = 5.5285, while in the white noise input, the average strength was much smaller, E_{S⊆N}|U^(noise)_S| = 0.2321. These results indicated that salient causal patterns could reflect the information encoded in the input.

G.3 EFFECTS OF THE PROPOSED METHOD ON MULTI-ORDER SHAPLEY VALUES AND MULTI-ORDER MARGINAL BENEFITS

In this section, we conducted experiments to verify that baseline values b learned by the proposed loss function in Eq. (7) could effectively reduce causal effects of low-order causal patterns in Eq. (5). To this end, we computed the metric Ex[(E_{|S|=m}|US|)/(v(xN) − v(x∅))] to measure the relative strength of causal patterns of a specific order m, in order to evaluate the effectiveness of baseline values. Fig. 7(a) shows that, compared to zero baseline values, our method effectively reduced low-order causal patterns. In addition, Fig. 8 and Fig. 7(b) verify that the loss LShapley in Eq. (7) reduced the number of salient causal patterns in Ω, which means LShapley avoided the exponential number of causal patterns caused by incorrect baseline values.

[Figure 8: Distribution of causal effects US of causal patterns in 20 samples in the credit dataset. The panels plot the count of causal effects (one panel in log space) against US, under zero baseline values and under the learned baseline values.]

G.4 DISCUSSION ABOUT THE SETTING OF GROUND-TRUTH BASELINE VALUES

This section discusses the ground truth of baseline values of synthetic functions in Section 5.1 of the main paper. In order to verify the correctness of the learned baseline values, we conducted experiments on synthetic functions with ground-truth baseline values. We randomly generated 100 functions whose causal patterns and ground truth of baseline values could be easily determined. As Table 10 shows, the generated functions were composed of addition, subtraction, multiplication, exponentiation, and sigmoid operations.
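As a purely illustrative sketch of what such randomly generated functions can look like, one could sample symbolic expressions from the named operation pool as below. This is not the generation script used for Table 10 or the released dataset; all choices here (term sizes, constants, number of terms) are assumptions.

```python
import random

OPS = ["add", "sub", "mul", "pow", "sigmoid"]   # operation pool named in the text (illustrative)

def random_term(var_pool, rng):
    """Sample one term over a random small subset of variables."""
    k = rng.randint(2, min(3, len(var_pool)))
    vars_ = rng.sample(var_pool, k)
    op = rng.choice(OPS)
    if op == "sigmoid":
        return f"sigmoid({' + '.join(vars_)} - {k - 0.5})"
    if op == "pow":
        return f"({' + '.join(vars_)})**3"
    joiner = {"add": " + ", "sub": " - ", "mul": "*"}[op]
    return joiner.join(vars_)

def random_function(n_vars=5, n_terms=3, seed=0):
    rng = random.Random(seed)
    var_pool = [f"x{i + 1}" for i in range(n_vars)]
    return " + ".join(random_term(var_pool, rng) for _ in range(n_terms))

print(random_function())   # prints a symbolic expression such as "x2*x4 + sigmoid(x1 + x3 - 1.5) + (x5 + x2)**3"
```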
The ground truth of baseline values in these functions was determined based on the causal patterns between input variables. In order to represent the absence states of variables, baseline values should activate as few salient patterns as possible, where the activation state of a causal pattern is treated as its most infrequent state. Thus, we first identified the activation states of the causal patterns of variables, and the ground truth of baseline values was set to values that inactivated the causal patterns under different masks. We took the following examples to discuss the setting of ground-truth baseline values (in the following examples, ∀i ∈ N, xi ∈ {0, 1} and b∗_i ∈ {0, 1}).
• f(x) = x1x2x3 + sigmoid(x4 + x5 − 0.5) + · · · . Let us focus on the term x1x2x3 in f(x). The activation state of this causal pattern is x1x2x3 = 1, reached when ∀i ∈ {1, 2, 3}, xi = 1. In order to inactivate the causal pattern, we set ∀i ∈ {1, 2, 3}, b∗_i = 0.
• f(x) = −x1x2x3 + (x4 + x5)^3 + · · · . Let us focus on the term −x1x2x3 in f(x). The activation state of this causal pattern is −x1x2x3 = −1, reached when ∀i ∈ {1, 2, 3}, xi = 1. In order to inactivate the causal pattern, we set ∀i ∈ {1, 2, 3}, b∗_i = 0.
• f(x) = (x1 + x2 − x3)^3 + · · · . Let us focus on the term (x1 + x2 − x3)^3 in f(x). The activation state of this causal pattern is (x1 + x2 − x3)^3 = 8, reached when x1 = x2 = 1 and x3 = 0. In order to inactivate the causal pattern under different masks, we set b∗_1 = b∗_2 = 0 and b∗_3 = 1.
• f(x) = sigmoid(3x1x2 − 3x3 − 1.5) + · · · . Let us focus on the term sigmoid(3x1x2 − 3x3 − 1.5) in f(x). In this case, x1, x2, x3 form a salient causal pattern, because sigmoid(3x1x2 − 3x3 − 1.5) > 0.5 only if x1 = x2 = 1 and x3 = 0. Thus, in order to inactivate this causal pattern under different masks, we set b∗_1 = b∗_2 = 0 and b∗_3 = 1, consistent with the ground truth stated for this term in Section 5.1.
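To make this selection rule concrete, here is a small self-contained sketch (an illustrative simplification of Eq. (4), not the procedure actually used to build Table 10) that brute-forces the simplest-explanation criterion for the product term of the first example above: among all binary baseline vectors, summed over all binary inputs, the vector producing the fewest salient causal patterns is exactly the stated ground truth b∗ = [0, 0, 0]. The exhaustive enumeration is only practical for such tiny functions.

```python
import itertools
import numpy as np

def num_salient_patterns(f, x, b, tau=1e-6):
    """|Omega(x)| in Eq. (4): the number of patterns S with |U_S| > tau, by brute force."""
    n = len(x)
    subsets = [frozenset(c) for r in range(n + 1) for c in itertools.combinations(range(n), r)]
    def v(S):
        z = np.array(b, float)
        z[list(S)] = np.asarray(x, float)[list(S)]
        return f(z)
    cache = {S: v(S) for S in subsets}
    return sum(abs(sum((-1) ** (len(S) - len(L)) * cache[L] for L in subsets if L <= S)) > tau
               for S in subsets)

# Focus on the product term of the first example: f(x) = x1 * x2 * x3.
f = lambda z: z[0] * z[1] * z[2]
samples = [np.array(x, float) for x in itertools.product([0, 1], repeat=3)]

# Search binary baseline vectors for the simplest explanation, summed over all binary inputs.
objective = {b: sum(num_salient_patterns(f, x, b) for x in samples)
             for b in itertools.product([0, 1], repeat=3)}
print(min(objective, key=objective.get))   # (0, 0, 0): matches the ground truth b* for this term
```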
1. What is the main contribution of the paper regarding the masking method and Shapley values?
2. What are the strengths of the proposed approach, particularly in its experimental results?
3. Do you have any concerns or confusion regarding the problem definition and methodology?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper
This paper examines whether the masking method faithfully removes the information encoded in the input variables. The authors then propose a method to remove the effect encoded in the input variables by learning optimal baseline values for Shapley values. Experimental results demonstrate the effectiveness of the method.

Strengths And Weaknesses
Strengths
The experiments are well-established on different datasets. The performance shows the effectiveness of the proposed method. The approximate-yet-efficient solution is a nice way to reduce the low-order causal patterns and the number of salient causal patterns.

Concerns
The problem is not defined clearly. Could you clarify, at the beginning, the relationship between the absence states of the input variables and the absence of the causal effect of the input variables? It seems that the masking method you seek is actually aiming to remove the causal effect of the masked variable. It is confusing not to have, right off the bat, a clear description of what masking is supposed to remove.
The presentation and some clarification issues. Some fundamental concepts need to be clarified. For example, could you please explain what is meant by "removing a causal pattern" and "introducing a causal pattern" when these terms are first mentioned? Also, in this paper, the masking method is considered to faithfully represent the absence states of input variables if it faithfully removes old causal patterns without introducing new causal patterns. First, to make it clearer, could you please present this definition more formally? Second, it is not that straightforward to me to define it from this perspective. For example, I am confused about why masking forehead should remove the information of the whole pattern {forehead, eyes, beak}. Even if the causal effect of forehead can be removed, is there any proof of this? Moreover, could you please explain why the masking method should be considered in this way? Is there a loss of information beyond the masked variable? If yes, is there any metric for such a loss?
What is the computational complexity of the proposed method to estimate the optimal baseline values?

Clarity, Quality, Novelty And Reproducibility
The paper is novel in trying to find the optimal baseline values that remove the causal effect encoded in the masked variables. There are some clarification problems stated above. The experiments are reproducible.
ICLR
Title Can We Faithfully Represent Absence States to Compute Shapley Values on a DNN? Abstract Masking some input variables of a deep neural network (DNN) and computing output changes on the masked input sample represent a typical way to compute attributions of input variables in the sample. People usually mask an input variable using its baseline value. However, there is no theory to examine whether baseline value faithfully represents the absence of an input variable, i.e., removing all signals from the input variable. Fortunately, recent studies (Ren et al., 2023a; Deng et al., 2022a) show that the inference score of a DNN can be strictly disentangled into a set of causal patterns (or concepts) encoded by the DNN. Therefore, we propose to use causal patterns to examine the faithfulness of baseline values. More crucially, it is proven that causal patterns can be explained as the elementary rationale of the Shapley value. Furthermore, we propose a method to learn optimal baseline values, and experimental results have demonstrated its effectiveness. 1 INTRODUCTION Many attribution methods (Zhou et al., 2016; Selvaraju et al., 2017; Lundberg and Lee, 2017; Shrikumar et al., 2017) have been proposed to estimate the attribution (importance) of input variables to the model output, which represents an important direction in explainable AI. In this direction, many studies (Lundberg and Lee, 2017; Ancona et al., 2019; Fong et al., 2019) masked some input variables of a deep neural network (DNN), and they used the change of network outputs on the masked samples to estimate attributions of input variables. As Fig. 1 shows, there are different types of baseline values to represent the absence of input variables. Theoretically, the trustworthiness of attributions highly depends on whether the current baseline value can really remove the signal of the input variable without bringing in new out-of-distribution (OOD) features. However, there is no criterion to evaluate the signal removal of masking methods. To this end, we need to first break the blind faith that seemingly reasonable baseline values can faithfully represent the absence of input variables, and the blind faith that seemingly OOD baseline values definitely cause abnormal features. In fact, because a DNN may have complex inference logic, seemingly OOD baseline values do not necessarily generate OOD features. Concept/causality-emerging phenomenon. The core challenge of theoretically guaranteeing or examining whether the baseline value removes all or partial signals of an input variable is to explicitly define the signal/concept/knowledge encoded by a DNN in a countable manner. To this end, Ren et al. (2023a) have discovered a counter-intuitive concept-emerging phenomenon in a trained DNN. Although the DNN does not have a physical unit to encode explicit causality or concepts, Ren et al. (2023a); Deng et al. (2022a) have surprisingly discovered that when the DNN is sufficiently trained, the sparse and symbolic concepts emerge. Thus, we use such concepts as a new perspective to define the optimal baseline value for the absence of input variables. As Fig. 1 shows, each concept represents an AND relationship between a specific set S of input variables. The co-appearance of these input variables makes a numerical contribution US to the network output. Thus, we can consider such a concept as a causal pattern1 of the network output, ∗Quanshi Zhang is the corresponding author. 
He is with the Department of Computer Science and Engineering, the John Hopcroft Center, at the Shanghai Jiao Tong University, China. [email protected]. 1Note that in this paper, the causal pattern means the extracted causal relationship between input variables and the output encoded by the DNN, rather than the true intrinsic causal relationship hidden in data. and US is termed the causal effect. For example, the concept of a rooster’s head consists of the forehead, eyes, beak, and crown, i.e., S= {forehead, eyes, beak, crown} = {f, e, b, c} for short. Only if input variables f , e, b, and c co-appear, the causal pattern S is triggered and makes an effect US on the confidence of the head classification. Otherwise, the absence of any input variables in the causal pattern S will remove the effect. Ren et al. (2023a) have extracted a set of sparse causal patterns (concepts) encoded by the DNN. More importantly, the following finding has proven that such causal patterns1 can be considered as elementary inference logic used by the DNN. Specifically, given an input sample with n variables, we can generate 2n different masked samples. We can use a relatively small number of causal patterns to accurately mimic network outputs on all 2n masked samples, which guarantees the faithfulness of causal patterns. Defining optimal baseline values based on causal patterns. From the above perspective of causal patterns, whether baseline values look reasonable and fit human’s intuition is no longer the key factor to determine the trustworthiness of baseline values. Instead, we evaluate the faithfulness of baseline values by using causal patterns. Because the baseline value is supposed to represent the absence of an input variable, we find that setting an optimal baseline value usually generates the most simplified explanation of the DNN, i.e., we may extract a minimum number of causal patterns to explain the DNN. Such an explanation is the most reliable according to Occam’s Razor. • We prove that using incorrect baseline values makes a single causal pattern be explained as an exponential number of redundant causal patterns. Let us consider the following toy example, where the DNN contains a causal pattern S={f, e, b, c} with a considerable causal effect E on the output. If an incorrect baseline value bf of the variable f (forehead) just blurs the image patch, rather than fully remove its appearance, then masking the variable f cannot remove all score E. The remaining score E−U{f,e,b,c} will be explained as redundant causal patterns U{e,b}, U{e,c}, U{e,b,c}, etc. • Furthermore, incorrect baseline values may also generate new patterns. For example, if baseline values of {f, e, b, c} are set as black regions, then masking all four regions may generate a new pattern of a black square, which is a new causal pattern that influences the network output. Therefore, we consider that the optimal baseline value, which faithfully reflects the true inference logic, usually simplifies the set of causal patterns. I.e., it usually reduces the overall strength of existing causal effects most without introducing new causal effects. However, we find that most existing masking methods are not satisfactory from this perspective (see Section 3.2 and Table 1), although the masking method based on conditional distribution of input variables (Covert et al., 2020b; Frye et al., 2021) performs a bit better. 
In particular, we notice that Shapley values can also be derived from causal patterns in theory, i.e., the causal patterns are proven to be elementary effects of Shapley values. Therefore, we propose a new method to learn optimal baseline values for Shapley values, which removes the causal effects of the masked input variables and avoids introducing new causal effects. Contributions of this paper can be summarized as follows. (1) We propose a metric to examine whether the masking approach in attribution methods could faithfully represent the absence state of input variables. Based on this metric, we find that most previous masking methods are not reliable. (2) We define and develop an approach to estimating optimal baseline values for Shapley values, which ensures the trustworthiness of the attribution. 2 EXPLAINABLE AI THEORIES BASED ON GAME-THEORETIC INTERACTIONS This paper is a typical achievement on the theoretical system of game-theoretic interactions. In fact, our research group has developed and used the game-theoretical interaction as a new perspective to solve two challenges in explainable AI, i.e., (1) how to define and represent implicit knowledge encoded by a DNN as explicit and countable concepts, (2) how to use concepts encoded by the DNN to explain its representation power or performance. More importantly, we find that the gametheoretic interaction is also a good perspective to analyze the common mechanism shared by previous empirical findings and explanations of DNNs. • Explaining the knowledge/concepts encoded by a DNN. Defining interactions between input variables of a DNN in game theory is a typical research direction (Grabisch and Roubens, 1999; Sundararajan et al., 2020). To this end, we further defined the multi-variate interaction (Zhang et al., 2021a;d) and multi-order interaction (Zhang et al., 2021b) to represent interactions of different complexities. Ren et al. (2023a) and Li and Zhang (2023) first discovered that we could consider game-theoretic interactions as the concepts encoded by a DNN, considering the following three terms. (1) We found that a trained DNN usually only encoded very sparse and salient interactions, and each interaction made a certain effect on the network output. (2) We proved that we could just use the effects of such a small number of salient interactions to well mimic/predict network outputs on an exponential number of arbitrarily masked input samples. (3) We found that salient interactions usually exhibited strong transferability across different samples, strong transferability across different DNNs, and strong discrimination power. Thus, the above three perspectives comprised the solid foundation of considering salient interactions as the concepts encoded by a DNN. Furthermore, Cheng et al. (2021b) found that such interactions usually represented the most reliable and prototypical concepts encoded by a DNN. Cheng et al. (2021a) further analyzed the different signal-processing behaviors of a DNN in encoding shapes and textures. • The game-theoretic interaction is also a new perspective to investigate the representation power of a DNN. Deng et al. (2022a) proved a counter-intuitive bottleneck/difficulty of a DNN in representing interactions of the intermediate complexity. Zhang et al. (2021b) explored the effects of the dropout operation on interactions to explain the generalization power of a DNN. Wang et al. (2021a;b); Ren et al. 
(2021) used interactions between input variables to explain the adversarial robustness and adversarial transferability of a DNN. Zhou et al. (2023) found that complex (high-order) interactions were more likely to be over-fitted, and they used the generalization power of different interaction concepts to explain the generalization power of the entire DNN. Ren et al. (2023b) proved that a Bayesian neural network (BNN) was less likely to encode complex (high-order) interactions, which avoided over-fitting.
• Game-theoretic interactions are also used to analyze the common mechanism shared by many empirical findings. Deng et al. (2022b) discovered that almost all (fourteen) attribution methods could be re-formulated as a reallocation of interactions in mathematics. This enabled the fair comparison between different attribution methods. Zhang et al. (2022) proved that twelve previous empirical methods of boosting adversarial transferability could be explained as reducing interactions between pixel-wise adversarial perturbations.

3 PROBLEMS WITH THE REPRESENTATION OF THE MASKED STATES

The Shapley value (Shapley, 1953) was first introduced in game theory to measure the contribution of each player in a game. People usually use Shapley values to estimate attributions of input variables of a DNN. Let the input sample x of the DNN contain n input variables, i.e., x = [x1, . . . , xn]. The Shapley value of the i-th input variable ϕi is defined as follows.
ϕi = ∑_{S⊆N\{i}} [|S|!(n − |S| − 1)!/n!] · [v(xS∪{i}) − v(xS)] (1)
where v(xS) ∈ R denotes the model output when variables in S are present, and variables in N \ S are masked. Specifically, v(x∅) represents the model output when all input variables are masked. The Shapley value of the variable i is computed as the weighted marginal contribution of i when the variable i is present w.r.t. the case when the variable i is masked, i.e., v(xS∪{i}) − v(xS).

[Figure 2: Causal patterns that explain the inference on a sample in the income dataset. The figure shows seven causal patterns CS1–CS7 over attributes such as relationship, marital status, age, education, and sex, with causal effects US of −0.72, +0.58, −1.18, −0.93, +0.58, +0.52, and −0.52, respectively; v(xS) is the score that the income is less than 50k.]

Table 1: The ratio R of the remaining and newly introduced causal effects in the masked inputs. A small value of R meant that baseline values removed most original causal effects and did not introduce many new effects.
Dataset | R(zero) | R(mean) | R(blur) | R(conditional) | R(ours)
MNIST | 1.1736 | 0.3043 | 0.4159 | 0.3780 | 0.2185
CIFAR-10 | 0.6630 | 0.8042 | 0.7288 | 0.4027 | 0.1211

The Shapley value is widely considered a fair attribution method, because it satisfies the linearity, dummy, symmetry, and efficiency axioms (Weber, 1988) (please refer to Appendix D). However, when we explain a DNN, a typical challenge is how to faithfully define the absence of an input variable. The most classical way is to use baseline values (or called reference values) b = [b1, b2, . . . , bn] to mask variables to represent their absence. Specifically, given an input sample x, xS denotes a masked sample, which is generated by masking variables in the set N \ S:
if i ∈ S, (xS)i = xi; otherwise, (xS)i = bi. (2)
We aim to learn optimal baseline values b to faithfully represent absent states of input variables.
Decomposing a DNN's output into sparse interactions. Given a trained DNN v and an input x with n input variables, Ren et al. (2023a) have proven that the DNN output v(x) can be decomposed into effects of interactions between input variables. Specifically, let S ⊆ N denote a subset of input variables.
The interaction effect between variables in S is defined as the following Harsanyi dividend (Harsanyi, 1982). US def = ∑ S′⊆S (−1)|S|−|S ′| · v(xS′) (3) Based on this definition, we have v(x) = ∑ S⊆N US . Sparse salient interactions can be considered as causal patterns1 (or concepts) encoded by the DNN. Theorem 1 and Remark 1 prove that most interactions have ignorable effects US ≈ 0, and people can use a few salient interactions with non-ignorable effects to well approximate the inference scores on 2n different masked samples. Thus, we can consider such interactions as causal patterns1 or concepts encoded by the DNN. Accordingly, we can consider the interaction effect US as the causal effect. Besides, Remark 1 has been verified on different DNNs learned for various tasks by experiments in both Appendix G.1 and (Ren et al., 2023a). Theorem 1 (Faithfulness, proven by Ren et al. (2023a) and Appendix E.1) Let us consider a DNN v and an input sample x with n input variables. We can generate 2n different masked samples, i.e., {xS |S ⊆ N}. The DNN’s outputs on all masked samples can always be well mimicked as the sum of the triggered interaction effects in Eq. (3), i.e., ∀S⊆N, v(xS) = ∑ S′⊆S US′ . Remark 1 (Sparsity) Interaction effects in most DNNs are usually very sparse. Most interaction effects are almost zero, i.e., US ≈ 0. A few most salient interaction effects in Ω (less than 100 interaction effects in most cases) are already enough to approximate the DNN, i.e., ∀S ⊆ N, v(xS)≈∑ S′∈Ω,S′⊆S US′ , where |Ω| ≪ 2 n. Each causal pattern (concept) S represents an AND relationship between input variables in S. For example, the head pattern of a rooster consists of {forehead, eyes, beak, crown}. If the forehead, eyes, beak, and crown of the rooster co-appear, then the head pattern S={forehead, eyes, beak, crown} is triggered and makes a causal effect US on the output. Otherwise, if any part is masked, the causal pattern S will not be triggered, and the DNN’s inference score v(x) will not receive the causal effect US . In sum, when we mask an input variable i, it is supposed to remove all causal effects of all AND relationships that contain the variable i. Please see Appendix F for the proof. 3.1 EXAMINING THE FAITHFULNESS OF BASELINE VALUES USING CAUSAL PATTERNS We use salient causal patterns (or concepts) to evaluate the faithfulness of masking methods. Specifically, we examine whether baseline values remove most causal effects depending on xi, and whether baseline values generate new causal effects. The evaluation of the masking methods based on salient causal effects is theoretically supported from the following three perspectives. First, Theorem 1 and Remark 1 prove that the inference score of a DNN can be faithfully disentangled into a relatively small number of causal patterns. Second, Theorem 2 shows that Shapley values can be explained as a re-allocation of causal effects to input variables. Therefore, reducing effects of salient patterns means the removal of elementary factors that determine Shapley values. Besides, in order to verify that the reduction of causal patterns can really represent the absence of input variables, we have conducted experiments to find that salient patterns triggered by white noise inputs were much less than those triggered by normal images. Please see Appendix G.2 for details. Theorem 2 (proven by Harsanyi (1982) and Appendix E.2) We can directly derive Shapley values from the effects US of causal patterns. 
The Shapley value can be considered as uniformly allocating each causal pattern S’s effect US to all its variables, i.e. ϕi = ∑ S⊆N\{i} 1 |S|+1US∪{i}. Third, an incorrect baseline value bi will make partial effects of the AND relationship of the variable i be mistakenly explained as an exponential number of additional redundant causal patterns, which significantly complicates the explanation. Therefore, the optimal baseline value is supposed to generate the most sparse causal patterns as the simplest explanation of the DNN. Compared to dense causal patterns generated by sub-optimal baseline values, the simplest explanation removes as many as existing causal effects as possible without introducing additional causal effects. Remark 2 (proof in Appendix E.3) Let us consider a function with a single causal pattern f(xS)= wS ∏ j∈S(xj−δj). Accordingly, ground-truth baseline values of variables are obviously {δj}, because setting any variable ∀j ∈ S, xj = δj will deactivate this pattern. Given the correct baseline values b∗j =δj , we can use a single causal pattern to regress f(xS), i.e., US = f(xS), ∀ S′ ̸= S,US′ = 0. Theorem 3 (proof in Appendix E.3) For the function f(xS)=wS ∏ j∈S(xj−δj), if we use m ′ incorrect baseline values {b′j |b′j ̸=δj} to replace correct ones to compute causal effects, then the function will be explained to contain at most 2m ′ causal patterns. Theorem 4 (proof in Appendix E.3) If we use m′ incorrect baseline values to compute causal effects in the function f(xS) = wS ∏ j∈S(xj −δj), a total of ( m′ k−|S|+m′ ) causal patterns of the k-th order emerge, k ≥ |S|−m′. A causal pattern of the k-th order means that this causal pattern represents the AND relationship between k variables. Specifically, Remark 2, Theorems 3 and 4 provide a new perspective to understand how incorrect baseline values generate new causal patterns. Remark 2 shows how correct baseline values explain a toy model that contains a single causal pattern. Theorems 3 and 4 show that incorrect baseline values will use an exponential number of redundant low-order patterns to explain a single high-order causal pattern. For example, we are given the function f(x) =w(xp−δp)(xq−δq) s.t. xp = 3, xq = 4, δp = 2, δq = 3. If we use ground-truth baseline values {δp, δq}, then the function is explained as simple as a single causal pattern Ω={{p, q}}, which yields correct Shapley values ϕp = ϕq = 0.5 ·w, according to Theorem 2. Otherwise, if we use incorrect baseline values {b′p = 1, b′q = 1}, then this function will be explained as four causal patterns Ω= {∅, {p}, {q}, {p, q}}, i.e., f(x) = U∅C∅ +U{p}C{p} +U{q}C{q} +U{p,q}C{p,q}, where U∅ = 2w, U{p} = −4w, U{q} = −3w, and U{p,q} = 6w are computed using incorrect baseline values. Incorrect baseline values increase complicated causal patterns and lead to incorrect Shapley values ϕp = −w, ϕq = 0. In fact, the existence of most newly introduced causal patterns is due to that the effects of a high-order causal pattern are not fully removed, and that OOD causal patterns (new OOD edges or shapes) may be caused by incorrect baseline values. 3.2 PROBLEMS WITH PREVIOUS MASKING METHODS In this subsection, we compare causal patterns in the masked sample with causal patterns in the original sample to evaluate the following baseline values. (1) Mean baseline values. As Fig. 1 shows, the baseline value of each input variable is set to the mean value of this variable over all samples (Dabkowski and Gal, 2017), i.e. bi = Ex[xi]. 
However, empirically, this method actually introduces additional signals to the input. For example, mean values introduce massive grey dots to images and may form new edges as abnormal causal patterns. This has been verified by experiments in Table 1. Experimental details will be introduced later. (2) Zero baseline values. Baseline values of all input variables are set to zero (Ancona et al., 2019; Sundararajan et al., 2017), i.e. ∀i ∈ N, bi = 0. As Fig. 1 shows, just like mean baseline values, zero baseline values also introduce additional signals (black dots) to the input (verified in Table 1). (3) Blurring input samples. Fong and Vedaldi (2017) and Fong et al. (2019) blur image pixels xi using a Gaussian kernel as its masked state. Covert et al. (2020a); Sturmfels et al. (2020) mentioned that this approach only removed high-frequency signals, but failed to remove low-frequency signals. (4) For each input variable, determining a different baseline value for each specific context S. Instead of fixing baseline values as constants, some studies use varying baseline values to compute v(xS) given x, which are determined temporarily by the context S in x. Some methods (Frye et al., 2021; Covert et al., 2020b) define v(xS) by modeling the conditional distribution of variable values in N \S given the context S, i.e. v(xS) = Ep(x′|xS)[model(xS ⊔ x ′ N\S)]. The operation ⊔ means the concatenation of x’s dimensions in S and x′’s dimensions in N \ S. By assuming the independence between input variables, the above conditional baseline values can be simplified to marginal baseline values (Lundberg and Lee, 2017), i.e. v(xS)=Ep(x′)[model(xS ⊔ x′N\S)]. We conducted experiments to examine whether the above baseline values remove all causal patterns in the original input and whether baseline values introduce new causal patterns. We used the metric R=Ex [ ( ∑ S⊆N |U ′ S |− ∑ S⊆N |U (noise) S |)/( ∑ S⊆N |US |) ] to evaluate the quality of masking. We generated a set of samples based on x, where a set of input variables were masked, and U ′S denote the causal effect in such masked samples. US denote the causal effect in the original sample x, which was used for normalization. U (noise)S denotes the causal effect in a white noise input, and it represents the unavoidable effect of huge amounts of noise patterns. Thus, we considered the U (noise)S term as an inevitable anchor value and removed it from R for a more convincing evaluation. The masking method would have two kinds of effects on causal patterns. (1) We hoped to remove all existing salient patterns in the original sample. (2) We did not expect the masking method to introduce new salient patterns. Interestingly, the removal of existing salient patterns decreased the R value, while the triggering of new patterns increased the R value. Thus, the R metric reflected both effects. A small value of R indicated a good setting of baseline values. We used 20 images in the MNIST dataset (LeCun et al., 1998) and 20 images in the CIFAR-10 dataset (Krizhevsky et al., 2009) to compute R, respectively. We split each MNIST image into 7× 7 grids and split each CIFAR-10 image into 8× 8 grids. For each image, we masked the central 4× 3 grids using the zero baseline, mean baseline, blur baseline, and the baseline based on the conditional distribution, and computed the metric of R(zero), R(mean), R(blur), and R(conditional), respectively. Table 1 shows that the ratio R by using previous baseline values were all large. 
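As a concrete illustration of how Eq. (1), Eq. (2), and Theorem 2 fit together, the sketch below computes exact Shapley values of a small hand-written function by enumeration, once directly from the definition and once by re-allocating Harsanyi dividends, and shows how the attributions move when the baseline vector changes. The function, input, and baselines are illustrative and are not taken from the experiments in Table 1.

```python
import itertools
import math
import numpy as np

def mask(x, b, S):
    """x_S from Eq. (2): keep x_i for i in S, use the baseline value b_i elsewhere."""
    z = np.array(b, float)
    z[list(S)] = np.asarray(x, float)[list(S)]
    return z

def shapley_values(f, x, b):
    """Exact Shapley values by enumerating Eq. (1); only feasible for small n."""
    n = len(x)
    phi = np.zeros(n)
    for i in range(n):
        rest = [j for j in range(n) if j != i]
        for r in range(n):
            for S in itertools.combinations(rest, r):
                w = math.factorial(len(S)) * math.factorial(n - len(S) - 1) / math.factorial(n)
                phi[i] += w * (f(mask(x, b, set(S) | {i})) - f(mask(x, b, S)))
    return phi

def shapley_from_harsanyi(f, x, b):
    """Theorem 2: phi_i = sum over S containing i of U_S / |S| (uniform re-allocation)."""
    n = len(x)
    subsets = [frozenset(c) for r in range(n + 1) for c in itertools.combinations(range(n), r)]
    v = {S: f(mask(x, b, S)) for S in subsets}
    U = {S: sum((-1) ** (len(S) - len(L)) * v[L] for L in subsets if L <= S) for S in subsets}
    return np.array([sum(U[S] / len(S) for S in subsets if i in S) for i in range(n)])

# Toy function with two causal patterns; the values below are illustrative.
f = lambda z: 3 * z[0] * z[1] + 5 * z[2]
x = [1.0, 1.0, 1.0]
for b in ([0, 0, 0], [0.5, 0.5, 0.5]):
    print(b, shapley_values(f, x, b), shapley_from_harsanyi(f, x, b))
# With b = [0, 0, 0] both routes give phi = [1.5, 1.5, 5.0]; with b = [0.5, 0.5, 0.5] the two
# routes still agree with each other, but the attributions change (e.g. phi_3 drops from 5.0 to 2.5).
```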
Although the masking method based on conditional distribution performed better than some other baseline values, our method exhibited the best performance. It indicates that previous masking methods did not remove most existing patterns and/or trigger new patterns. 3.3 ABSENCE STATES AND OPTIMAL BASELINE VALUES In the original scenario of game theory, the Shapley value was proposed without the need to define the absence of players. When people explain a DNN, we consider that the true absence state of variables should generate the most simplified causal explanation. Remark 2 and Theorem 3 show that correct baseline values usually generate the simplest causal explanation, i.e., using the least number of causal patterns to explain the DNN. In comparison, if an incorrect baseline value bi does not fully remove all effects of AND relationships of the variable i, then the remained effects will be mistakenly explained as a large number of other redundant patterns. The above proof well fits Occam’s Razor, i.e., the simplest causality with the minimum causal patterns is more likely to represent the essence of the DNN’s inference logic. This also lets us consider the baseline values that minimize the number of salient causal patterns (i.e., achieving the simplest causality) as the optimal baseline values. Therefore, the learning of the baseline value b∗i of the i-th variable can be formulated to sparsify causal patterns in the deep model. Particularly, such baseline values are supposed to remove existing causal effects without introducing many new effects. b∗ = argminb ∑ x |Ω(x)|, subject to Ω(x) = {S ⊆ N ||US(x|b)| > τ} (4) where US(x|b) denotes the causal effect computed on the sample x by setting baseline values to b. 4 ESTIMATING BASELINE VALUES Based on Theorem 3, we derive Eq. (4) to learn optimal baseline values, but the computational cost of enumerating all causal patterns is exponential. Thus, we explore an approximate solution to learning baseline values. According to Theorem 4, incorrect baseline values usually mistakenly explain high-order causal patterns as an unnecessarily large number of low-order causal patterns, where the order m of the causal effect US is defined as the cardinality of S, m = |S|. Thus, the objective of learning baseline values is roughly equivalent to penalizing effects of loworder causal patterns, in order to prevent learning incorrect baseline values that mistakenly represent the high-order pattern as an exponential number of low-order patterns. min b L(b), subject to L(b) = ∑ x ∑ S⊆N,|S|≤k |US(x|b)| (5) An approximate-yet-efficient solution. When each input sample contains a huge number of variables, e.g., an image sample, directly optimizing Eq. (5) is NP-hard. Fortunately, we find the multiorder Shapley value and the multi-order marginal benefit in the following equation have strong connections with multi-order causal patterns (proven in Appendix H), as follows. ϕ (m) i (x|b) def =ES⊆N\{i} |S|=m [ v(xS∪{i}, b)−v(xS , b) ] = ES⊆N\{i} |S|=m [∑ L⊆S UL∪{i}(x|b) ] ∆vi(S|x, b) def =v(xS∪{i}, b)−v(xS , b) = ∑ L⊆S UL∪{i}(x|b) (6) where ϕ(m)i (x|b) and ∆vi(S|x, b) denote the m-order Shapley value and the m-order marginal benefit computed using baseline values b, respectively, where the order m is given as m= |S|. According to the above equation, high-order casual patterns US are only contained by high-order Shapley values ϕ(m)i and high-order marginal benefits ∆vi. 
Therefore, in order to penalize the effects of low-order causal patterns, we penalize the strength of low-order Shapley values and low-order marginal benefits, respectively, as an engineering solution to boost computational efficiency. In experiments, these loss functions were optimized via SGD.
LShapley(b) = ∑_{m∼Unif(0,λ)} ∑_{x∈X} ∑_{i∈N} |ϕ^(m)_i(x|b)|, Lmarginal(b) = ∑_{m∼Unif(0,λ)} ∑_{x∈X} ∑_{i∈N} E_{S⊆N, |S|=m} |∆vi(S|x, b)| (7)
where λ ≥ m denotes the maximum order to be penalized. We have conducted experiments to verify that baseline values b learned by the loss functions in Eq. (7) could effectively sparsify causal effects of low-order causal patterns in Eq. (5). Please see Appendix G.3 for results. Most importantly, we still used the metric R in Section 3.2 to check whether the learned baseline values removed original causal patterns in the input while not introducing new patterns. The low value of R(ours) in Table 1 shows that baseline values learned by our method successfully removed existing salient causal patterns without introducing many new salient patterns.

5 EXPERIMENTS

5.1 VERIFICATION OF CORRECTNESS OF BASELINE VALUES AND SHAPLEY VALUES

Correctness of baseline values on synthetic functions. People usually cannot determine the ground truth of baseline values for real images, such as the MNIST dataset. Therefore, we conducted experiments on synthetic functions with ground-truth baseline values, in order to verify the correctness of the learned baseline values. We randomly generated 100 functions, whose causal patterns and ground truth of baseline values could be easily determined. This dataset has been released at https://github.com/zzp1012/faithful-baseline-value. The generated functions were composed of addition, subtraction, multiplication, exponentiation, and the sigmoid operations (see Table 3). For example, for the function y = sigmoid(3x1x2 − 3x3 − 1.5) − x4x5 + 0.25(x6 + x7)^2, xi ∈ {0, 1}, there were three causal patterns (i.e., {x1, x2, x3}, {x4, x5}, {x6, x7}), which were activated only if xi = 1 for i ∈ {1, 2, 4, 5, 6, 7} and x3 = 0. In this case, the ground truth of baseline values was b∗_i = 0 for i ∈ {1, 2, 4, 5, 6, 7} and b∗_3 = 1. Please see Appendix G.4 for more discussions about the setting of ground-truth baseline values. We used our method to learn baseline values on these functions and tested the accuracy. Note that |bi − b∗_i| ∈ [0, 1] and b∗_i ∈ {0, 1}. If |bi − b∗_i| < 0.5, we considered the learned baseline value correct. We set λ = 0.5n in both LShapley and Lmarginal. The results are reported in Table 4 and are discussed later.

Table 5: Accuracy of Shapley values on the extended Addition-Multiplication dataset using different settings of baseline values.
Baseline setting | Zero | Mean | Baseline values in SHAP | Kernel SHAP | Frye et al. (2021) | Ours
Accuracy | 82.88% | 72.63% | 81.25% | 33.88% | 66.00% | 100%

Table 6: An example of Shapley values computed on different baseline values. The function is model(x) = −2.62x1 − 5x3 − 1.98x6(x4 − 0.94) + 1.15(x5 − 0.91) − 4.23x7, and the input is x = [0, 1, 1, 1, 1, 1, 1].
Baseline values | The computed Shapley values {ϕi}
Truth/Ours | {0, 0, −5, −0.06, 0.10, −0.06, −4.23}
Zero baseline | {0, 0, −5, −0.99, 1.15, 0.87, −4.23}
Mean baseline | {1.31, 0, −2.50, −0.74, 0.58, 0.19, −2.11}
Setting in SHAP | {0.014, 0, −0.011, −0.003, 0.003, 0.001, −0.010}

Correctness of baseline values on functions in (Tsang et al., 2018). Besides, we also evaluated the correctness of the learned baseline values using functions in Tsang et al. (2018).
Among all the 92 input variables in these functions, the ground truth of 61 variables could be determined (see Appendix G.4). Thus, we used these annotated baseline values to test the accuracy. Table 4 reports the accuracy of the learned baseline values on the above functions. In most cases, the accuracy was above 90%, showing that our method could effectively learn correct baseline values. A few functions in (Tsang et al., 2018) did not have salient causal patterns, which caused errors in the learning. Besides, in experiments, we tested our method under three different initializations of baseline values (i.e., 0, 0.5, and 1). Table 4 shows that baseline values learned with different initialization settings all converged to similar and high accuracy. Correctness of the computed Shapley values. Incorrect baseline values lead to incorrect Shapley values. We verified the correctness of the computed Shapley values on the extended AdditionMultiplication dataset (Zhang et al., 2021c). We added the subtraction operation to avoid all baseline values being zero. Theorem 2 considers the Shapley value as a uniform assignment of effects of each causal pattern to its compositional variables. This enabled us to determine the ground-truth Shapley value of variables without baseline values based on causal patterns. For example, the function f(x) = 3x1x2 + 5x3x4 + x5 s.t. x = [1, 1, 1, 1, 1] contained three causal patterns, according to the principle of the most simplified causality. Accordingly, the ground-truth Shapley values were ϕ̂1= ϕ̂2=3/2, ϕ̂3= ϕ̂4=5/2, and ϕ̂5=1. See Appendix G.5 for more details. The estimated Shapley value ϕi was considered correct if |ϕi− ϕ̂i| ≤ 0.01; otherwise, incorrect. Then, we computed the accuracy of the estimated Shapley values as the ratio of input variables with correct Shapley values. Discussion on why the learned baseline values generated correct Shapley values. We computed Shapley values of variables in the extended Addition-Multiplication dataset using different baseline values, and compared their accuracy in Table 5. The result shows that our method exhibited the highest accuracy. Table 6 shows an example of incorrect Shapley values computed by using other baseline values. Our method generated correct Shapley values in this example. For the variable x6, due to its negative coefficient −1.98, its contribution should be negative. However, all other baseline values generated positive Shapley values for x6. The term −4.23x7 showed the significant effect of the variable x7 on the output, but its Shapley value computed using baseline values in SHAP was just −0.010, which was obviously incorrect. 
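Recapping the estimation procedure of Section 4 before turning to the realistic datasets, the following is a rough PyTorch sketch of learning baseline values with the L_Shapley penalty of Eq. (7): a low order m and a variable i are sampled, the m-order marginal contribution v(x_{S∪{i}}) − v(x_S) is estimated on randomly sampled contexts S, and its magnitude is minimized with respect to b by SGD. This is a sketch under assumptions, not the authors' implementation: the model is taken to be a differentiable tabular classifier, x a single flattened input of shape (1, n_vars), and all hyperparameters, the scoring function v, batching, and the mean-init option are simplified placeholders.

```python
import torch

def masked_batch(x, b, keep):
    """x_S for Eq. (2): `keep` is a {0,1} vector over the n variables."""
    return keep * x + (1.0 - keep) * b

def v(model, x, label):
    """Illustrative scoring function v(x_S) = log p(y|x_S) / (1 - p(y|x_S)) for the true class y."""
    p = torch.softmax(model(x), dim=-1)[..., label].clamp(1e-6, 1 - 1e-6)
    return torch.log(p / (1.0 - p))

def learn_baseline(model, x, label, n_vars, lam_frac=0.2, steps=500, n_ctx=16, lr=0.01):
    """Minimize a Monte-Carlo estimate of L_Shapley in Eq. (7) with respect to b (zero-init)."""
    for p in model.parameters():                          # only b is optimized
        p.requires_grad_(False)
    b = torch.zeros(n_vars, requires_grad=True)
    opt = torch.optim.SGD([b], lr=lr)
    max_order = max(1, int(lam_frac * n_vars))            # lambda = 0.2 n: penalize low orders only
    for _ in range(steps):
        m = int(torch.randint(0, max_order + 1, (1,)))    # order m ~ Unif(0, lambda)
        i = int(torch.randint(0, n_vars, (1,)))           # variable whose marginal effect is penalized
        loss = 0.0
        for _ in range(n_ctx):                            # contexts S with |S| = m and i not in S
            perm = torch.randperm(n_vars)
            S = torch.zeros(n_vars)
            S[perm[perm != i][:m]] = 1.0
            S_i = S.clone()
            S_i[i] = 1.0
            delta = v(model, masked_batch(x, b, S_i), label) - v(model, masked_batch(x, b, S), label)
            loss = loss + delta.abs().mean()
        opt.zero_grad()
        (loss / n_ctx).backward()
        opt.step()
    return b.detach()
```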
[Figure 3: The learned baseline values (left) and Shapley values computed with different baseline values (right) on the income dataset. The left panel shows baseline values learned with LShapley and Lmarginal under zero-init and mean-init; the right panels compare Shapley values over the income attributes (age, workclass, education, marital-status, occupation, relationship, race, sex, capital-gain, capital-loss, hours-per-week, native-country) under zero baseline values, mean baseline values, the baseline in SHAP, the baseline in SAGE, and ours. Results on the MNIST, the CIFAR-10, and the credit datasets are shown in Appendix G.6 and G.7.]

5.2 RESULTS AND EVALUATION ON REALISTIC DATASETS AND MODELS

Learning baseline values. We used our method to learn baseline values for MLPs, LeNet (LeCun et al., 1998), and ResNet-20 (He et al., 2016) trained on the UCI South German Credit dataset (namely the credit dataset) (Dua and Graff, 2017), the UCI Census Income dataset (namely the income dataset) (Dua and Graff, 2017), the MNIST dataset (LeCun et al., 1998), and the CIFAR-10 dataset (Krizhevsky et al., 2009), respectively. We learned baseline values by using either LShapley or Lmarginal as the loss function. In the computation of LShapley, we set v(xS) = log [p(y^truth|xS) / (1 − p(y^truth|xS))]. In the computation of Lmarginal, |∆vi(S)| was set to |∆vi(S)| = ∥h(xS∪{i}) − h(xS)∥1, where h(xS) denotes the output feature of the penultimate layer given the masked input xS, in order to boost the efficiency of learning. We set λ = 0.2n for the MNIST and the CIFAR-10 datasets, and set λ = 0.5n for the simpler data in the two UCI datasets. Given baseline values, we used the sampling-based approximation (Castro et al., 2009) to estimate Shapley values. We used two ways to initialize baseline values before learning, i.e., setting baseline values to zero or to mean values over different samples, namely zero-init and mean-init, respectively. Fig. 3 (left) shows that baseline values learned with different initialization settings all converged to similar baseline values, except for very few dimensions having multiple local-minimum solutions (discussed in Appendix G.7), which proved the stability of our method.

Comparison of attributions computed using different baseline values. Fig. 3 shows the learned baseline values and the computed Shapley values on the income dataset. We found that attributions generated by zero/mean baseline values conflicted with the results of all other methods. Our method found that the occupation had more influence than the marital status on the income, which was somewhat consistent with our life experience.
However, baseline values in SHAP and SAGE sometimes generated abnormal explanations. In this top-right example, the attribute capital gain was zero, which was not supposed to support the prediction of “the person made over 50K a year.” However, the SAGE’s baseline values generated a large positive Shapley value for capital gain. In the bottom-right example, both SHAP and SAGE considered the marital status important for the prediction. SHAP did not consider the occupation as an important variable. Therefore, we considered these explanations not reliable. Attribution maps and baseline values generated on the CIFAR-10 and the MNIST datasets are provided in Appendix G.6. Compared to zero/mean/blurring baseline values, our baseline values were more likely to ignore noisy variables in the background, which were far from the foreground in images. Compared to SHAP, our method yielded more informative attributions. Besides, our method generated smoother attributions than SAGE. 6 CONCLUSIONS In this paper, we have defined the absence state of input variables in terms of causality. Then, we have found that most existing masking methods cannot faithfully remove existing causal patterns without triggering new patterns. In this way, we have formulated optimal baseline values for the computation of Shapley values as those that remove most causal patterns. Then, we have proposed an approximate-yet-efficient method to learn optimal baseline values that represent the absence states of input variables. Experimental results have demonstrated the effectiveness of our method. ETHIC STATEMENT This paper aims to examine the masking approach in previous explaining methods. We find that previous settings of the masking approach cannot faithfully represent the absence of input variables, thereby hurting the trustworthiness of the obtained explanations. Therefore, we propose a new method to learn optimal baseline values to represent the absence of input variables. In this way, the trustworthiness of explanations of the DNN is further boosted. There are no ethical issues with this paper. REPRODUCIBILITY STATEMENT We have provided proofs for all theoretical results in Appendix E and Appendix H. We have also provided experimental details in Section 5 and Appendix G. Furthermore, we will release the code when the paper is accepted. ACKNOWLEDGEMENT This work is partially supported by the National Nature Science Foundation of China (62276165), National Key R&D Program of China (2021ZD0111602), Shanghai Natural Science Foundation (21JC1403800,21ZR1434600), National Nature Science Foundation of China (U19B2043). This work is also partially supported by Huawei Technologies Inc. A RELATED WORKS No previous methods directly examined the faithfulness of the masking methods. Instead, we made a survey in a larger scope of attribution methods and other explainable AI studies, and put them in the appendix. Nevertheless, we will put this section back to the main paper if the paper is accepted. In the scope of explainable AI, many methods (Simonyan et al., 2014; Yosinski et al., 2015; Mordvintsev et al., 2015; Dosovitskiy and Brox, 2016; Zhou et al., 2015) have been proposed to explain the DNN. Among all methods, the estimation of attributions for each input variable represents a classical direction (Zhou et al., 2016; Selvaraju et al., 2017; Lundberg and Lee, 2017; Shrikumar et al., 2017). In this paper, we mainly focus on attributions based on Shapley values. Shapley values. 
The Shapley value (Shapley, 1953) in game theory was widely considered as a fair distribution of the overall reward in a game to each player (Weber, 1988). (Sen et al., 1981) and (Grömping, 2007) used the Shapley value to attribute the correlation coefficient of a linear regression to input features. (Štrumbelj et al., 2009; Štrumbelj and Kononenko, 2014) used the Shapley value to attribute the prediction of a model to input features. (Bork et al., 2004) used the Shapley value to measure importances of protein interactions in large, complex biological interaction networks. (Keinan et al., 2004) employed the Shapley value to measure causal effects in neurophysical models. (Sundararajan et al., 2017) proposed Integrated Gradients based on the AumannShapley(Aumann and Shapley, 2015) cost-sharing technique. Besides above local explanations, (Covert et al., 2020b) focused on the global interpretability. In order to compute the Shapley value in deep models efficiently, (Lundberg and Lee, 2017) proposed various approximations for Shapley valus in DNNs. (Lundberg et al., 2018) further computed the Shapley value on tree emsembles. (Aas et al., 2021) generalized the approximation method in (Lundberg and Lee, 2017) to the case when features were related to each other. (Ancona et al., 2019) further formulated a polynomial-time approximation of Shapley values for DNNs. Baseline values. In terms of baseline values of Shapley values, most studies (Covert et al., 2020a; Merrick and Taly, 2020; Sundararajan and Najmi, 2020; Kumar et al., 2020) compared influences of baseline values on explanations, without providing any principles for setting baseline values. Shrikumar et al. (2017) proposed DeepLIFT to estimate attributions of input variables, and also mentioned the choice of baseline values. Besides, Agarwal and Nguyen (2021) and Frye et al. (2021) used generative models to alleviate the out-of-distribution problem caused by baseline values. Unlike previous studies, we rethink and formulate baseline values from the perspective of gametheoretic causality. We define the absent state of input variables, and propose a method to learn optimal baseline values based on the number of causal patterns. B QUANTITATIVE EVALUATION OF ATTRIBUTIONS FOR IMAGE CLASSIFICATION In order to quantitatively evaluate Shapley values computed by different baseline values on the MNIST dataset, we constructed an And-Or decision tree following (Harradon et al., 2018), whose structure directly provided the ground-truth Shapley value for each input variable. Then, we used different attribution methods to explain the decision tree. Table 7 shows that our method generated more accurate Shapley values than other baseline values. We constructed a decision tree (Song et al., 2013) for each category in the MNIST dataset. Specifically, for each category (digit), we first computed the average image over all training samples in this category. Let x̄(c) ∈ Rn denote the average image of the c-th category. Then, we built a decision tree by considering each pixel as an internal node. The splitting rule for the decision tree was designed as follows. Given an input x in the category c, the splitting criterion at the pixel (node) xi was designed as ( (x̄ (c) i > 0.5)&(xi > 0.5) ) 2. If (x̄(c)i > 0.5)&(xi > 0.5) = True, then the pixel value xi was added to the output; otherwise, xi was ignored. 
In this way, the output of the decision tree was f(x) = ∑ i∈V xi, where V = {i ∈ N |(x̄ (c) i > 0.5)&(xi > 0.5) = True} denote the set of all pixels that satisfied the above equation. For inference, the probability of x belonging to the category c was p(c|x) = sigmoid(γ(f(x)− β)), where γ = 40 was a constant and β ∝ ∑ i∈N 1x̄(c)i >0.5 . In this case, we defined v(xN ) = log p(c|x) 1−p(c|x) . Thus, the co-appearing of pixels in V formed a causal pattern to contribute for v(xN ). In other words, because ∀i ∈ N, xi ≥ 0, the absence of any pixel in V might deactivate this pattern by leading to a small probability p(c|x) < 0.5 and a small v. This pattern can also be understood as an AND node in the And-Or decision tree (Song et al., 2013). In the above decision tree, the ground-truth Shapley values of input variables (pixels) were easy to determine. The above decision tree ensured that the absence of any variable in V would deactivate the causal pattern. Therefore, according to Theorem 2 in the paper, the output probability should be fairly assigned to pixels in V , i.e., they shared the same Shapley values ϕ̂i = v(xN ) |V | . For other pixels that were not contained in the output, their ground-truth Shapley values were zero. We estimated Shapley values of input variables in the above decision tree by using zero baseline values, mean baseline values, baseline values in SHAP, and the learned baseline values by our method, respectively. Let ϕi denote the estimated Shapley value of the variable i. If |ϕi−ϕ̂i| ≤ 0.01, we considered the estimated Shapley value ϕi correct; otherwise, incorrect. In this way, we computed the accuracy of the estimated Shapley values, and Table 7 shows that our method achieved the highest accuracy. C REMOVING ADVERSARIAL PERTURBATIONS FROM THE INPUT Let x denote the normal sample, and let xadv = x + δ denote the adversarial example generated by (Madry et al., 2018). According to (Ren et al., 2021), the adversarial example xadv mainly created out-of-distribution bivariate interactions with high-order contexts, which were actually related to the high-order interactions (causal patterns) in this paper. Thus, in the scenario of this study, the adversarial utility was owing to out-of-distribution high-order interactions (causal patterns). The removal of input variables was supposed to remove most high-order causal patterns. Therefore, the baseline value can be considered as the recovery of the original sample. In this way, we used the adversarial example xadv to initialize baseline values before learning, and used Lmarginal to learn baseline values. If the learned baseline values b satisfy ∥b−x∥1≤∥xadv−x∥1, we considered that our method successfully recovered the original sample to some extent. We conducted experiments using LeNet, AlexNet (Krizhevsky et al., 2012), and ResNet-20 on the MNIST dataset (∥δ∥∞ ≤ 32/255) and the CIFAR-10 dataset (∥δ∥∞≤8/255). Table 8 shows that our method recovered original samples from adversarial examples, which demonstrated the effectiveness of our method. D AXIOMS OF THE SHAPLEY VALUE The Shapley value (Shapley, 1953) was first introduced in game theory, which measures the contribution of each player in a game. Actually, given an input x with n input variables, i.e., x = [x1, . . . , xn], we can consider a deep model as a game with n players N = {1, 2, · · · , n}. Each player i is an input variable xi (e.g. an input dimension, a pixel, or a word). 
In this way, the problem of fairly estimating attributions of input variables in the DNN is equivalent to the problem of fairly assigning the total reward in the game to each player. The Shapley value is widely considered a fair attribution method, because it satisfies the following four axioms (Weber, 1988).
(1) Linearity axiom: If two games can be merged into a new game u(xS) = v(xS) + w(xS), then Shapley values in the two old games can also be merged, i.e., ∀i ∈ N, ϕi,u = ϕi,v + ϕi,w.
(2) Dummy axiom and nullity axiom: The dummy player i is defined as a player without any interactions with other players, i.e., satisfying ∀S ⊆ N \ {i}, v(xS∪{i}) = v(xS) + v(x{i}). Then, the dummy player's Shapley value is computed as ϕi = v(x{i}). The null player i is defined as a player that satisfies ∀S ⊆ N \ {i}, v(xS∪{i}) = v(xS). Then, the null player's Shapley value is ϕi = 0.
(3) Symmetry axiom: If ∀S ⊆ N \ {i, j}, v(xS∪{i}) = v(xS∪{j}), then ϕi = ϕj.
(4) Efficiency axiom: The overall reward of the game is equal to the sum of Shapley values of all players, i.e., v(xN) − v(x∅) = ∑_{i∈N} ϕi.
²For Table 7, the splitting criterion was designed as (x̄^(c)_i > 0.5).

E PROOFS OF THEOREMS

This section provides proofs of theorems in the main paper.

E.1 PROOF OF THEOREM 1

Theorem 1 (Faithfulness, proven by Ren et al. (2023a)) Let us consider a DNN v and an input sample x with n input variables. We can generate 2^n different masked samples, i.e., {xS | S ⊆ N}. The DNN's outputs on all masked samples can always be well mimicked as the sum of the triggered interaction effects in Eq. (3), i.e., ∀S ⊆ N, v(xS) = ∑_{S′⊆S} US′.
Proof: According to the definition of the Harsanyi dividend, we have, ∀S ⊆ N,
∑_{S′⊆S} US′ = ∑_{S′⊆S} ∑_{L⊆S′} (−1)^{|S′|−|L|} v(xL)
= ∑_{L⊆S} ∑_{S′: L⊆S′⊆S} (−1)^{|S′|−|L|} v(xL)
= ∑_{L⊆S} ∑_{s′=|L|}^{|S|} ∑_{S′: L⊆S′⊆S, |S′|=s′} (−1)^{s′−|L|} v(xL)
= ∑_{L⊆S} v(xL) ∑_{m=0}^{|S|−|L|} C(|S|−|L|, m) (−1)^m = v(xS),
because the inner alternating sum vanishes unless L = S.

E.2 PROOF OF THEOREM 2

Theorem 2 Harsanyi dividends can be considered as causal patterns of the Shapley value:
ϕi = ∑_{S⊆N\{i}} [1/(|S|+1)] US∪{i}. (8)
In this way, the effect of a causal pattern consisting of m variables can be fairly assigned to the m variables. This connection has been proved in (Harsanyi, 1982).
• Proof:
right = ∑_{S⊆N\{i}} [1/(|S|+1)] US∪{i}
= ∑_{S⊆N\{i}} [1/(|S|+1)] [ ∑_{L⊆S} (−1)^{|S|+1−|L|} v(L) + ∑_{L⊆S} (−1)^{|S|−|L|} v(L ∪ {i}) ]
= ∑_{S⊆N\{i}} [1/(|S|+1)] ∑_{L⊆S} (−1)^{|S|−|L|} [v(L ∪ {i}) − v(L)]
= ∑_{L⊆N\{i}} ∑_{K⊆N\L\{i}} [(−1)^{|K|}/(|K|+|L|+1)] [v(L ∪ {i}) − v(L)]   % let K = S \ L
= ∑_{L⊆N\{i}} ∑_{k=0}^{n−1−|L|} [(−1)^k/(k+|L|+1)] C(n−1−|L|, k) [v(L ∪ {i}) − v(L)]   % let k = |K|
= ∑_{L⊆N\{i}} [|L|!(n−1−|L|)!/n!] [v(L ∪ {i}) − v(L)]   % by a property of binomial coefficients
= ϕi = left

Table 9: Comparison between ground-truth baseline values and incorrect baseline values. The last column shows ratios of causal patterns of different orders, r_m = (∑_{S⊆N, |S|=m} |US|) / (∑_{S⊆N, S≠∅} |US|). We consider interactions of input samples that activate causal patterns. We find that when a model/function contains a single complex collaboration between multiple variables (i.e., a high-order causal pattern), incorrect baseline values usually generate a mixture of many low-order causal patterns. In comparison, ground-truth baseline values lead to sparse and high-order causal patterns.
Table 9 rows (functions with ∀i ∈ N, xi ∈ {0, 1}; for each row, the table also plots the ratios rm of causal patterns of each order m computed with the baseline values learned by LShapley, the baseline values learned by Lmarginal, zero baseline values, and the ground-truth/incorrect baseline values listed below; the bar plots themselves are omitted here):
f(x) = x1x2x3x4x5, x = [1, 1, 1, 1, 1]; ground truth: b∗ = [0, 0, 0, 0, 0]; incorrect: b(1) = [0.5, 0.5, 0.5, 0.5, 0.5]; incorrect: b(2) = [0.1, 0.2, 0.6, 0.0, 0.1]; incorrect: b(3) = [0.7, 0.1, 0.3, 0.5, 0.1]
f(x) = sigmoid(5x1x2x3 + 5x4 − 7.5), x = [1, 1, 1, 1]; ground truth: b∗ = [0, 0, 0, 0]; incorrect: b(1) = [0.5, 0.5, 0.5, 0.5]; incorrect: b(2) = [0.6, 0.4, 0.7, 0.3]; incorrect: b(3) = [0.3, 0.6, 0.5, 0.8]
f(x) = x1(x2 + x3 − x4)^3, x = [1, 1, 1, 0]; ground truth: b∗ = [0, 0, 0, 1]; incorrect: b(1) = [0.5, 0.5, 0.5, 0.5]; incorrect: b(2) = [0.2, 0.3, 0.6, 0.1]; incorrect: b(3) = [1.0, 0.3, 1.0, 0.1]
E.3 PROOF OF REMARK 2, THEOREM 3, AND THEOREM 4
Remark 2 Let us consider a function with a single causal pattern f(xS) = wS ∏j∈S (xj − δj). Accordingly, ground-truth baseline values of variables are obviously {δj}, because setting any variable ∀j ∈ S, xj = δj will deactivate this pattern. Given the correct baseline values b∗j = δj, we can use a single causal pattern to regress f(xS), i.e., US = f(xS), ∀S′ ̸= S, US′ = 0.
Theorem 3 For the function f(xS) = wS ∏j∈S (xj − δj), if we use m′ incorrect baseline values {b′j | b′j ̸= δj} to replace correct ones to compute causal effects, then the function will be explained to contain at most 2^m′ causal patterns.
Theorem 4 If we use m′ incorrect baseline values to compute causal effects in the function f(xS) = wS ∏j∈S (xj − δj), a total of C(m′, k − |S| + m′) causal patterns of the k-th order emerge, k ≥ |S| − m′. A causal pattern of the k-th order means that this causal pattern represents the AND relationship between k variables.
• Theoretical proof: Without loss of generality, let us consider an input sample x with ∀j ∈ S, xj ̸= δj. Based on the ground-truth baseline values {δj}, we have
(1) v(xS) = f(xS) = wS ∏j∈S (xj − δj) ̸= 0;
(2) ∀S′ ⊊ S, v(xS′) = wS ∏j∈S′ (xj − δj) ∏k∈S\S′ (δk − δk) = 0. Accordingly, we have US = ∑S′⊆S (−1)^(|S|−|S′|) v(xS′) = v(xS) ̸= 0. For S′ ⊊ S, we have US′ = ∑L⊆S′ (−1)^(|S′|−|L|) v(xL) = ∑L⊆S′ 0 = 0.
(3) ∀S′ ̸= S, let S′ = L ∪ M, where L ⊆ S and M ∩ S = ∅. Then, we have
US′ = ∑T⊆S′ (−1)^(|S′|−|T|) v(xT)
= ∑L′⊆L, L′̸=∅ (−1)^(|S′|−|L′|) v(xL′) + ∑M′⊆M, M′̸=∅ (−1)^(|S′|−|M′|) v(xM′) [where v(xM′) = v(x∅) = 0] + ∑L′⊆L, M′⊆M, L′̸=∅, M′̸=∅ (−1)^(|S′|−|L′|−|M′|) v(xL′∪M′) [where v(xL′∪M′) = v(xL′)] + (−1)^(|S′|) v(x∅) [= 0]
= ∑L′⊆L, L′̸=∅ (−1)^(|S′|−|L′|) v(xL′) + ∑L′⊆L, M′⊆M, L′̸=∅, M′̸=∅ (−1)^(|S′|−|L′|−|M′|) v(xL′)
= (−1)^(|S′|−|S|) v(xS) + ∑M′⊆M, M′̸=∅ (−1)^(|S′|−|S|−|M′|) v(xS) % v(xL′) ̸= 0 only if L′ = S
= ∑M′⊆M (−1)^(|S′|−|S|−|M′|) v(xS) = 0
Therefore, there is only one causal pattern with a non-zero effect US. In comparison, if we use m′ incorrect baseline values {δ′j}, where ∑j∈S 1(δ′j ̸= δj) = m′, then the function will be explained to contain at most 2^m′ causal patterns. For the simplicity of notations, let S = {1, 2, ..., m}, and δ′1 = δ1 + ϵ1, ..., δ′m′ = δm′ + ϵm′, where ϵ1, ..., ϵm′ ̸= 0. Let T = {1, 2, . . . , m′}.
In this case, we have (1) v(xS) = f(xS) ̸= 0 (2) ∀S′ ⊊ S, |S′| < m−m′, v(xS′) = wS ∏ j∈S′(xj − δj) ∏ l∈S\S′(δ ′ l − δl). Because |S| − |S′| > m′, there is at least one variable with ground-truth baseline value in S \ S′. Therefore, v(xS′) = 0. Furthermore, US′ = ∑ L⊆S′(−1) |S′|−|L|v(xL) = 0 (3) ∀S′ ⊊ S, |S′| = k ≥ m − m′, v(xS′) = wS ∏ j∈S′(xj − δj) ∏ l∈S\S′(δ ′ l − δl). If S \ T ⊆ S′, then S \ S′ ⊆ T and v(xS′) ̸= 0. Otherwise, v(xS′) = 0. Then, US′ = ∑ L⊆S′ (−1)|S ′|−|L|v(xL) = ∑ L⊆S′,|L|<m−m′ (−1)|S ′|−|L|v(xL) + ∑ L⊆S′,L≥m−m′ (−1)|S ′|−|L|v(xL) = 0 + ∑ L⊆S′,L≥m−m′,L⊇S\T (−1)|S ′|−|L|v(xL) + ∑ L⊆S′,L≥m−m′,L⊉S\T (−1)|S ′|−|L|v(xL) = ∑ L⊆S′,L≥m−m′,L⊇S\T (−1)|S ′|−|L|v(xL) If the above US′ = 0, it indicates that S\T ⊈ S′. In this case, there is no subset L ⊆ S′ s.t. S\T ⊆ L. In other words, only if S \ T ⊆ S′, US′ ̸= 0. In this way, a total of ( m′ k−(|S|−m′) ) causal patterns of the k-th order emerge, where the order k of a causal pattern means that this causal pattern S′ contains k = |S′| variables. There are totally ∑m k=|S|−m′ ( m′ k−(|S|−m′) ) = 2m ′ causal patterns in x. For example, if the input x is given as follows, xi = { δi + 2ϵi, i ∈ T = {1, . . . ,m′} δi + ϵi, i ∈ S \ T = {m′ + 1, . . . ,m} where ϵi ̸= 0 are arbitrary non-zero scalars. In this case, we have ∀S′ ⊆ T,US′∪{m′+1,...,m} = ϵ1ϵ2...ϵm ̸= 0. Besides, if {m′ + 1, ...,m} ⊈ S′, we have US′ = 0. In this way, there are totally 2m ′ causal patterns in x. • Experimental verification: We further conducted experiments to show that the incorrect setting of baseline values makes a model/function consisting of high-order causal patterns be mistakenly explained as a mixture of low-order and high-order causal patterns. To show this phenomenon, we compare causal patterns computed using ground-truth baseline values and incorrect baseline values in Table 9, and the results verify our conclusion. We find that when models/functions contain complex collaborations between multiple variables (i.e. high-order causal patterns), incorrect baseline values usually generate fewer high-order causal patterns and more low-order causal patterns than ground-truth baseline values. In other words, the model/function is explained as massive low-order causal patterns. In comparison, ground-truth baseline values lead to sparse and high-order salient patterns. F PROVING THAT MASKING INPUT VARIABLES REMOVES CAUSAL EFFECTS In this section, we prove that for the causal pattern S ∋ i, if the input variable i is masked, then the causal effect wS = 0. Proof: let S = S′ ∪ {i}. If i ∈ S is masked, then ∀L s.t. i /∈ L,xL = xL∪{i}. Therefore, v(L ∪ {i}) = v(L). According to the definition of Harsanyi dividend (Harsanyi, 1982), we have US = ∑ L⊆S (−1)|S|−|L|v(L) = ∑ L⊆(S′∪{i}) (−1)|S ′|+1−|L|v(L) = ∑ L⊆S′ (−1)|S ′|+1−|L|v(L) + ∑ L⊆S′ (−1)|S ′|−|L|v(L ∪ {i}) = ∑ L⊆S′ (−1)|S ′|+1−|L|v(L) + ∑ L⊆S′ (−1)|S ′|−|L|v(L) = ∑ L⊆S′ ( (−1)|S ′|+1−|L| + (−1)|S ′|−|L| ) v(L) = ∑ L⊆S′ (−1 + 1)(−1)|S ′|−|L|v(L) = 0 Note that the causal pattern not containing i will not be deactivated by the masking of i. For example, {eyes, beak} is not deactivated by the absence of forehead, because this pattern represents the AND relationship between eyes and beak, and it does not contain forehead. G MORE EXPERIMENTAL DETAILS AND RESULTS G.1 VERIFICATION OF THE SPARSITY OF CAUSAL PATTERNS In this subsection, we conducted experiments to verify the sparsity of causal effects, which is introduced in Remark 1. To this end, we computed causal effects US of all 2n causal patterns encoded by a DNN. 
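For readers who want to reproduce this kind of check on a small model, the following is a minimal sketch (our own illustration, not the released code) of enumerating the causal effects US of all 2^n causal patterns by brute force via Eq. (3); the toy function f, the input, and the baseline values at the bottom are hypothetical placeholders, and the approach is only feasible for small n.

```python
import itertools
import numpy as np

def harsanyi_dividends(v, x, b):
    """Brute-force causal effects U_S (Eq. 3) of all 2^n causal patterns S.
    v maps a masked sample to a scalar output, x is the input, and b holds the
    baseline values. The model is evaluated on all 2^n masked samples."""
    n = len(x)
    x = np.asarray(x, dtype=float)
    b = np.asarray(b, dtype=float)

    def v_masked(S):
        xs = b.copy()
        idx = list(S)
        xs[idx] = x[idx]          # variables in S keep their values, others are masked by b
        return v(xs)

    # Cache v(x_S) for every subset S.
    v_cache = {frozenset(S): v_masked(S)
               for r in range(n + 1)
               for S in itertools.combinations(range(n), r)}

    # U_S = sum_{S' subset of S} (-1)^(|S| - |S'|) v(x_{S'})
    U = {}
    for S in v_cache:
        U[S] = sum((-1) ** (len(S) - r) * v_cache[frozenset(Sp)]
                   for r in range(len(S) + 1)
                   for Sp in itertools.combinations(sorted(S), r))
    return U

# Hypothetical toy model with 5 input variables (not the MLP used in the paper):
f = lambda z: z[0] * z[1] * z[2] + 0.5 * z[3] - z[4]
U = harsanyi_dividends(f, x=np.ones(5), b=np.zeros(5))
salient = {S: u for S, u in U.items() if abs(u) > 1e-6}
print(f"{len(salient)} of {len(U)} causal patterns are salient")  # sparsity check
```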
Specifically, we trained a three-layer MLP on the income dataset and computed causal effects in the model. Figure 4 shows the distribution of absolute causal effects |US| of causal patterns in the first five samples of each category of the income dataset. These results show that most causal patterns had insignificant causal effects, US ≈ 0. Only a few causal patterns had salient causal effects. Moreover, we also conducted experiments to demonstrate the universality of this phenomenon. We trained the five-layer MLP, CNN, LSTM, ResNet-32, and VGG-16 on the UCI census income dataset, the UCI TV news channel commercial detection dataset, the SST-2 dataset, and the MNIST dataset, respectively. Figure 5 shows the absolute causal effects |US| in descending order. These results show that various DNNs learned on different tasks could be explained by a set of sparse causal patterns.
G.2 VERIFICATION OF USING CAUSAL PATTERNS TO EXAMINE THE STATE OF INPUT VARIABLES
In this subsection, we conducted experiments to verify that causal patterns reflect the states of removing existing patterns. Given causal effects US in the normal input image and causal effects U(noise)S in the white noise input, we compared their distributions in Figure 6. Note that we assumed that the white noise input naturally contained less information for classification than the normal input image. We found that most causal effects in the white noise input were close to zero, and there were few salient causal patterns. Besides, we computed the average strength of causal effects in the above two inputs. In the normal input, the average strength of causal effects was ES⊆N |US| = 5.5285, while in the white noise input, the average strength was much smaller, ES⊆N |U(noise)S| = 0.2321. These results indicated that salient causal patterns could reflect the information encoded in the input.
G.3 EFFECTS OF THE PROPOSED METHOD ON MULTI-ORDER SHAPLEY VALUES AND MULTI-ORDER MARGINAL BENEFITS
In this section, we conducted experiments to verify that baseline values b learned by the proposed loss function in Eq. (7) could effectively reduce causal effects of low-order causal patterns in Eq. (5). To this end, we computed the metric Ex[(E|S|=m |US|)/(v(xN) − v(x∅))] to measure the relative strength of causal patterns of a specific order m, in order to evaluate the effectiveness of baseline values. Fig. 7(a) shows that, compared to zero baseline values, our method effectively reduced low-order causal patterns. In addition, Fig. 8 and Fig. 7(b) verify that the loss LShapley in Eq. (7) reduced the number of salient causal patterns in Ω, which means LShapley avoided the exponential number of causal patterns caused by incorrect baseline values.
Figure 8: Distribution of causal effects US of causal patterns in 20 samples in the credit dataset (panels: zero baseline values vs. learned baseline values; counts shown in log space).
G.4 DISCUSSION ABOUT THE SETTING OF GROUND-TRUTH BASELINE VALUES
This section discusses the ground truth of baseline values of the synthetic functions in Section 5.1 of the main paper. In order to verify the correctness of the learned baseline values, we conducted experiments on synthetic functions with ground-truth baseline values. We randomly generated 100 functions whose causal patterns and ground truth of baseline values could be easily determined. As Table 10 shows, the generated functions were composed of addition, subtraction, multiplication, exponentiation, and sigmoid operations; a toy function of this kind is sketched below for illustration.
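The sketch below instantiates one such function, following the example given in Section 5.1 (y = sigmoid(3x1x2 − 3x3 − 1.5) − x4x5, with the 0.25(x6 + x7)^2 term dropped for brevity); the masking helper and the specific test input are our own additions. It shows that the ground-truth baseline values deactivate a causal pattern when its variables are masked, whereas an incorrect baseline (e.g., 0.5) leaves a residual effect.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy function in the style of Section 5.1; causal patterns: {x1, x2, x3} and {x4, x5}.
def f(x):
    return sigmoid(3 * x[0] * x[1] - 3 * x[2] - 1.5) - x[3] * x[4]

# Ground-truth baseline values: the values that deactivate each pattern.
b_true = np.array([0.0, 0.0, 1.0, 0.0, 0.0])
x = np.array([1.0, 1.0, 0.0, 1.0, 1.0])          # input that activates both patterns

def mask(x, b, keep):
    """Replace every variable not in `keep` by its baseline value."""
    out = b.copy()
    out[keep] = x[keep]
    return out

print(f(x))                                       # both patterns contribute
print(f(mask(x, b_true, [0, 1, 2])))              # {x4, x5} is fully deactivated
print(f(mask(x, np.full(5, 0.5), [0, 1, 2])))     # 0.5 baselines leave a residual -0.25
```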
The ground truth of baseline values in these functions was determined based on causal patterns between input variables. In order to represent the absence states of variables, baseline values should activate as few salient patterns as possible, where activation states of causal patterns were considered as the most infrequent state. Thus, we first identified the activation states of causal patterns of variables, and the ground-truth of baseline values was set as values that inactivated causal patterns under different masks. We took the following examples to discuss the setting of ground-truth baseline values (in the following examples, ∀i ∈ N, xi ∈ {0, 1} and b∗i ∈ {0, 1}). • f(x) = x1x2x3 + sigmoid(x4 + x5 − 0.5) · · · . Let us just focus on the term of x1x2x3 in f(x). The activation state of this causal pattern is x1x2x3 = 1 when ∀i ∈ {1, 2, 3}, xi = 1. In order to inactivate the causal pattern, we set ∀i ∈ {1, 2, 3}, b∗i = 0. • f(x) = −x1x2x3 + (x4 + x5)3 + · · · . Let us just focus on the term of −x1x2x3 in f(x). The activation state of this causal pattern is −x1x2x3 = −1 when ∀i ∈ {1, 2, 3}, xi = 1. In order to inactivate the causal pattern, we set ∀i ∈ {1, 2, 3}, b∗i = 0. • f(x) = (x1 + x2 − x3)3 + · · · . Let us just focus on the term of (x1 + x2 − x3)3 in f(x). The activation state of this causal pattern is (x1 + x2 − x3)3 = 8 when x1 = x2 = 1, x3 = 0. In order to inactivate the causal pattern under different masks, we set b∗1 = b ∗ 2 = 0, b ∗ 3 = 1. • f(x) = sigmoid(3x1x2 − 3x3 − 1.5) + · · · . Let us just focus on the term of sigmoid(3x1x2 − 3x3 − 1.5) in f(x). In this case, x1, x2, x3 form a salient causal pattern because sigmoid(3x1x2 − 3x3 − 1.5) > 0.5 only if x1 = x2 = 1 and x3 = 0. Thus, in order to
1. What is the focus and contribution of the paper on feature attribution? 2. What are the strengths of the proposed approach, particularly in its ability to achieve good results compared to other methods? 3. What are the weaknesses of the paper, especially regarding the theoretical analysis and the choice of baseline values? 4. Do you have any concerns about the clarity and conciseness of the paper's content, including Theorem 3? 5. How does the reviewer assess the complexity and computational intensity of the proposed algorithm? 6. How does the reviewer evaluate the novelty and reproducibility of the paper's content?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper The paper proposes a novel idea based on minimizing causal patterns to find the optimal baseline values in feature attribution. Strengths And Weaknesses Strength: The paper proposes a novel idea to find the optimal baseline values in feature attribution. Empirically it achieves very good results compared to other existing methods such as zero/mean baselines. A side weakness here: Why was conditional method not compared for CIFAR-10? Theoretical weakness: The authors propose their criteria of choosing baseline values based on the observation that " incorrect baseline values generate new causal pattern". While this observation is quite intuitive, why is the converse true? That is, it is a bit hard to see why minimizing the number of causal patterns would provide a good baseline value. Clarity weakness: A minor point: Theorem 3 needs to be more concise. Is it possible to move part of it into a different property, or remark, or derivations ahead of the formal theorem? Complexity of the algorithm: There is a trade-off between complexity and accuracy when we approximate the optimization. In practice, the proposed algorithm could be computationally intensive to get good baseline values, in particular for complex problems with higher orders of interactions. Clarity, Quality, Novelty And Reproducibility It is in general quite easy to follow the logic of the authors. The idea, to my knowledge, is novel. Also authors have addressed previous work that tried to tackle the problem, such as conditional distribution. The experiments are reproducible with details.
ICLR
Title Can We Faithfully Represent Absence States to Compute Shapley Values on a DNN? Abstract Masking some input variables of a deep neural network (DNN) and computing output changes on the masked input sample represent a typical way to compute attributions of input variables in the sample. People usually mask an input variable using its baseline value. However, there is no theory to examine whether baseline value faithfully represents the absence of an input variable, i.e., removing all signals from the input variable. Fortunately, recent studies (Ren et al., 2023a; Deng et al., 2022a) show that the inference score of a DNN can be strictly disentangled into a set of causal patterns (or concepts) encoded by the DNN. Therefore, we propose to use causal patterns to examine the faithfulness of baseline values. More crucially, it is proven that causal patterns can be explained as the elementary rationale of the Shapley value. Furthermore, we propose a method to learn optimal baseline values, and experimental results have demonstrated its effectiveness. 1 INTRODUCTION Many attribution methods (Zhou et al., 2016; Selvaraju et al., 2017; Lundberg and Lee, 2017; Shrikumar et al., 2017) have been proposed to estimate the attribution (importance) of input variables to the model output, which represents an important direction in explainable AI. In this direction, many studies (Lundberg and Lee, 2017; Ancona et al., 2019; Fong et al., 2019) masked some input variables of a deep neural network (DNN), and they used the change of network outputs on the masked samples to estimate attributions of input variables. As Fig. 1 shows, there are different types of baseline values to represent the absence of input variables. Theoretically, the trustworthiness of attributions highly depends on whether the current baseline value can really remove the signal of the input variable without bringing in new out-of-distribution (OOD) features. However, there is no criterion to evaluate the signal removal of masking methods. To this end, we need to first break the blind faith that seemingly reasonable baseline values can faithfully represent the absence of input variables, and the blind faith that seemingly OOD baseline values definitely cause abnormal features. In fact, because a DNN may have complex inference logic, seemingly OOD baseline values do not necessarily generate OOD features. Concept/causality-emerging phenomenon. The core challenge of theoretically guaranteeing or examining whether the baseline value removes all or partial signals of an input variable is to explicitly define the signal/concept/knowledge encoded by a DNN in a countable manner. To this end, Ren et al. (2023a) have discovered a counter-intuitive concept-emerging phenomenon in a trained DNN. Although the DNN does not have a physical unit to encode explicit causality or concepts, Ren et al. (2023a); Deng et al. (2022a) have surprisingly discovered that when the DNN is sufficiently trained, the sparse and symbolic concepts emerge. Thus, we use such concepts as a new perspective to define the optimal baseline value for the absence of input variables. As Fig. 1 shows, each concept represents an AND relationship between a specific set S of input variables. The co-appearance of these input variables makes a numerical contribution US to the network output. Thus, we can consider such a concept as a causal pattern1 of the network output, ∗Quanshi Zhang is the corresponding author. 
He is with the Department of Computer Science and Engineering, the John Hopcroft Center, at the Shanghai Jiao Tong University, China. [email protected]. 1Note that in this paper, the causal pattern means the extracted causal relationship between input variables and the output encoded by the DNN, rather than the true intrinsic causal relationship hidden in data. and US is termed the causal effect. For example, the concept of a rooster’s head consists of the forehead, eyes, beak, and crown, i.e., S= {forehead, eyes, beak, crown} = {f, e, b, c} for short. Only if input variables f , e, b, and c co-appear, the causal pattern S is triggered and makes an effect US on the confidence of the head classification. Otherwise, the absence of any input variables in the causal pattern S will remove the effect. Ren et al. (2023a) have extracted a set of sparse causal patterns (concepts) encoded by the DNN. More importantly, the following finding has proven that such causal patterns1 can be considered as elementary inference logic used by the DNN. Specifically, given an input sample with n variables, we can generate 2n different masked samples. We can use a relatively small number of causal patterns to accurately mimic network outputs on all 2n masked samples, which guarantees the faithfulness of causal patterns. Defining optimal baseline values based on causal patterns. From the above perspective of causal patterns, whether baseline values look reasonable and fit human’s intuition is no longer the key factor to determine the trustworthiness of baseline values. Instead, we evaluate the faithfulness of baseline values by using causal patterns. Because the baseline value is supposed to represent the absence of an input variable, we find that setting an optimal baseline value usually generates the most simplified explanation of the DNN, i.e., we may extract a minimum number of causal patterns to explain the DNN. Such an explanation is the most reliable according to Occam’s Razor. • We prove that using incorrect baseline values makes a single causal pattern be explained as an exponential number of redundant causal patterns. Let us consider the following toy example, where the DNN contains a causal pattern S={f, e, b, c} with a considerable causal effect E on the output. If an incorrect baseline value bf of the variable f (forehead) just blurs the image patch, rather than fully remove its appearance, then masking the variable f cannot remove all score E. The remaining score E−U{f,e,b,c} will be explained as redundant causal patterns U{e,b}, U{e,c}, U{e,b,c}, etc. • Furthermore, incorrect baseline values may also generate new patterns. For example, if baseline values of {f, e, b, c} are set as black regions, then masking all four regions may generate a new pattern of a black square, which is a new causal pattern that influences the network output. Therefore, we consider that the optimal baseline value, which faithfully reflects the true inference logic, usually simplifies the set of causal patterns. I.e., it usually reduces the overall strength of existing causal effects most without introducing new causal effects. However, we find that most existing masking methods are not satisfactory from this perspective (see Section 3.2 and Table 1), although the masking method based on conditional distribution of input variables (Covert et al., 2020b; Frye et al., 2021) performs a bit better. 
In particular, we notice that Shapley values can also be derived from causal patterns in theory, i.e., the causal patterns are proven to be elementary effects of Shapley values. Therefore, we propose a new method to learn optimal baseline values for Shapley values, which removes the causal effects of the masked input variables and avoids introducing new causal effects. Contributions of this paper can be summarized as follows. (1) We propose a metric to examine whether the masking approach in attribution methods could faithfully represent the absence state of input variables. Based on this metric, we find that most previous masking methods are not reliable. (2) We define and develop an approach to estimating optimal baseline values for Shapley values, which ensures the trustworthiness of the attribution. 2 EXPLAINABLE AI THEORIES BASED ON GAME-THEORETIC INTERACTIONS This paper is a typical achievement on the theoretical system of game-theoretic interactions. In fact, our research group has developed and used the game-theoretical interaction as a new perspective to solve two challenges in explainable AI, i.e., (1) how to define and represent implicit knowledge encoded by a DNN as explicit and countable concepts, (2) how to use concepts encoded by the DNN to explain its representation power or performance. More importantly, we find that the gametheoretic interaction is also a good perspective to analyze the common mechanism shared by previous empirical findings and explanations of DNNs. • Explaining the knowledge/concepts encoded by a DNN. Defining interactions between input variables of a DNN in game theory is a typical research direction (Grabisch and Roubens, 1999; Sundararajan et al., 2020). To this end, we further defined the multi-variate interaction (Zhang et al., 2021a;d) and multi-order interaction (Zhang et al., 2021b) to represent interactions of different complexities. Ren et al. (2023a) and Li and Zhang (2023) first discovered that we could consider game-theoretic interactions as the concepts encoded by a DNN, considering the following three terms. (1) We found that a trained DNN usually only encoded very sparse and salient interactions, and each interaction made a certain effect on the network output. (2) We proved that we could just use the effects of such a small number of salient interactions to well mimic/predict network outputs on an exponential number of arbitrarily masked input samples. (3) We found that salient interactions usually exhibited strong transferability across different samples, strong transferability across different DNNs, and strong discrimination power. Thus, the above three perspectives comprised the solid foundation of considering salient interactions as the concepts encoded by a DNN. Furthermore, Cheng et al. (2021b) found that such interactions usually represented the most reliable and prototypical concepts encoded by a DNN. Cheng et al. (2021a) further analyzed the different signal-processing behaviors of a DNN in encoding shapes and textures. • The game-theoretic interaction is also a new perspective to investigate the representation power of a DNN. Deng et al. (2022a) proved a counter-intuitive bottleneck/difficulty of a DNN in representing interactions of the intermediate complexity. Zhang et al. (2021b) explored the effects of the dropout operation on interactions to explain the generalization power of a DNN. Wang et al. (2021a;b); Ren et al. 
(2021) used interactions between input variables to explain the adversarial robustness and adversarial transferability of a DNN. Zhou et al. (2023) found that complex (highorder) interactions were more likely to be over-fitted, and they used the generalization power of different interaction concepts to explain the generalization power of the entire DNN. Ren et al. (2023b) proved that a Bayesian neural network (BNN) was less likely to encode complex (highorder) interactions, which avoided over-fitting. • Game-theoretic interactions are also used to analyze the common mechanism shared by many empirical findings. Deng et al. (2022b) discovered that almost all (fourteen) attribution methods could be re-formulated as a reallocation of interactions in mathematics. This enabled the fair comparison between different attribution methods. Zhang et al. (2022) proved that twelve previous empirical methods of boosting adversarial transferability could be explained as reducing interactions between pixel-wise adversarial perturbations. 3 PROBLEMS WITH THE REPRESENTATION OF THE MASKED STATES The Shapley value (Shapley, 1953) was first introduced in game theory to measure the contribution of each player in a game. People usually use Shapley values to estimate attributions of input variables of a DNN. Let the input sample x of the DNN contain n input variables, i.e., x = [x1, . . . , xn]. The Shapley value of the i-th input variable ϕi is defined as follows. ϕi = ∑ S⊆N\{i} [|S|!(n− |S| − 1)!/n!] · [ v(xS∪{i})− v(xS) ] (1) where v(xS) ∈ R denotes the model output when variables in S are present, and variables in N \S are masked. Specifically, v(x∅) represents the model output when all input variables are masked. The Shapley value of the variable i is computed as the weighted marginal contribution of i when the variable i is present w.r.t. the case when the variable i is masked, i.e. v(xS∪{i})− v(xS). 𝑣 𝑥𝑆 : the income is less than 50k relation ship marital status age educati on sex 𝑈𝑆 𝐶𝑆1 -0.72 𝐶𝑆2 +0.58 𝐶𝑆3 -1.18 𝐶𝑆4 -0.93 𝐶𝑆5 +0.58 𝐶𝑆6 +0.52 𝐶𝑆7 -0.52 Figure 2: Causal patterns that explain the inference on a sample in the income dataset. Table 1: The ratio R of the remaining and newly introduced causal effects in the masked inputs. A small value of R meant that baseline values removed most original causal effects and did not introduce many new effects. R(zero) R(mean) R(blur) R(conditional) R(ours) MNIST 1.1736 0.3043 0.4159 0.3780 0.2185 CIFAR-10 0.6630 0.8042 0.7288 0.4027 0.1211 The Shapley value is widely considered a fair attribution method, because it satisfies the linearity, dummy, symmetry, and efficiency axioms (Weber, 1988) (please refer to Appendix D). However, when we explain a DNN, a typical challenge is how to faithfully define the absence of an input variable. The most classical way is to use baseline values (or called reference values) b = [b1, b2, . . . , bn] to mask variables to represent their absence. Specifically, given an input sample x, xS denotes a masked sample, which is generated by masking variables in the set N \ S. If i ∈ S, (xS)i = xi; otherwise, (xS)i = bi (2) We aim to learn optimal baseline values b to faithfully represent absent states of input variables. Decomposing a DNN’s output into sparse interactions. Given a trained DNN v and an input x with n input variables, Ren et al. (2023a) have proven that the DNN output v(x) can be decomposed into effects of interactions between input variables. Specifically, let S ⊆ N denote a subset of input variables. 
The interaction effect between variables in S is defined as the following Harsanyi dividend (Harsanyi, 1982). US def = ∑ S′⊆S (−1)|S|−|S ′| · v(xS′) (3) Based on this definition, we have v(x) = ∑ S⊆N US . Sparse salient interactions can be considered as causal patterns1 (or concepts) encoded by the DNN. Theorem 1 and Remark 1 prove that most interactions have ignorable effects US ≈ 0, and people can use a few salient interactions with non-ignorable effects to well approximate the inference scores on 2n different masked samples. Thus, we can consider such interactions as causal patterns1 or concepts encoded by the DNN. Accordingly, we can consider the interaction effect US as the causal effect. Besides, Remark 1 has been verified on different DNNs learned for various tasks by experiments in both Appendix G.1 and (Ren et al., 2023a). Theorem 1 (Faithfulness, proven by Ren et al. (2023a) and Appendix E.1) Let us consider a DNN v and an input sample x with n input variables. We can generate 2n different masked samples, i.e., {xS |S ⊆ N}. The DNN’s outputs on all masked samples can always be well mimicked as the sum of the triggered interaction effects in Eq. (3), i.e., ∀S⊆N, v(xS) = ∑ S′⊆S US′ . Remark 1 (Sparsity) Interaction effects in most DNNs are usually very sparse. Most interaction effects are almost zero, i.e., US ≈ 0. A few most salient interaction effects in Ω (less than 100 interaction effects in most cases) are already enough to approximate the DNN, i.e., ∀S ⊆ N, v(xS)≈∑ S′∈Ω,S′⊆S US′ , where |Ω| ≪ 2 n. Each causal pattern (concept) S represents an AND relationship between input variables in S. For example, the head pattern of a rooster consists of {forehead, eyes, beak, crown}. If the forehead, eyes, beak, and crown of the rooster co-appear, then the head pattern S={forehead, eyes, beak, crown} is triggered and makes a causal effect US on the output. Otherwise, if any part is masked, the causal pattern S will not be triggered, and the DNN’s inference score v(x) will not receive the causal effect US . In sum, when we mask an input variable i, it is supposed to remove all causal effects of all AND relationships that contain the variable i. Please see Appendix F for the proof. 3.1 EXAMINING THE FAITHFULNESS OF BASELINE VALUES USING CAUSAL PATTERNS We use salient causal patterns (or concepts) to evaluate the faithfulness of masking methods. Specifically, we examine whether baseline values remove most causal effects depending on xi, and whether baseline values generate new causal effects. The evaluation of the masking methods based on salient causal effects is theoretically supported from the following three perspectives. First, Theorem 1 and Remark 1 prove that the inference score of a DNN can be faithfully disentangled into a relatively small number of causal patterns. Second, Theorem 2 shows that Shapley values can be explained as a re-allocation of causal effects to input variables. Therefore, reducing effects of salient patterns means the removal of elementary factors that determine Shapley values. Besides, in order to verify that the reduction of causal patterns can really represent the absence of input variables, we have conducted experiments to find that salient patterns triggered by white noise inputs were much less than those triggered by normal images. Please see Appendix G.2 for details. Theorem 2 (proven by Harsanyi (1982) and Appendix E.2) We can directly derive Shapley values from the effects US of causal patterns. 
The Shapley value can be considered as uniformly allocating each causal pattern S’s effect US to all its variables, i.e. ϕi = ∑ S⊆N\{i} 1 |S|+1US∪{i}. Third, an incorrect baseline value bi will make partial effects of the AND relationship of the variable i be mistakenly explained as an exponential number of additional redundant causal patterns, which significantly complicates the explanation. Therefore, the optimal baseline value is supposed to generate the most sparse causal patterns as the simplest explanation of the DNN. Compared to dense causal patterns generated by sub-optimal baseline values, the simplest explanation removes as many as existing causal effects as possible without introducing additional causal effects. Remark 2 (proof in Appendix E.3) Let us consider a function with a single causal pattern f(xS)= wS ∏ j∈S(xj−δj). Accordingly, ground-truth baseline values of variables are obviously {δj}, because setting any variable ∀j ∈ S, xj = δj will deactivate this pattern. Given the correct baseline values b∗j =δj , we can use a single causal pattern to regress f(xS), i.e., US = f(xS), ∀ S′ ̸= S,US′ = 0. Theorem 3 (proof in Appendix E.3) For the function f(xS)=wS ∏ j∈S(xj−δj), if we use m ′ incorrect baseline values {b′j |b′j ̸=δj} to replace correct ones to compute causal effects, then the function will be explained to contain at most 2m ′ causal patterns. Theorem 4 (proof in Appendix E.3) If we use m′ incorrect baseline values to compute causal effects in the function f(xS) = wS ∏ j∈S(xj −δj), a total of ( m′ k−|S|+m′ ) causal patterns of the k-th order emerge, k ≥ |S|−m′. A causal pattern of the k-th order means that this causal pattern represents the AND relationship between k variables. Specifically, Remark 2, Theorems 3 and 4 provide a new perspective to understand how incorrect baseline values generate new causal patterns. Remark 2 shows how correct baseline values explain a toy model that contains a single causal pattern. Theorems 3 and 4 show that incorrect baseline values will use an exponential number of redundant low-order patterns to explain a single high-order causal pattern. For example, we are given the function f(x) =w(xp−δp)(xq−δq) s.t. xp = 3, xq = 4, δp = 2, δq = 3. If we use ground-truth baseline values {δp, δq}, then the function is explained as simple as a single causal pattern Ω={{p, q}}, which yields correct Shapley values ϕp = ϕq = 0.5 ·w, according to Theorem 2. Otherwise, if we use incorrect baseline values {b′p = 1, b′q = 1}, then this function will be explained as four causal patterns Ω= {∅, {p}, {q}, {p, q}}, i.e., f(x) = U∅C∅ +U{p}C{p} +U{q}C{q} +U{p,q}C{p,q}, where U∅ = 2w, U{p} = −4w, U{q} = −3w, and U{p,q} = 6w are computed using incorrect baseline values. Incorrect baseline values increase complicated causal patterns and lead to incorrect Shapley values ϕp = −w, ϕq = 0. In fact, the existence of most newly introduced causal patterns is due to that the effects of a high-order causal pattern are not fully removed, and that OOD causal patterns (new OOD edges or shapes) may be caused by incorrect baseline values. 3.2 PROBLEMS WITH PREVIOUS MASKING METHODS In this subsection, we compare causal patterns in the masked sample with causal patterns in the original sample to evaluate the following baseline values. (1) Mean baseline values. As Fig. 1 shows, the baseline value of each input variable is set to the mean value of this variable over all samples (Dabkowski and Gal, 2017), i.e. bi = Ex[xi]. 
However, empirically, this method actually introduces additional signals to the input. For example, mean values introduce massive grey dots to images and may form new edges as abnormal causal patterns. This has been verified by experiments in Table 1. Experimental details will be introduced later. (2) Zero baseline values. Baseline values of all input variables are set to zero (Ancona et al., 2019; Sundararajan et al., 2017), i.e. ∀i ∈ N, bi = 0. As Fig. 1 shows, just like mean baseline values, zero baseline values also introduce additional signals (black dots) to the input (verified in Table 1). (3) Blurring input samples. Fong and Vedaldi (2017) and Fong et al. (2019) blur image pixels xi using a Gaussian kernel as its masked state. Covert et al. (2020a); Sturmfels et al. (2020) mentioned that this approach only removed high-frequency signals, but failed to remove low-frequency signals. (4) For each input variable, determining a different baseline value for each specific context S. Instead of fixing baseline values as constants, some studies use varying baseline values to compute v(xS) given x, which are determined temporarily by the context S in x. Some methods (Frye et al., 2021; Covert et al., 2020b) define v(xS) by modeling the conditional distribution of variable values in N \S given the context S, i.e. v(xS) = Ep(x′|xS)[model(xS ⊔ x ′ N\S)]. The operation ⊔ means the concatenation of x’s dimensions in S and x′’s dimensions in N \ S. By assuming the independence between input variables, the above conditional baseline values can be simplified to marginal baseline values (Lundberg and Lee, 2017), i.e. v(xS)=Ep(x′)[model(xS ⊔ x′N\S)]. We conducted experiments to examine whether the above baseline values remove all causal patterns in the original input and whether baseline values introduce new causal patterns. We used the metric R=Ex [ ( ∑ S⊆N |U ′ S |− ∑ S⊆N |U (noise) S |)/( ∑ S⊆N |US |) ] to evaluate the quality of masking. We generated a set of samples based on x, where a set of input variables were masked, and U ′S denote the causal effect in such masked samples. US denote the causal effect in the original sample x, which was used for normalization. U (noise)S denotes the causal effect in a white noise input, and it represents the unavoidable effect of huge amounts of noise patterns. Thus, we considered the U (noise)S term as an inevitable anchor value and removed it from R for a more convincing evaluation. The masking method would have two kinds of effects on causal patterns. (1) We hoped to remove all existing salient patterns in the original sample. (2) We did not expect the masking method to introduce new salient patterns. Interestingly, the removal of existing salient patterns decreased the R value, while the triggering of new patterns increased the R value. Thus, the R metric reflected both effects. A small value of R indicated a good setting of baseline values. We used 20 images in the MNIST dataset (LeCun et al., 1998) and 20 images in the CIFAR-10 dataset (Krizhevsky et al., 2009) to compute R, respectively. We split each MNIST image into 7× 7 grids and split each CIFAR-10 image into 8× 8 grids. For each image, we masked the central 4× 3 grids using the zero baseline, mean baseline, blur baseline, and the baseline based on the conditional distribution, and computed the metric of R(zero), R(mean), R(blur), and R(conditional), respectively. Table 1 shows that the ratio R by using previous baseline values were all large. 
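To make the computation of R explicit, here is a small sketch of the metric's structure (a simplified re-implementation for illustration, not the code used for Table 1); sum_abs_effects, mask_fn, and the sample/noise inputs are assumed placeholder callables and data, and in practice Σ_S |US| would be computed on small grids as described above.

```python
import numpy as np

def masking_quality_R(sum_abs_effects, samples, mask_fn, noise_sample):
    """Sketch of the masking-quality metric from Section 3.2:
        R = E_x[ (sum_S |U'_S| - sum_S |U^(noise)_S|) / sum_S |U_S| ],
    where U'_S are causal effects in the masked sample, U_S in the original
    sample, and U^(noise)_S in a white-noise input. `sum_abs_effects(x)` is an
    assumed helper returning sum_S |U_S| for an input x (e.g., the brute-force
    Harsanyi enumeration on a small grid), and `mask_fn(x)` returns x with the
    chosen region replaced by the masking method's baseline values; both are
    placeholders, not the exact setup used for Table 1."""
    noise_term = sum_abs_effects(noise_sample)
    ratios = [(sum_abs_effects(mask_fn(x)) - noise_term) / sum_abs_effects(x)
              for x in samples]
    return float(np.mean(ratios))

# Usage sketch (hypothetical): a smaller R means the masking removed more of the
# existing causal effects without triggering new ones.
# R_zero = masking_quality_R(sum_abs_effects, images, mask_with_zeros, white_noise)
# R_ours = masking_quality_R(sum_abs_effects, images, mask_with_learned_b, white_noise)
```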
Although the masking method based on conditional distribution performed better than some other baseline values, our method exhibited the best performance. It indicates that previous masking methods did not remove most existing patterns and/or trigger new patterns. 3.3 ABSENCE STATES AND OPTIMAL BASELINE VALUES In the original scenario of game theory, the Shapley value was proposed without the need to define the absence of players. When people explain a DNN, we consider that the true absence state of variables should generate the most simplified causal explanation. Remark 2 and Theorem 3 show that correct baseline values usually generate the simplest causal explanation, i.e., using the least number of causal patterns to explain the DNN. In comparison, if an incorrect baseline value bi does not fully remove all effects of AND relationships of the variable i, then the remained effects will be mistakenly explained as a large number of other redundant patterns. The above proof well fits Occam’s Razor, i.e., the simplest causality with the minimum causal patterns is more likely to represent the essence of the DNN’s inference logic. This also lets us consider the baseline values that minimize the number of salient causal patterns (i.e., achieving the simplest causality) as the optimal baseline values. Therefore, the learning of the baseline value b∗i of the i-th variable can be formulated to sparsify causal patterns in the deep model. Particularly, such baseline values are supposed to remove existing causal effects without introducing many new effects. b∗ = argminb ∑ x |Ω(x)|, subject to Ω(x) = {S ⊆ N ||US(x|b)| > τ} (4) where US(x|b) denotes the causal effect computed on the sample x by setting baseline values to b. 4 ESTIMATING BASELINE VALUES Based on Theorem 3, we derive Eq. (4) to learn optimal baseline values, but the computational cost of enumerating all causal patterns is exponential. Thus, we explore an approximate solution to learning baseline values. According to Theorem 4, incorrect baseline values usually mistakenly explain high-order causal patterns as an unnecessarily large number of low-order causal patterns, where the order m of the causal effect US is defined as the cardinality of S, m = |S|. Thus, the objective of learning baseline values is roughly equivalent to penalizing effects of loworder causal patterns, in order to prevent learning incorrect baseline values that mistakenly represent the high-order pattern as an exponential number of low-order patterns. min b L(b), subject to L(b) = ∑ x ∑ S⊆N,|S|≤k |US(x|b)| (5) An approximate-yet-efficient solution. When each input sample contains a huge number of variables, e.g., an image sample, directly optimizing Eq. (5) is NP-hard. Fortunately, we find the multiorder Shapley value and the multi-order marginal benefit in the following equation have strong connections with multi-order causal patterns (proven in Appendix H), as follows. ϕ (m) i (x|b) def =ES⊆N\{i} |S|=m [ v(xS∪{i}, b)−v(xS , b) ] = ES⊆N\{i} |S|=m [∑ L⊆S UL∪{i}(x|b) ] ∆vi(S|x, b) def =v(xS∪{i}, b)−v(xS , b) = ∑ L⊆S UL∪{i}(x|b) (6) where ϕ(m)i (x|b) and ∆vi(S|x, b) denote the m-order Shapley value and the m-order marginal benefit computed using baseline values b, respectively, where the order m is given as m= |S|. According to the above equation, high-order casual patterns US are only contained by high-order Shapley values ϕ(m)i and high-order marginal benefits ∆vi. 
Therefore, in order to penalize the effects of low-order causal patterns, we penalize the strength of low-order Shapley values and low-order marginal benefits, respectively, as an engineering solution to boost computational efficiency. In experiments, these loss functions were optimized via SGD. LShapley(b) = ∑ m∼Unif(0,λ) ∑ x∈X ∑ i∈N |ϕ(m)i (x|b)|, Lmarginal(b) = ∑ m∼Unif(0,λ) ∑ x∈X ∑ i∈N E S⊆N |S|=m |∆vi(S|x, b)| (7) where λ ≥ m denotes the maximum order to be penalized. We have conducted experiments to verify that baseline values b learned by loss functions in Eq. (7) could effectively sparsify causal effects of low-order causal patterns in Eq. (5). Please see Appendix G.3 for results. Most importantly, we still used the metric R in Section 3.2 to check whether the learned baseline values removed original causal patterns in the input while not introducing new patterns. The low value of R(ours) in Table 1 shows that baseline values learned by our method successfully removed existing salient causal patterns without introducing many new salient patterns. 5 EXPERIMENTS 5.1 VERIFICATION OF CORRECTNESS OF BASELINE VALUES AND SHAPLEY VALUES Correctness of baseline values on synthetic functions. People usually cannot determine the ground truth of baseline values for real images, such as the MNIST dataset. Therefore, we conducted experiments on synthetic functions with ground-truth baseline values, in order to verify the Table 5: Accuracy of Shapley values on the extended Addition-Multiplication dataset using different settings of baseline values. Zero Mean Baseline values in SHAP Kernel SHAP Frye et al. (2021) Ours Accuracy 82.88% 72.63% 81.25% 33.88% 66.00% 100% Table 6: An example of Shapley values computed on different baseline values. The function is model(x) =−2.62x1−5x3− 1.98x6(x4−0.94)+1.15(x5−0.91)−4.23x7, and the input is x = [0, 1, 1, 1, 1, 1, 1]. Baseline values The computed Shapley values {ϕi} Truth/Ours {0, 0,−5,−0.06, 0.10,−0.06,−4.23} Zero baseline {0, 0,−5,−0.99, 1.15, 0.87,−4.23} Mean baseline {1.31, 0,−2.50,−0.74, 0.58, 0.19,−2.11} Setting in SHAP {0.014, 0,−0.011,−0.003, 0.003, 0.001,−0.010} correctness of the learned baseline values. We randomly generated 100 functions, whose causal patterns and ground truth of baseline values could be easily determined. This dataset has been released at https://github.com/zzp1012/faithful-baseline-value. The generated functions were composed of addition, subtraction, multiplication, exponentiation, and the sigmoid operations (see Table 3). For example, for the function y=sigmoid(3x1x2−3x3−1.5)−x4x5+0.25(x6+x7)2, xi∈{0, 1}, there were three causal patterns (i.e. {x1, x2, x3}, {x4, x5}, {x6, x7}), which were activated only if xi =1 for i∈{1, 2, 4, 5, 6, 7} and x3 =0. In this case, the ground truth of baseline values was b∗i = 0 for i ∈ {1, 2, 4, 5, 6, 7} and b∗3 = 1. Please see Appendix G.4 for more discussions about the setting of ground-truth baseline values. We used our method to learn baseline values on these functions and tested the accuracy. Note that |bi−b∗i |∈ [0, 1] and b∗i ∈{0, 1}. If |bi−b∗i |<0.5, we considered the learned baseline value correct. We set λ=0.5n in both LShapley and Lmarginal. The results are reported in Table 4 and are discussed later. Correctness of baseline values on functions in (Tsang et al., 2018). Besides, we also evaluated the correctness of the learned baseline values using functions in Tsang et al. (2018). 
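For concreteness, the following is a rough sketch of how baseline values might be optimized under Lmarginal in Eq. (7) with SGD; the model_out callable, the subset-sampling scheme, and all hyperparameters here are simplified assumptions rather than the actual implementation, and the LShapley variant would instead penalize the absolute value of marginal benefits averaged over sampled contexts of each order.

```python
import torch

def learn_baseline_values(model_out, X, lam_frac=0.2, steps=2000, lr=0.01, n_subsets=16):
    """Rough sketch of minimizing L_marginal in Eq. (7) with SGD.

    model_out(batch) -> tensor of scores (e.g., penultimate-layer features whose
    L1 difference gives |Delta v_i(S)|); X: (num_samples, n) tensor of inputs.
    Low-order marginal benefits |v(x_{S u {i}}) - v(x_S)| are sampled and
    penalized so that the learned b removes low-order causal effects.
    """
    n = X.shape[1]
    b = X.mean(dim=0).clone().requires_grad_(True)      # mean-init of baseline values
    opt = torch.optim.SGD([b], lr=lr)
    max_order = max(1, int(lam_frac * n))                # lambda = lam_frac * n

    for _ in range(steps):
        x = X[torch.randint(len(X), (1,))].squeeze(0)
        loss = x.new_zeros(())
        for _ in range(n_subsets):
            m = int(torch.randint(0, max_order, (1,)))   # order sampled from Unif(0, lambda)
            i = int(torch.randint(0, n, (1,)))
            others = torch.tensor([j for j in range(n) if j != i])
            S = others[torch.randperm(n - 1)[:m]]
            keep = torch.zeros(n, dtype=torch.bool)
            keep[S] = True
            x_S = torch.where(keep, x, b)                # variables outside S masked by b
            keep[i] = True
            x_Si = torch.where(keep, x, b)
            loss = loss + (model_out(x_Si.unsqueeze(0)) - model_out(x_S.unsqueeze(0))).abs().sum()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return b.detach()
```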
Among all the 92 input variables in these functions, the ground truth of 61 variables could be determined (see Appendix G.4). Thus, we used these annotated baseline values to test the accuracy. Table 4 reports the accuracy of the learned baseline values on the above functions. In most cases, the accuracy was above 90%, showing that our method could effectively learn correct baseline values. A few functions in (Tsang et al., 2018) did not have salient causal patterns, which caused errors in the learning. Besides, in experiments, we tested our method under three different initializations of baseline values (i.e., 0, 0.5, and 1). Table 4 shows that baseline values learned with different initialization settings all converged to similar and high accuracy. Correctness of the computed Shapley values. Incorrect baseline values lead to incorrect Shapley values. We verified the correctness of the computed Shapley values on the extended AdditionMultiplication dataset (Zhang et al., 2021c). We added the subtraction operation to avoid all baseline values being zero. Theorem 2 considers the Shapley value as a uniform assignment of effects of each causal pattern to its compositional variables. This enabled us to determine the ground-truth Shapley value of variables without baseline values based on causal patterns. For example, the function f(x) = 3x1x2 + 5x3x4 + x5 s.t. x = [1, 1, 1, 1, 1] contained three causal patterns, according to the principle of the most simplified causality. Accordingly, the ground-truth Shapley values were ϕ̂1= ϕ̂2=3/2, ϕ̂3= ϕ̂4=5/2, and ϕ̂5=1. See Appendix G.5 for more details. The estimated Shapley value ϕi was considered correct if |ϕi− ϕ̂i| ≤ 0.01; otherwise, incorrect. Then, we computed the accuracy of the estimated Shapley values as the ratio of input variables with correct Shapley values. Discussion on why the learned baseline values generated correct Shapley values. We computed Shapley values of variables in the extended Addition-Multiplication dataset using different baseline values, and compared their accuracy in Table 5. The result shows that our method exhibited the highest accuracy. Table 6 shows an example of incorrect Shapley values computed by using other baseline values. Our method generated correct Shapley values in this example. For the variable x6, due to its negative coefficient −1.98, its contribution should be negative. However, all other baseline values generated positive Shapley values for x6. The term −4.23x7 showed the significant effect of the variable x7 on the output, but its Shapley value computed using baseline values in SHAP was just −0.010, which was obviously incorrect. 
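The distortion shown in Table 6 is easy to reproduce with a brute-force implementation of Eq. (1); the sketch below is our own illustration, and b_true is our reconstruction of a baseline vector consistent with the reported Truth/Ours row (x2 never appears in the function, so its baseline entry is arbitrary).

```python
import itertools
from math import factorial

def exact_shapley(v, x, b):
    """Exact Shapley values (Eq. 1), masking absent variables by baseline values b (Eq. 2)."""
    n = len(x)
    def v_masked(S):
        return v([x[i] if i in S else b[i] for i in range(n)])
    phi = [0.0] * n
    for i in range(n):
        for r in range(n):
            for S in itertools.combinations([j for j in range(n) if j != i], r):
                w = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                phi[i] += w * (v_masked(set(S) | {i}) - v_masked(set(S)))
    return phi

# The function and input from Table 6.
model = lambda z: (-2.62 * z[0] - 5 * z[2] - 1.98 * z[5] * (z[3] - 0.94)
                   + 1.15 * (z[4] - 0.91) - 4.23 * z[6])
x = [0, 1, 1, 1, 1, 1, 1]

# b_true: our reconstructed baseline values that deactivate every causal pattern
# of this particular function; b_zero: the zero-baseline setting.
b_true = [0, 0, 0, 0.94, 0.91, 0, 0]
b_zero = [0] * 7

print([round(p, 2) for p in exact_shapley(model, x, b_true)])
# approx. [0.0, 0.0, -5.0, -0.06, 0.1, -0.06, -4.23]  (the Truth/Ours row)
print([round(p, 2) for p in exact_shapley(model, x, b_zero)])
# approx. [0.0, 0.0, -5.0, -0.99, 1.15, 0.87, -4.23]  (the zero-baseline row)
```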
Figure 3: The learned baseline values (left) and Shapley values computed with different baseline values (right) on the income dataset; the right panels compare zero baseline values, mean baseline values, baseline values in SHAP, baseline values in SAGE, and our results with LShapley and Lmarginal on two input samples. Results on the MNIST, the CIFAR-10, and the credit datasets are shown in Appendix G.6 and G.7.
5.2 RESULTS AND EVALUATION ON REALISTIC DATASETS AND MODELS
Learning baseline values. We used our method to learn baseline values for MLPs, LeNet (LeCun et al., 1998), and ResNet-20 (He et al., 2016) trained on the UCI South German Credit dataset (namely credit dataset) (Dua and Graff, 2017), the UCI Census Income dataset (namely income dataset) (Dua and Graff, 2017), the MNIST dataset (LeCun et al., 1998), and the CIFAR-10 dataset (Krizhevsky et al., 2009), respectively. We learned baseline values by using either LShapley or Lmarginal as the loss function. In the computation of LShapley, we set v(xS) = log [p(y^truth|xS) / (1 − p(y^truth|xS))]. In the computation of Lmarginal, |∆vi(S)| was set to |∆vi(S)| = ∥h(xS∪{i}) − h(xS)∥1, where h(xS) denotes the output feature of the penultimate layer given the masked input xS, in order to boost the efficiency of learning. We set λ = 0.2n for the MNIST and the CIFAR-10 datasets, and set λ = 0.5n for the simpler data in the two UCI datasets. Given baseline values, we used the sampling-based approximation (Castro et al., 2009) to estimate Shapley values. We used two ways to initialize baseline values before learning, i.e., setting baseline values to zero or to mean values over different samples, namely zero-init and mean-init, respectively. Fig. 3 (left) shows that baseline values learned with different initialization settings all converged to similar baseline values, except for very few dimensions with multiple local-minimum solutions (discussed in Appendix G.7), which demonstrated the stability of our method.
Comparison of attributions computed using different baseline values. Fig. 3 shows the learned baseline values and the computed Shapley values on the income dataset. We found that attributions generated by zero/mean baseline values conflicted with the results of all other methods. Our method found that the occupation had more influence than the marital status on the income, which was somewhat consistent with common life experience.
However, baseline values in SHAP and SAGE sometimes generated abnormal explanations. In this top-right example, the attribute capital gain was zero, which was not supposed to support the prediction of “the person made over 50K a year.” However, the SAGE’s baseline values generated a large positive Shapley value for capital gain. In the bottom-right example, both SHAP and SAGE considered the marital status important for the prediction. SHAP did not consider the occupation as an important variable. Therefore, we considered these explanations not reliable. Attribution maps and baseline values generated on the CIFAR-10 and the MNIST datasets are provided in Appendix G.6. Compared to zero/mean/blurring baseline values, our baseline values were more likely to ignore noisy variables in the background, which were far from the foreground in images. Compared to SHAP, our method yielded more informative attributions. Besides, our method generated smoother attributions than SAGE. 6 CONCLUSIONS In this paper, we have defined the absence state of input variables in terms of causality. Then, we have found that most existing masking methods cannot faithfully remove existing causal patterns without triggering new patterns. In this way, we have formulated optimal baseline values for the computation of Shapley values as those that remove most causal patterns. Then, we have proposed an approximate-yet-efficient method to learn optimal baseline values that represent the absence states of input variables. Experimental results have demonstrated the effectiveness of our method. ETHIC STATEMENT This paper aims to examine the masking approach in previous explaining methods. We find that previous settings of the masking approach cannot faithfully represent the absence of input variables, thereby hurting the trustworthiness of the obtained explanations. Therefore, we propose a new method to learn optimal baseline values to represent the absence of input variables. In this way, the trustworthiness of explanations of the DNN is further boosted. There are no ethical issues with this paper. REPRODUCIBILITY STATEMENT We have provided proofs for all theoretical results in Appendix E and Appendix H. We have also provided experimental details in Section 5 and Appendix G. Furthermore, we will release the code when the paper is accepted. ACKNOWLEDGEMENT This work is partially supported by the National Nature Science Foundation of China (62276165), National Key R&D Program of China (2021ZD0111602), Shanghai Natural Science Foundation (21JC1403800,21ZR1434600), National Nature Science Foundation of China (U19B2043). This work is also partially supported by Huawei Technologies Inc. A RELATED WORKS No previous methods directly examined the faithfulness of the masking methods. Instead, we made a survey in a larger scope of attribution methods and other explainable AI studies, and put them in the appendix. Nevertheless, we will put this section back to the main paper if the paper is accepted. In the scope of explainable AI, many methods (Simonyan et al., 2014; Yosinski et al., 2015; Mordvintsev et al., 2015; Dosovitskiy and Brox, 2016; Zhou et al., 2015) have been proposed to explain the DNN. Among all methods, the estimation of attributions for each input variable represents a classical direction (Zhou et al., 2016; Selvaraju et al., 2017; Lundberg and Lee, 2017; Shrikumar et al., 2017). In this paper, we mainly focus on attributions based on Shapley values. Shapley values. 
The Shapley value (Shapley, 1953) in game theory was widely considered as a fair distribution of the overall reward in a game to each player (Weber, 1988). (Sen et al., 1981) and (Grömping, 2007) used the Shapley value to attribute the correlation coefficient of a linear regression to input features. (Štrumbelj et al., 2009; Štrumbelj and Kononenko, 2014) used the Shapley value to attribute the prediction of a model to input features. (Bork et al., 2004) used the Shapley value to measure importances of protein interactions in large, complex biological interaction networks. (Keinan et al., 2004) employed the Shapley value to measure causal effects in neurophysical models. (Sundararajan et al., 2017) proposed Integrated Gradients based on the AumannShapley(Aumann and Shapley, 2015) cost-sharing technique. Besides above local explanations, (Covert et al., 2020b) focused on the global interpretability. In order to compute the Shapley value in deep models efficiently, (Lundberg and Lee, 2017) proposed various approximations for Shapley valus in DNNs. (Lundberg et al., 2018) further computed the Shapley value on tree emsembles. (Aas et al., 2021) generalized the approximation method in (Lundberg and Lee, 2017) to the case when features were related to each other. (Ancona et al., 2019) further formulated a polynomial-time approximation of Shapley values for DNNs. Baseline values. In terms of baseline values of Shapley values, most studies (Covert et al., 2020a; Merrick and Taly, 2020; Sundararajan and Najmi, 2020; Kumar et al., 2020) compared influences of baseline values on explanations, without providing any principles for setting baseline values. Shrikumar et al. (2017) proposed DeepLIFT to estimate attributions of input variables, and also mentioned the choice of baseline values. Besides, Agarwal and Nguyen (2021) and Frye et al. (2021) used generative models to alleviate the out-of-distribution problem caused by baseline values. Unlike previous studies, we rethink and formulate baseline values from the perspective of gametheoretic causality. We define the absent state of input variables, and propose a method to learn optimal baseline values based on the number of causal patterns. B QUANTITATIVE EVALUATION OF ATTRIBUTIONS FOR IMAGE CLASSIFICATION In order to quantitatively evaluate Shapley values computed by different baseline values on the MNIST dataset, we constructed an And-Or decision tree following (Harradon et al., 2018), whose structure directly provided the ground-truth Shapley value for each input variable. Then, we used different attribution methods to explain the decision tree. Table 7 shows that our method generated more accurate Shapley values than other baseline values. We constructed a decision tree (Song et al., 2013) for each category in the MNIST dataset. Specifically, for each category (digit), we first computed the average image over all training samples in this category. Let x̄(c) ∈ Rn denote the average image of the c-th category. Then, we built a decision tree by considering each pixel as an internal node. The splitting rule for the decision tree was designed as follows. Given an input x in the category c, the splitting criterion at the pixel (node) xi was designed as ( (x̄ (c) i > 0.5)&(xi > 0.5) ) 2. If (x̄(c)i > 0.5)&(xi > 0.5) = True, then the pixel value xi was added to the output; otherwise, xi was ignored. 
In this way, the output of the decision tree was f(x) = Σ_{i∈V} xi, where V = {i ∈ N | (x̄_i^(c) > 0.5) & (xi > 0.5) = True} denotes the set of all pixels that satisfied the above criterion. For inference, the probability of x belonging to the category c was p(c|x) = sigmoid(γ(f(x) − β)), where γ = 40 was a constant and β ∝ Σ_{i∈N} 1[x̄_i^(c) > 0.5]. In this case, we defined v(xN) = log [p(c|x) / (1 − p(c|x))]. Thus, the co-appearance of pixels in V formed a causal pattern contributing to v(xN). In other words, because xi ≥ 0 for all i ∈ N, the absence of any pixel in V might deactivate this pattern by leading to a small probability p(c|x) < 0.5 and a small v. This pattern can also be understood as an AND node in the And-Or decision tree (Song et al., 2013). In the above decision tree, the ground-truth Shapley values of input variables (pixels) were easy to determine. The above decision tree ensured that the absence of any variable in V would deactivate the causal pattern. Therefore, according to Theorem 2 in the paper, the output probability should be fairly assigned to the pixels in V, i.e., they shared the same Shapley value ϕ̂_i = v(xN)/|V|. For other pixels that were not contained in the output, their ground-truth Shapley values were zero. We estimated Shapley values of input variables in the above decision tree by using zero baseline values, mean baseline values, baseline values in SHAP, and the baseline values learned by our method, respectively. Let ϕ_i denote the estimated Shapley value of the variable i. If |ϕ_i − ϕ̂_i| ≤ 0.01, we considered the estimated Shapley value ϕ_i correct; otherwise, incorrect. In this way, we computed the accuracy of the estimated Shapley values, and Table 7 shows that our method achieved the highest accuracy.

C REMOVING ADVERSARIAL PERTURBATIONS FROM THE INPUT

Let x denote the normal sample, and let x_adv = x + δ denote the adversarial example generated by (Madry et al., 2018). According to (Ren et al., 2021), the adversarial example x_adv mainly created out-of-distribution bivariate interactions with high-order contexts, which were actually related to the high-order interactions (causal patterns) in this paper. Thus, in the scenario of this study, the adversarial utility was due to out-of-distribution high-order interactions (causal patterns). The removal of input variables was supposed to remove most high-order causal patterns. Therefore, the baseline value can be considered as a recovery of the original sample. In this way, we used the adversarial example x_adv to initialize baseline values before learning, and used L_marginal to learn baseline values. If the learned baseline values b satisfy ∥b − x∥₁ ≤ ∥x_adv − x∥₁, we considered that our method successfully recovered the original sample to some extent. We conducted experiments using LeNet, AlexNet (Krizhevsky et al., 2012), and ResNet-20 on the MNIST dataset (∥δ∥∞ ≤ 32/255) and the CIFAR-10 dataset (∥δ∥∞ ≤ 8/255). Table 8 shows that our method recovered original samples from adversarial examples, which demonstrated the effectiveness of our method.

D AXIOMS OF THE SHAPLEY VALUE

The Shapley value (Shapley, 1953) was first introduced in game theory to measure the contribution of each player in a game. Given an input x with n input variables, i.e., x = [x1, . . . , xn], we can consider a deep model as a game with n players N = {1, 2, · · · , n}. Each player i is an input variable xi (e.g. an input dimension, a pixel, or a word).
In this way, the problem of fairly estimating attributions of input variables in the DNN is equivalent to the problem of fairly assigning the total reward in the game to each player. The Shapley value is widely considered a fair attribution method, because it satisfies the following four axioms (Weber, 1988). (1) Linearity axiom: If two games can be merged into a new game u(xS) = v(xS) + w(xS), then Shapley values in the two old games also can be merged, i.e. ∀i ∈ N , ϕi,u = ϕi,v + ϕi,w. (2) Dummy axiom and nullity axiom: The dummy player i is defined as a player without any interactions with other players, i.e. satisfying ∀S ⊆ N \ {i}, v(xS∪{i}) = v(xS) + v(x{i}). Then, the dummy player’s Shapley value is computed as ϕi = v(x{i}). The null player i is defined as a player that satisfies ∀S ⊆ N \ {i}, v(xS∪{i}) = v(xS). Then, the null player’s Shapley value is ϕi = 0. 2For Table 7, the splitting criterion was designed as (x̄(c)i > 0.5). (3) Symmetry axiom: If ∀S ⊆ N \ {i, j}, v(xS∪{i}) = v(xS∪{j}), then ϕi = ϕj . (4) Efficiency axiom: The overall reward of the game is equal to the sum of Shapley values of all players, i.e. v(xN )− v(x∅) = ∑ i∈N ϕi. E PROOFS OF THEOREMS This section provides proofs of theorems in the main paper. E.1 PROOF OF THEOREM 1 Theorem 1 (Faithfulness, proven by Ren et al. (2023a)) Let us consider a DNN v and an input sample x with n input variables. We can generate 2n different masked samples, i.e., {xS |S ⊆ N}. The DNN’s outputs on all masked samples can always be well mimicked as the sum of the triggered interaction effects in Eq. (3), i.e., ∀S⊆N, v(xS) = ∑ S′⊆S US′ . Proof: According to the definition of the Harsanyi dividend, we have ∀S ⊆ N ,∑ S′⊆S US′ = ∑ S′⊆S ∑ L⊆S′ (−1)|S ′|−|L|v(xL) = ∑ L⊆S ∑ S′⊆S:S′⊇L (−1)|S ′|−|L|v(xL) = ∑ L⊆S |S|∑ s′=|L| ∑ S′⊆S:S⊇L |S′|=s′ (−1)s ′−|L|v(xL) = ∑ L⊆S v(xL) |S|−|L|∑ m=0 ( |S| − |L| m ) (−1)m = v(xS) E.2 PROOF OF THEOREM 2 Theorem 2 Harsanyi dividends can be considered as causal patterns of the Shapley value. ϕi = ∑ S⊆N\{i} 1 |S|+ 1 US∪{i} (8) In this way, the effect of an causal pattern consisting of m variables can be fairly assigned to the m variables. This connection has been proved in (Harsanyi, 1982). • Proof: right = ∑ S⊆N\{i} 1 |S|+ 1US∪{i} = ∑ S⊆N\{i} 1 |S|+ 1 ∑ L⊆S (−1)|S|+1−|L|v(L) + ∑ L⊆S (−1)|S|−|L|v(L ∪ {i}) = ∑ S⊆N\{i} 1 |S|+ 1 ∑ L⊆S (−1)|S|−|L| [v(L ∪ {i})− v(L)] = ∑ L⊆N\{i} ∑ K⊆N\L\{i} (−1)|K| |K|+ |L|+ 1 [v(L ∪ {i})− v(L)] % Let K = S \ L = ∑ L⊆N\{i} n−1−|L|∑ k=0 (−1)k k + |L|+ 1 ( n− 1− |L| k ) [v(L ∪ {i})− v(L)] % Let k = |K| = ∑ L⊆N\{i} |L|!(n− 1− |L|)! n! [v(L ∪ {i})− v(L)] % by the property of combinitorial number = ϕi = left Table 9: Comparison between ground-truth baseline values and incorrect baseline values. The last column shows ratios of causal patterns of different orders rm = ∑ S⊆N,|S|=m |US |∑ S⊆N,S ̸=∅ |US | . We consider interactions of input samples that activate causal patterns. We find that when models/functions contain a single complex collaborations between multiple variables (i.e. high-order causal patterns), incorrect baseline values usually generate a mixture of many low-order causal patterns. In comparison, ground-truth baseline values lead to sparse and high-order causal patterns. 
Table 9 body. The functions are defined over binary variables (∀i ∈ N, xi ∈ {0, 1}); the last column of Table 9 plots the ratios r_m of causal patterns of each order under the ground-truth, learned, and zero baseline values (plots omitted here).
• f(x) = x1x2x3x4x5, evaluated at x = [1, 1, 1, 1, 1]. Ground truth: b∗ = [0, 0, 0, 0, 0]; incorrect: b(1) = [0.5, 0.5, 0.5, 0.5, 0.5], b(2) = [0.1, 0.2, 0.6, 0.0, 0.1], b(3) = [0.7, 0.1, 0.3, 0.5, 0.1].
• f(x) = sigmoid(5x1x2x3 + 5x4 − 7.5), evaluated at x = [1, 1, 1, 1]. Ground truth: b∗ = [0, 0, 0, 0]; incorrect: b(1) = [0.5, 0.5, 0.5, 0.5], b(2) = [0.6, 0.4, 0.7, 0.3], b(3) = [0.3, 0.6, 0.5, 0.8].
• f(x) = x1(x2 + x3 − x4)^3, evaluated at x = [1, 1, 1, 0]. Ground truth: b∗ = [0, 0, 0, 1]; incorrect: b(1) = [0.5, 0.5, 0.5, 0.5], b(2) = [0.2, 0.3, 0.6, 0.1], b(3) = [1.0, 0.3, 1.0, 0.1].

E.3 PROOF OF REMARK 2, THEOREM 3, AND THEOREM 4

Remark 2 Let us consider a function with a single causal pattern f(xS) = wS ∏_{j∈S} (xj − δj). Accordingly, the ground-truth baseline values of variables are obviously {δj}, because setting any variable xj = δj, j ∈ S, will deactivate this pattern. Given the correct baseline values b∗_j = δj, we can use a single causal pattern to regress f(xS), i.e., US = f(xS) and US′ = 0 for all S′ ≠ S.

Theorem 3 For the function f(xS) = wS ∏_{j∈S} (xj − δj), if we use m′ incorrect baseline values {b′_j | b′_j ≠ δj} to replace correct ones to compute causal effects, then the function will be explained to contain at most 2^{m′} causal patterns.

Theorem 4 If we use m′ incorrect baseline values to compute causal effects in the function f(xS) = wS ∏_{j∈S} (xj − δj), a total of C(m′, k − |S| + m′) causal patterns of the k-th order emerge, k ≥ |S| − m′. A causal pattern of the k-th order means that this causal pattern represents the AND relationship between k variables.

• Theoretical proof: Without loss of generality, let us consider an input sample x with xj ≠ δj for all j ∈ S. Based on the ground-truth baseline values {δj}, we have
(1) v(xS) = f(xS) = wS ∏_{j∈S} (xj − δj) ≠ 0,
(2) ∀S′ ⊊ S, v(xS′) = wS ∏_{j∈S′} (xj − δj) ∏_{k∈S\S′} (δk − δk) = 0.
Accordingly, we have US = Σ_{S′⊆S} (−1)^{|S|−|S′|} v(xS′) = v(xS) ≠ 0. For S′ ⊊ S, we have US′ = Σ_{L⊆S′} (−1)^{|S′|−|L|} v(xL) = Σ_{L⊆S′} 0 = 0.
(3) ∀S′ ≠ S, let S′ = L ∪ M, where L ⊆ S and M ∩ S = ∅. Then, we have
US′ = Σ_{T⊆S′} (−1)^{|S′|−|T|} v(xT)
= Σ_{L′⊆L, L′≠∅} (−1)^{|S′|−|L′|} v(xL′) + Σ_{M′⊆M, M′≠∅} (−1)^{|S′|−|M′|} v(xM′) [where v(xM′) = v(x∅) = 0] + Σ_{L′⊆L, M′⊆M, L′≠∅, M′≠∅} (−1)^{|S′|−|L′|−|M′|} v(xL′∪M′) [where v(xL′∪M′) = v(xL′)] + (−1)^{|S′|} v(x∅) [= 0]
= Σ_{L′⊆L, L′≠∅} (−1)^{|S′|−|L′|} v(xL′) + Σ_{L′⊆L, M′⊆M, L′≠∅, M′≠∅} (−1)^{|S′|−|L′|−|M′|} v(xL′)
= (−1)^{|S′|−|S|} v(xS) + Σ_{M′⊆M, M′≠∅} (−1)^{|S′|−|S|−|M′|} v(xS)   % v(xL′) ≠ 0 only if L′ = S
= Σ_{M′⊆M} (−1)^{|S′|−|S|−|M′|} v(xS) = 0
Therefore, there is only one causal pattern with a non-zero effect US. In comparison, if we use m′ incorrect baseline values {δ′_j}, where Σ_{j∈S} 1[δ′_j ≠ δj] = m′, then the function will be explained to contain at most 2^{m′} causal patterns. For the simplicity of notations, let S = {1, 2, ..., m}, and δ′_1 = δ1 + ϵ1, ..., δ′_{m′} = δ_{m′} + ϵ_{m′}, where ϵ1, ..., ϵ_{m′} ≠ 0. Let T = {1, 2, . . . , m′}.
In this case, we have (1) v(xS) = f(xS) ̸= 0 (2) ∀S′ ⊊ S, |S′| < m−m′, v(xS′) = wS ∏ j∈S′(xj − δj) ∏ l∈S\S′(δ ′ l − δl). Because |S| − |S′| > m′, there is at least one variable with ground-truth baseline value in S \ S′. Therefore, v(xS′) = 0. Furthermore, US′ = ∑ L⊆S′(−1) |S′|−|L|v(xL) = 0 (3) ∀S′ ⊊ S, |S′| = k ≥ m − m′, v(xS′) = wS ∏ j∈S′(xj − δj) ∏ l∈S\S′(δ ′ l − δl). If S \ T ⊆ S′, then S \ S′ ⊆ T and v(xS′) ̸= 0. Otherwise, v(xS′) = 0. Then, US′ = ∑ L⊆S′ (−1)|S ′|−|L|v(xL) = ∑ L⊆S′,|L|<m−m′ (−1)|S ′|−|L|v(xL) + ∑ L⊆S′,L≥m−m′ (−1)|S ′|−|L|v(xL) = 0 + ∑ L⊆S′,L≥m−m′,L⊇S\T (−1)|S ′|−|L|v(xL) + ∑ L⊆S′,L≥m−m′,L⊉S\T (−1)|S ′|−|L|v(xL) = ∑ L⊆S′,L≥m−m′,L⊇S\T (−1)|S ′|−|L|v(xL) If the above US′ = 0, it indicates that S\T ⊈ S′. In this case, there is no subset L ⊆ S′ s.t. S\T ⊆ L. In other words, only if S \ T ⊆ S′, US′ ̸= 0. In this way, a total of ( m′ k−(|S|−m′) ) causal patterns of the k-th order emerge, where the order k of a causal pattern means that this causal pattern S′ contains k = |S′| variables. There are totally ∑m k=|S|−m′ ( m′ k−(|S|−m′) ) = 2m ′ causal patterns in x. For example, if the input x is given as follows, xi = { δi + 2ϵi, i ∈ T = {1, . . . ,m′} δi + ϵi, i ∈ S \ T = {m′ + 1, . . . ,m} where ϵi ̸= 0 are arbitrary non-zero scalars. In this case, we have ∀S′ ⊆ T,US′∪{m′+1,...,m} = ϵ1ϵ2...ϵm ̸= 0. Besides, if {m′ + 1, ...,m} ⊈ S′, we have US′ = 0. In this way, there are totally 2m ′ causal patterns in x. • Experimental verification: We further conducted experiments to show that the incorrect setting of baseline values makes a model/function consisting of high-order causal patterns be mistakenly explained as a mixture of low-order and high-order causal patterns. To show this phenomenon, we compare causal patterns computed using ground-truth baseline values and incorrect baseline values in Table 9, and the results verify our conclusion. We find that when models/functions contain complex collaborations between multiple variables (i.e. high-order causal patterns), incorrect baseline values usually generate fewer high-order causal patterns and more low-order causal patterns than ground-truth baseline values. In other words, the model/function is explained as massive low-order causal patterns. In comparison, ground-truth baseline values lead to sparse and high-order salient patterns. F PROVING THAT MASKING INPUT VARIABLES REMOVES CAUSAL EFFECTS In this section, we prove that for the causal pattern S ∋ i, if the input variable i is masked, then the causal effect wS = 0. Proof: let S = S′ ∪ {i}. If i ∈ S is masked, then ∀L s.t. i /∈ L,xL = xL∪{i}. Therefore, v(L ∪ {i}) = v(L). According to the definition of Harsanyi dividend (Harsanyi, 1982), we have US = ∑ L⊆S (−1)|S|−|L|v(L) = ∑ L⊆(S′∪{i}) (−1)|S ′|+1−|L|v(L) = ∑ L⊆S′ (−1)|S ′|+1−|L|v(L) + ∑ L⊆S′ (−1)|S ′|−|L|v(L ∪ {i}) = ∑ L⊆S′ (−1)|S ′|+1−|L|v(L) + ∑ L⊆S′ (−1)|S ′|−|L|v(L) = ∑ L⊆S′ ( (−1)|S ′|+1−|L| + (−1)|S ′|−|L| ) v(L) = ∑ L⊆S′ (−1 + 1)(−1)|S ′|−|L|v(L) = 0 Note that the causal pattern not containing i will not be deactivated by the masking of i. For example, {eyes, beak} is not deactivated by the absence of forehead, because this pattern represents the AND relationship between eyes and beak, and it does not contain forehead. G MORE EXPERIMENTAL DETAILS AND RESULTS G.1 VERIFICATION OF THE SPARSITY OF CAUSAL PATTERNS In this subsection, we conducted experiments to verify the sparsity of causal effects, which is introduced in Remark 1. To this end, we computed causal effects US of all 2n causal patterns encoded by a DNN. 
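For illustration, such an exhaustive enumeration of causal effects can be sketched for a tiny toy model. The sketch below is only an illustrative assumption (the function v, the input x, and the baseline b are made up for exposition and are not the models or data used in these experiments); it computes the Harsanyi dividend U_S = Σ_{L⊆S} (−1)^{|S|−|L|} v(x_L) of every subset and numerically checks Theorem 1 (faithfulness) and Theorem 2 (Shapley values as uniform splits of dividends).

```python
# Toy illustration (not the experimental code): enumerate all 2^n causal patterns
# of a small model v and compute their Harsanyi dividends U_S.
import itertools
import numpy as np

def v(x_masked):
    # Hypothetical scalar model used only for illustration.
    return x_masked[0] * x_masked[1] * x_masked[2] + 0.5 * x_masked[3]

def masked_input(x, b, subset):
    # Keep variables in `subset` at their true values, set the rest to baseline b.
    out = np.array(b, dtype=float)
    out[list(subset)] = np.asarray(x, dtype=float)[list(subset)]
    return out

def harsanyi_dividends(x, b, v):
    n = len(x)
    subsets = [frozenset(c) for r in range(n + 1)
               for c in itertools.combinations(range(n), r)]
    vals = {S: v(masked_input(x, b, S)) for S in subsets}        # v(x_S) for all S
    U = {S: sum((-1) ** (len(S) - len(L)) * vals[L]
                for r in range(len(S) + 1)
                for L in map(frozenset, itertools.combinations(sorted(S), r)))
         for S in subsets}
    return U, vals

x, b = [1.0, 1.0, 1.0, 1.0], [0.0, 0.0, 0.0, 0.0]
U, vals = harsanyi_dividends(x, b, v)

# Theorem 1 (faithfulness): v(x_S) equals the sum of dividends of all subsets of S.
S = frozenset({0, 1, 2})
assert abs(sum(U[L] for L in U if L <= S) - vals[S]) < 1e-8

# Theorem 2: phi_i = sum over patterns T containing i of U_T / |T|.
phi = [sum(U[T] / len(T) for T in U if i in T) for i in range(len(x))]
print(phi)  # most dividends are ~0; the salient pattern {0,1,2} is split evenly
```

The number of masked inference calls grows as 2^n, so this exhaustive enumeration is only feasible for inputs with a small number of variables.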
Specifically, we trained a three-layer MLP on the income dataset and computed causal effects in the model. Figure 4 shows the distribution of absolute causal effects |US| of causal patterns in the first five samples of each category of the income dataset. These results show that most causal patterns had insignificant causal effects, US ≈ 0. Only a few causal patterns had salient causal effects. Moreover, we also conducted experiments to demonstrate the universality of this phenomenon. We trained the five-layer MLP, CNN, LSTM, ResNet-32, and VGG-16 on the UCI census income dataset, the UCI TV news channel commercial detection dataset, the SST-2 dataset, and the MNIST dataset, respectively. Figure 5 shows the absolute causal effects |US| in descending order. These results show that various DNNs learned on different tasks could be explained by a set of sparse causal patterns.

G.2 VERIFICATION OF USING CAUSAL PATTERNS TO EXAMINE THE STATE OF INPUT VARIABLES

In this subsection, we conducted experiments to verify that causal patterns reflect whether the information originally encoded in the input has been removed. Given causal effects US in the normal input image and causal effects US^(noise) in the white noise input, we compared their distributions in Figure 6. Note that we assumed that the white noise input naturally contained less information for classification than the normal input image. We found that most causal effects in the white noise input were close to zero, and there were few salient causal patterns. Besides, we computed the average strength of causal effects in the above two inputs. In the normal input, the average strength of causal effects was E_{S⊆N} |US| = 5.5285, while in the white noise input, the average strength was much smaller, E_{S⊆N} |US^(noise)| = 0.2321. These results indicated that salient causal patterns could reflect the information encoded in the input.

G.3 EFFECTS OF THE PROPOSED METHOD ON MULTI-ORDER SHAPLEY VALUES AND MULTI-ORDER MARGINAL BENEFITS

In this section, we conducted experiments to verify that baseline values b learned by the proposed loss function in Eq. (7) could effectively reduce the causal effects of low-order causal patterns in Eq. (5). To this end, we computed the metric E_x[(E_{|S|=m} |US|) / (v(xN) − v(x∅))] to measure the relative strength of causal patterns of a specific order m, in order to evaluate the effectiveness of baseline values. Fig. 7(a) shows that, compared to zero baseline values, our method effectively reduced low-order causal patterns. In addition, Fig. 8 and Fig. 7(b) verify that the loss L_Shapley in Eq. (7) reduced the number of salient causal patterns in Ω, which means L_Shapley avoided the exponential number of causal patterns caused by incorrect baseline values.

G.4 DISCUSSION ABOUT THE SETTING OF GROUND-TRUTH BASELINE VALUES

This section discusses the ground truth of baseline values of the synthetic functions in Section 5.1 of the main paper. In order to verify the correctness of the learned baseline values, we conducted experiments on synthetic functions with ground-truth baseline values. We randomly generated 100 functions whose causal patterns and ground truth of baseline values could be easily determined.

[Figure 8: Distribution of causal effects US of causal patterns in 20 samples in the credit dataset.]

As Table 10 shows, the generated functions were composed of addition, subtraction, multiplication, exponentiation, and sigmoid operations.
The ground truth of baseline values in these functions was determined based on causal patterns between input variables. In order to represent the absence states of variables, baseline values should activate as few salient patterns as possible, where activation states of causal patterns were considered as the most infrequent state. Thus, we first identified the activation states of causal patterns of variables, and the ground-truth of baseline values was set as values that inactivated causal patterns under different masks. We took the following examples to discuss the setting of ground-truth baseline values (in the following examples, ∀i ∈ N, xi ∈ {0, 1} and b∗i ∈ {0, 1}). • f(x) = x1x2x3 + sigmoid(x4 + x5 − 0.5) · · · . Let us just focus on the term of x1x2x3 in f(x). The activation state of this causal pattern is x1x2x3 = 1 when ∀i ∈ {1, 2, 3}, xi = 1. In order to inactivate the causal pattern, we set ∀i ∈ {1, 2, 3}, b∗i = 0. • f(x) = −x1x2x3 + (x4 + x5)3 + · · · . Let us just focus on the term of −x1x2x3 in f(x). The activation state of this causal pattern is −x1x2x3 = −1 when ∀i ∈ {1, 2, 3}, xi = 1. In order to inactivate the causal pattern, we set ∀i ∈ {1, 2, 3}, b∗i = 0. • f(x) = (x1 + x2 − x3)3 + · · · . Let us just focus on the term of (x1 + x2 − x3)3 in f(x). The activation state of this causal pattern is (x1 + x2 − x3)3 = 8 when x1 = x2 = 1, x3 = 0. In order to inactivate the causal pattern under different masks, we set b∗1 = b ∗ 2 = 0, b ∗ 3 = 1. • f(x) = sigmoid(3x1x2 − 3x3 − 1.5) + · · · . Let us just focus on the term of sigmoid(3x1x2 − 3x3 − 1.5) in f(x). In this case, x1, x2, x3 form a salient causal pattern because sigmoid(3x1x2 − 3x3 − 1.5) > 0.5 only if x1 = x2 = 1 and x3 = 0. Thus, in order to
1. What is the focus and contribution of the paper on interpretable machine learning? 2. What are the strengths of the proposed approach, particularly in representing the absence of input variables and verifying the faithfulness of baseline values? 3. What are the weaknesses of the paper regarding its claims, experiments, and comparisons with other works? 4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? 5. Are there any concerns or questions regarding the initialization of the masking method or the evaluation metric for Shapley values?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper
Aiming to investigate how to represent the absence of input variables and verify the faithfulness of baseline values, this paper proposes to use causal patterns to examine whether the masking method faithfully removes information encoded in input variables, and a method to learn the optimal baseline values. Experimental results have demonstrated the effectiveness of the proposed method.
Strengths And Weaknesses
Strong Points:
[1] How to represent the absence of input variables and verify the faithfulness of baseline values of the Shapley value is a challenging and crucial task for interpretable ML.
[2] The approach of estimating optimal baseline values for Shapley values can ensure the trustworthiness of the attribution.
Weak Points:
[1] The authors did not properly back up some of their claims with evidence. For instance, “However, we find that most existing masking methods are not satisfactory from this perspective.” and “However, empirically, this method actually introduces additional information to the input.” lack key references to support the claims.
[2] In this paper, how the quantity in Eq. (2) that is used to mask variables to represent their absence is initialized is not reported.
[3] How is the accuracy of the Shapley value evaluated? More details about the metric are needed.
[4] The computational time and details of the software environment are not reported, especially the time cost or complexity of computing baseline values via the approximate-yet-efficient solution.
[5] Many of the references should be further improved. For example, a few references are missing page numbers. In addition, the authors may try to discuss the existing work in published papers (rather than a number of preprint references from arXiv).
Clarity, Quality, Novelty And Reproducibility
It is interesting to study interpretable ML. For reproducibility, the experimental details need further description.
ICLR
Title In-distribution and Out-of-distribution Generalization for Graph Neural Networks Abstract Graph neural networks (GNNs) are models that allow learning with structured data of varying size. Despite their popularity, theoretical understanding of the generalization of GNNs is an under-explored topic. In this work, we expand the theoretical understanding of both in-distribution and out-of-distribution generalization of GNNs. Firstly, we improve upon the state-of-the-art PAC-Bayes (in-distribution) generalization bound primarily by reducing an exponential dependency on the node degree to a linear dependency. Secondly, utilizing tools from spectral graph theory, we prove rigorous guarantees about the out-of-distribution (OOD) size generalization of GNNs, where graphs in the training set have different numbers of nodes and edges from those in the test set. To empirically verify our theoretical findings, we conduct experiments on both synthetic and real-world graph datasets. Our computed generalization gaps for the in-distribution case significantly improve the state-of-the-art PAC-Bayes results. For the OOD case, experiments on community classification tasks in large social networks show that GNNs achieve strong size generalization performance in cases guaranteed by our theory. 1 INTRODUCTION Graph neural networks (GNNs), first proposed in Scarselli et al. (2008), generalize artificial neural networks from processing fixed-size data to processing arbitrary graph-structured or relational data, which can vary in terms of the number of nodes, the number of edges, and so on. GNNs and their modern variants (Bronstein et al., 2017; Battaglia et al., 2018) have achieved state-of-the-art results in a wide range of application domains, including social networks (Hamilton et al., 2017), materials science (Xie & Grossman, 2018), drug discovery (Wieder et al., 2020), autonomous driving (Liang et al., 2020), quantum chemistry (Gilmer et al., 2020), and particle physics (Shlomi et al., 2020). Despite their empirical successes, the theoretical understanding of GNNs is somewhat limited. Existing works largely focus on analyzing the expressiveness of GNNs. In particular, Xu et al. (2018) show that GNNs are as powerful as the Weisfeiler-Lehman (WL) graph isomorphism test (Weisfeiler & Leman, 1968) in distinguishing graphs. Chen et al. (2019) further demonstrate an equivalence between graph isomorphism testing and universal approximation of permutation-invariant functions. Loukas (2019) shows that GNNs with certain conditions (e.g., on depth and width) are Turing universal. Chen et al. (2020) and Xu et al. (2020a) respectively examine whether GNNs can count substructures and perform algorithmic reasoning. In the vein of statistical learning theory, generalization analyses for GNNs have been developed to bound the gap between training and testing errors using VC-dimension (Vapnik & Chervonenkis, 1971), Rademacher complexity (Bartlett & Mendelson, 2002), algorithmic stability (Bousquet & Elisseeff, 2002), and PAC-Bayes (McAllester, 2003) (a Bayesian extension of PAC learning (Valiant, 1984)). Depending on whether the problem setup is in-distribution (ID) or out-of-distribution (OOD), i.e., whether test data comes from the same distribution as training data, we categorize the literature into two groups. ID Generalization Bounds. Scarselli et al.
(2018) provide a VC-dimension based generalization bound for GNNs whereas Verma & Zhang (2019) present the stability-based generalization analysis for singlelayer graph convolutional networks (GCNs) (Kipf & Welling, 2016). Both consider node classification and assume the node features are independent and identically-distributed (IID), which conflicts with the common relational learning setup (e.g., semi-supervised node classification) at which GNNs excel. Relying on the neural tangent kernel (NTK) approach (Jacot et al., 2018), Du et al. (2019) characterize the generalization bound of infinite-width GNNs on graph classification. Garg et al. (2020) derive the Rademacher complexity based bound for message passsing GNNs on graph classification. Lv (2021) establish results for GCNs on node classification using Rademacher complexity as well. Based on PAC-Bayes, Liao et al. (2020) obtain a tighter bound for both GCNs and message passsing GNNs on graph classification compared to (Garg et al., 2020; Scarselli et al., 2018). Subsequently, Ma et al. (2021) also leverage PAC-Bayes and show generalization guarantees of GNNs on subgroups of nodes for node classification. More recently, Li et al. (2022) study the effect of graph subsampling in the generalization of GCNs. OOD Generalization Yehudai et al. (2021) study size generalization for GNNs — this is a specific OOD setting where training and testing graphs differ in the number of nodes and edges. They show negative results that specific GNNs can perfectly fit training graphs but fails on OOD testing ones. Baranwal et al. (2021) consider specific graph generative models, i.e., the contextual stochastic block model (CSBM) (Deshpande et al., 2018), where CSBMs during training and testing are of the same means but different number of nodes, intra-, and inter-class edge probabilities. They present generalization guarantees for single-layer GCNs on binary node classification tasks. Later, Maskey et al. (2022) assume yet another class of graph generative models, i.e., graphons, where the kernel is shared across training and testing but the number of nodes and edges could vary. They obtain generalization bounds of message passing GNNs on graph classification and regression that depend on the Minkowski dimension of the node feature space. Relying on a connection of over-parameterized networks and neural tangent kernel, Xu et al. (2020b) find that taskspecific architecture/feature designs help GNNs extrapolate to OOD algorithmic tasks. Wu et al. (2022a) propose explore-to-extrapolate risk minimization framework, for which the solution is proven to provide an optimal OOD model under the invariance and heterogeneity assumptions. Yang et al. (2022) propose a two-stage model that both infers the latent environment and makes predictions to generalize to OOD data. Empirical studies suggest it works well on real-world molecule datasets. Wu et al. (2022b) study a new objective that can learn invariant and causal graph features that generalize well to OOD data empirically. All above works follow the spirit of invariant risk minimization (Arjovsky et al., 2019) and focus on designing new learning objectives. Instead, we provide generalization bound analysis from the traditional statistical learning theory perspective. Our Contributions. In this paper, we study both in-distribution and out-of-distribution generalization for GNNs. 
For in-distribution graph classification tasks, we significantly improve the previous state-of-the-art PAC-Bayes results in (Liao et al., 2020) by decreasing an exponential dependency on the maximum node degree to a linear dependency. For OOD node classification tasks, we do not assume any known graph generative models which is in sharp contrast to the existing work. We instead assume GNNs are trained and tested on subgraphs that are sampled via random walks from a single large underlying graph, as an efficient means to generate a connected subgraph. We identify interesting cases where a graph classification task is theoretically guaranteed to perform well at size generalization, and derive generalization bounds. We validate our theoretical results by conducting experiments on synthetic graphs, and also explore size generalization on a collection of real-world social network datasets. In the in-distribution case, we observe an improvement of several orders of magnitude in numerical calculations of the generalization bound. In the out-of-distribution case, we validate that, in cases where the theory guarantees that size generalization works well, the prediction accuracy on large subgraphs is always comparable to the accuracy on small subgraphs, and in many cases is actually better. (a) An example of a small expander graph. Any labelling of its nodes cannot exhibit homophily. (b) Example of a small barbell graph. If a labelling is exactly differentiated between the two groups, then it exhibits homophily. 2 BACKGROUND INFORMATION A graph G is an abstract mathematical model for pairwise relationships, with a set of vertices V and a set of edges E ⊆ V × V . Two vertices v1, v2 are said to be connected if (v1, v2) ∈ E. For a given graph G ∈ G we can also denote its vertices by V (G) and edges E(G). Unless otherwise specified, we assume graphs are undirected and without multi-edges. In machine learning, a graph (or graph-structured data) typically come with a set of node features. Common graph based machine learning tasks include node classification (or regression) and graph classification (or regression). We use the following notation. • Graph data {Gi = (Vi, Ei)}mi=1 ∈ G, where G is the set of all graphs. The neighborhood of a vertex v is denoted N (v) = {u ∈ V (Gi) : (v, u) ∈ E(Gi)}. • Node feature xv : V → X , with X being the feature space, e.g., X = Rdv . • Node labels y : V → Y , with Y being the set of labels, e.g., Y = [n]. Graph neural networks (GNNs). GNNs generalize regular neural networks to process data with varying structures and dependencies. GNNs achieve this flexibility via a message passing computational process. In particular, at the k-th step (or layer) of message passing, we update the representation h(k+1)u of node u as follows, h(k+1)u = UPDATE(h (k) u ,AGGREGATE({h(k)v |v ∈ N (u)})). (1) This update happens for all nodes in parallel within each message passing step. Moreover, the UPDATE and AGGREGATE operators are shared by all nodes, which enables the same GNN to process varyingsized graphs. Once we have finished the finite-step message passing process, we can use the output node representations to make predictions on nodes, edges, and the graph via additionally parameterized readout functions. This message passing framework is quite general since one can instantiate the UPDATE and AGGREGATE operators by different neural networks. 
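As a concrete illustration of Eq. (1), one message-passing step with a sum AGGREGATE and a single linear UPDATE can be sketched as follows. This is only an illustrative sketch with made-up weights and a toy graph; it is not the architecture used in the experiments.

```python
# Minimal numpy sketch of one message-passing step in the style of Eq. (1).
import numpy as np

def message_passing_step(H, adj, W_self, W_neigh):
    """H: (num_nodes, d) node representations; adj: (num_nodes, num_nodes) 0/1 matrix."""
    aggregated = adj @ H                     # AGGREGATE: sum over each node's neighbours
    # UPDATE: shared weights applied to the node's own state and its aggregated messages
    return np.maximum(0.0, H @ W_self + aggregated @ W_neigh)   # ReLU nonlinearity

# Toy usage on a 4-node path graph with 3-dimensional node features.
rng = np.random.default_rng(0)
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
H0 = rng.normal(size=(4, 3))
W_self, W_neigh = rng.normal(size=(3, 3)), rng.normal(size=(3, 3))
H1 = message_passing_step(H0, adj, W_self, W_neigh)
graph_repr = H1.mean(axis=0)   # a simple mean readout for a graph-level prediction
```

Because the same weights are applied to every node, the same step works unchanged for graphs of any size, which is what enables varying-size inputs.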
For example, the widely used Graph Convolutional Networks (GCNs) (Kipf & Welling, 2016), which are the main interest of our work, have the form h(k+1)u = σ Wk ∑ v∈N (u)∪{u} h (k) v√ |N (u)| √ |N (v)| (2) where one applies a linear transformation (Wk) to all node representations, a weighted-sum over the neighborhood, and an element-wise nonlinearity (e.g., ReLU activation). Note that the learnable weights Wk are different from layer to layer. Homophily. A concept studied in network science, homophily (McPherson et al., 2001) is the property that similar nodes group together. For node classification (or node labelling), this means that neighbouring nodes tend to have the same label. Size generalization is plausible when the labelling of the nodes exhibits homophily. The presence of a homophilic graph labelling implies that the labels of the nodes are unlikely to change during the course of a long random walk on the graph. It is important to note that homophily is also a concept that relates to the graph topology, as not every possible graph structure can be given a labelling that exhibits homophilic properties. An example of one such topology where homophily is impossible is an expander graph (Hoory et al., 2006), as shown in Figure 1a, where nodes have either random or random-like edges connected to a constant number of other nodes in the entire graph. In this case, any labelling of the nodes is far from homophilic, as can be shown using the expansion property. A setting with more homophily is akin to a barbell graph, as shown in Figure 1b, where there are two densely connected components, and comparatively few edges connecting the two dense regions. If the graph labelling of interest lines up with these divisions inherent in the topology, then it is natural to say that it exhibits a homophilic property. Cheeger’s Inequality. A mathematical description of homophily can be given using concepts from spectral graph theory. Cheeger’s inequality (Hoory et al., 2006) is a theorem that pertains to partitions of graphs, or equivalently binary-valued labellings on graphs (one side of the partition is labelled 0, the other 1). A crucial definition is the conductance, defined by ϕ(S) = |E(S, S̄)| |S| ∀S ⊆ V and ϕ(G) = min |S|≤ |V |2 ϕ(S). Here E(S, S̄) is the set of edges connecting a node in S to a node outside of S. Cheeger’s inequality states λ2/2 ≤ ϕ(G) ≤ √ 2λ2, where λ2 is the second-smallest eigenvalue of the normalized Laplacian1 L̃. This inequality links the realvalued quantity λ2 to the concept of homophily. If λ2 is small then the conductance of G must also be low, by Cheeger’s inequality. If a labelling on graph nodes f : V (G) → {0, 1} roughly agrees with a low-conductance partition (i.e., one side of the partition S is generally labelled 0 and the complement S̄ is generally labelled 1) then the labelling f exhibits homophily. 3 IMPROVEMENT OF IN-DISTRIBUTION PAC-BAYES BOUND The state-of-the-art generalization bounds for GNNs in the in-distribution case were formulated by Liao et al. (2020) using the PAC-Bayes theory. Specifically, they build upon the PAC-Bayes theorem in (Neyshabur et al., 2018) that pertains to homogeneous feedforward neural networks. We denote one sample as z = (X,A, y) where X ∈ X , A ∈ G, and y ∈ Y are the node features, the adjacency matrix, and the graph label respectively. Each sample is drawn from some unknown data distribution D (with support X ×G ×Y) in an i.i.d. fashion. 
Since both training and testing samples are drawn from the same distribution, this is the in-distribution setup. Following (Liao et al., 2020), we consider a margin loss for multi-class graph classifications as below, LD,γ = LD,γ(fw) = Pz∼D ( fw(X,A)[y] ≤ γ +max j ̸=y fw(X,A)[j] ) (3) where γ > 0 is the margin parameter and fw is the model (hypothesis) parameterized by weights w. Since D is unknown, we can not compute this true loss (risk). We instead minimize the empirical loss (risk) that is defined on the sampled training set S as below, LS,γ = LS,γ(fw) = 1 m ∑ z∈S 1 ( fw(Xi, Ai)[y] ≤ γ +max j ̸=y fw(Xi, Ai)[j] ) , (4) 1Here L̃ = D−1/2(D−A)D−1/2, where D is the diagonal matrix of vertex degrees and A is the adjacency matrix. where m is the number of training samples. For simplicity, we abbreviate LD,γ(fw) and LS,γ(fw) as LD,γ and LS,γ respectively from now on. Our main in-distribution result bounds the gap between true and empirical risks for GCNs, shown in the following theorem. The proof is in Appendix A.1. Theorem 3.1. For any B > 0, l > 1, let fw ∈ H : X × G → Rk be an l-layer GCN. Then with probability ≥ 1− δ over the choice of an iid size-m training set S from the data distribution D, we have for any w: LD,0 ≤ LS,γ +O √√√√B2 d l2 (h+ ln l) ∏li=1 ∥Wi∥22∑li=1 (∥Wi∥2F /∥Wi∥22) + ln mδ γ2m (5) Here d equals to one plus the maximum node degree that can be achieved by the data distribution. l is the depth, i.e., the number of layers, of GCNs. Wi is the weight matrix of GCNs in the i-th layer. B is the radius of the minimal ℓ2 ball that contains all node features, i.e., ∀v, ∥xv∥2 ≤ B. This improves the bound in (Liao et al., 2020), which is provided below for a better comparison, LD,0 ≤ LS,γ +O √√√√B2 dl−1 l2h log(lh) ∏li=1 ∥Wi∥22∑li=1(∥Wi∥2F /∥Wi∥22) + log mlδ γ2m . (6) The proof of the theorem from (Liao et al., 2020) is an induction over the l layers, in which the spectral norm of the weights and a maximum degree term is multiplied at each step. We observe that it is possible to avoid passing the maximum degree term via a refined argument. This leads to a tightening of one of the main inequalities used in the induction proof, thus in turn resulting in substantial improvements to the overall bound. As can be seen above, we reduce the exponential term dl−1 to a linear term d, which is a significant improvement for graphs even with small node degrees. 4 TOWARDS DEVELOPING A THEORY FOR SIZE GENERALIZATON In this section, we develop an out-of-distribution (OOD) generalization theory for GNNs. Since we adopt a statistical learning viewpoint, there must necessarily be some assumptions relating the training and testing graphs (otherwise the No-Free Lunch theorem applies). There is a tradeoff between assumptions that are practically relevant, and those for which rigorous guarantees are provable. We have chosen assumptions that we believe strike a balance between those objectives, at least for applications like social networks. Size Generalization Assumptions. We consider the following setup. First, we assume that there exists an extremely large graph G like the user network in Twitter so that one needs to sample subgraphs (e.g., via random walks) for training and testing machine learning models. This is akin to the practical setups of (Grover & Leskovec, 2016; Hamilton et al., 2017). To generate training and testing subgraphs, we run random walks of length N and M respectively on this single large graph, where M ≫ N , and collect the subgraphs induced by these walks. 
GNNs are then trained on the subgraphs induced by the shorter (length-N ) walks. In testing, we assume a procedure where a length-M random walk induced subgraph is sampled from the large subgraph. Random walks are initiated by choosing an initial node uniformly at random from all the nodes in the graph, and at each step there is an equal probability of selecting any of the current node’s neighbors. This is an interesting OOD problem where training and testing graphs come from different distributions determined by the underlying large graph and the random walk sampling with specific length. We consider the graph classification problem and assume that the graph label is determined by the majority of node labels within the graph, which is reasonable for many applications that involve homophilic graphs. For the node labeling, we assume it is binary but have no assumptions on how labels are generated. Crucially, we assume nothing about the underlying large graph. Therefore, our setup has advantages over some OOD setups in the literature where a generative model of graphs and labels is explicitly assumed. Relation with In-Distribution Result. We know the relationship between true error defined on the unknown data distribution D and empirical error defined on the size-m training set S. Specifically, for any GCN f , with probability at least 1− δ, we have a general bound as follows, LD,0 ≤ LS,γ +A(f, δ,m), (7) where we abbreviate the bound as A(f, δ,m) and omit specific parameters like maximum node degree d. In the size generalization problem, we use random walks with lengths N and M for collecting training and testing subgraphs (data) respectively. We are interested in proving a statement of the following form: for any GCN f , we have with probability at least 1− δ, LDM ,0 ≤ LSN ,γ + B(f, δ,m,M,N). (8) The key detail is that DM is the distribution of subgraphs induced by random walks with length M and SN is the training set of subgraphs induced by random walks with length N . Comparing these two losses is the essence of our OOD result. The final term B(f, δ,m,M,N) is a general bound involving these parameters. Based on the in-distribution result like in Theorem 3.1, we can similarly obtain, LDN ,0 ≤ LSN ,γ +AN (f, δ,m), (9) where DN is the distribution of subgraphs induced by random walks with length N and AN is the general bound. The key question boils down to: what is the relationship between LDN ,0 to LDM ,0? This question will be answered in the following sections. 4.1 A PROBABILITY BOUND FOR PARTITION CROSSES The above size generalization problem involves the distributions of random-walk-induced subgraphs from a large graph G with two lengths: N for training and M for testing. Also, M is much larger than N . Before we state our results, we would like to explain the simple intuition that motivates our theory: If the random walk always stays within the same partition, then the graph label of the random-walk-induced subgraph can be well predicted, no matter how long the random walk is. Here a partition means the subset of nodes with the same node label. The goal of this section is to find bounds on M for which we can provide OOD guarantees. We begin by considering a special labelling. Special Node Labeling: Sparsest Cut. A set S that minimizes ϕ(S) (and has |S| ≤ |V |/2) is called a sparsest cut. For simplicity assume that S is unique. 
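Before stating the formal results, the quantities involved can be illustrated numerically. The sketch below (our illustration on a small synthetic barbell-style graph, not one of the datasets used later) computes the conductance of the natural cut, the second-smallest eigenvalue λ2 of the normalized Laplacian, and a Monte Carlo estimate of the probability that a random walk of a given length crosses the cut.

```python
# Illustrative sketch: conductance, lambda_2, and random-walk crossing probability
# on a barbell-like graph (two cliques joined by a single bridge edge).
import numpy as np

def barbell_adjacency(k):
    A = np.zeros((2 * k, 2 * k))
    A[:k, :k] = 1.0
    A[k:, k:] = 1.0
    np.fill_diagonal(A, 0.0)
    A[k - 1, k] = A[k, k - 1] = 1.0          # the bridge
    return A

A = barbell_adjacency(6)
deg = A.sum(axis=1)
D_inv_sqrt = np.diag(1.0 / np.sqrt(deg))
L_norm = np.eye(len(A)) - D_inv_sqrt @ A @ D_inv_sqrt
lam2 = np.sort(np.linalg.eigvalsh(L_norm))[1]          # second-smallest eigenvalue

S = np.arange(6)                                        # one side of the natural cut
cut_edges = A[np.ix_(S, np.setdiff1d(np.arange(len(A)), S))].sum()
phi_S = cut_edges / len(S)                              # |E(S, S_bar)| / |S|
print(lam2 / 2, phi_S, np.sqrt(2 * lam2))               # Cheeger: lam2/2 <= phi(G) <= sqrt(2*lam2)

def crossing_probability(A, S, walk_len, trials=2000, seed=0):
    """Estimate Pr[a walk of walk_len nodes crosses S]; initial node chosen uniformly."""
    rng = np.random.default_rng(seed)
    in_S = np.zeros(len(A), dtype=bool)
    in_S[S] = True
    crossed = 0
    for _ in range(trials):
        u = rng.integers(len(A))
        for _ in range(walk_len - 1):
            v = rng.choice(np.flatnonzero(A[u]))
            if in_S[u] != in_S[v]:
                crossed += 1
                break
            u = v
    return crossed / trials

print(crossing_probability(A, S, walk_len=5))    # small for short walks
print(crossing_probability(A, S, walk_len=50))   # grows with walk length
```

Short walks almost never cross the bridge, while long walks cross with high probability; this is exactly the behaviour that the theorems below quantify.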
Using Cheeger’s inequality, we first prove the following probability bounds related to this sampling procedure, thereby identifying the length M for which a random walk is likely to stay within the sparsest cut for d-regular graphs. The theorems are as follows. Theorem 4.1. Let UM = [u1, u2, . . . , uM ] be a length-M random walk over a connected, d-regular graph G, with u1 chosen from the stationary distribution of the nodes of G. If M ≤ d/(25/2 √ λ2), then the probability that UM crosses the sparsest-cut partition at least once is under 1/2. Here crossing the sparsest-cut partition S means that there exists an edge (u, v) of the random walk satisfies u ∈ S and v ∈ S̄. λ2 is the second-smallest eigenvalue of the normalized Laplacian. We can easily generalize the previous theorem to an arbitrary probability δ > 0 as below. Corollary 4.1.1. If M ≤ (δd)/23/2 √ λ2, the probability of the above random walk UM crossing over the sparsest-cut partition at least once is at most δ. General Node Labeling. Theorem 4.1 is restrictive in that it requires the partition S to be the sparsest cut. We now modify the proof to yield a quantity that can work for any node labelling f . Specifically, let φ be any boolean (i.e., {0, 1}-valued) labelling on the vertices of the graph. Let the positive node labelling of φ be S = {v ∈ V (G) : φ(v) = 1}. We are interested in bounding the probability that a random walk of length M includes an edge that crosses the positive node labelling S, i.e., an edge (u, v) satisfies u ∈ S and v ∈ S̄. Theorem 4.2. Let φ be a boolean labelling on the nodes of a connected, d-regular graph G with positive node labelling S (0-1 valued vector with φ[i] = 1 if vi ∈ S). Let UM = [u1, u2, . . . , uM ] be a length-M random walk over G, with u1 chosen from the stationary distribution of the nodes of G. Let Xi be the indicator variable of the event that the i-th edge of UM crosses S, i.e., Xi = 1 [ ui ∈ S, ui+1 ∈ S̄ ] and Yk = ∑k i=1 Xi is the number of times that UM crosses S in the first k steps. Let φ ′ = φ− 1(|S|/|V |) and α = φ′⊤Lφ′/∥φ′∥22. The conclusion is that: if M ≤ d 25/2 √ α then Pr [YM ≥ 1] ≤ 1 2 . Corollary 4.2.1. If M ≤ (δd)/23/2 √ α, the probability of the above random walk UM at least crosses over the positive node labelling of f once is at most δ, i.e., Pr [YM ≥ 1] ≤ δ. The formula for α arises from an alternative formulation of Cheeger’s inequality which expresses λ2 using a Rayleigh quotient (Spielman, 2015), in which y may be viewed as a real-valued labelling on the vertices. λ2 = min y⊥d (y⊤Ly)/(y⊤Dy) 4.2 SIZE GENERALIZATION ERROR Recall that, in the size generalization setup, we first train a GNN model f on subgraphs induced by many length-N random walks on G. Then during testing, given a large testing subgraph GM induced by a lengthM random walk on G, we sample a subgraph GN via a length-N random walk on GM and feed it to f to compute the empirical (classification) error for GM . If all nodes of GM are within a single positive node labelling, then all of their labels are the same. Therefore, no matter which subgraph GN is sampled, the generalization error (i.e., the probability of making a wrong prediction) for GM should be the same as the one for GN . Based on this reasoning, we have the following result. Theorem 4.3 (Size Generalization Error). 
For any δ ∈ [0, 1), if we restrict M , the size of the large random walk-induced subgraph, such that M ≤ (δd)/23/2 √ α, then the in-distribution generalization error LDM ,0, i.e., the probability of a wrong prediction on length-M -random-walk induced subgraphs, satisfies LDM ,0 ≤ δ + LDN ,0. (10) where LDN ,0 is the in-distribution generalization error of f on length-N random-walk-induced subgraphs. Note that this theorem explicitly constrains M , whereas the only condition on N is that LDN ,0 is small. Proof. Observe that, for any events F and E, we have Pr [F ] ≤ Pr [E] + Pr [ F |Ē ] . Let E be the event that a length-M random walk crosses the positive node labelling of the ground truth labels, and let F be the event that we make a wrong prediction on the induced subgraph GM . Theorem 3.1 bounds the second term, Pr [ F |Ē ] , because the generalization error on GM is the same as the one on GN (subgraphs induced by length-N random walks) when GM does not cross the positive node labelling. Corollary 4.2.1 bounds the first term. Substituting the values from the previous two theorems yields the claimed inequality. We already know the bound of the in-distribution generalization error LDN ,0 due to Theorem 3.1 — let us call this quantity δ̂. Using this we can obtain the final result for GCNs under our OOD setup. Theorem 4.3 simply states that, if the length M ≤ (δd)/23/2 √ α, with probability at least 1− δ̂, the OOD generalization error on large subgraphs (induced by length-M random walks) is the sum of error δ and the in-distribution generalization bound on small subgraphs (induced by length-N random walks). 5 EXPERIMENTS 5.1 IN-DISTRIBUTION: NUMERICAL PAC-BAYES BOUND COMPUTATION We conduct multi-class graph classification experiments to compare our improved bound to the original PAC-Bayes bound in (Liao et al., 2020). We use the same GCN model, adopt the same datasets, i.e., 6 synthetic datasets obtained from random graph models and 3 real world graph datasets used in (Yanardag & Vishwanathan, 2015), and follow the same experimental protocol. After training a GCN on each dataset, we compute the theoretical bounds using final model. The numerical comparisons of log bound values are shown in Figure 2. It is clear that our new bounds are significantly tighter and reduce the bound values by several orders of magnitude. The gap is further increased as the depth increases. The tables of bound values and the specific equations to compute them are provided in Appendix B.1. 5.2 OUT-OF-DISTRIBUTION: EFFICACY OF SIZE GENERALIZATION We performed OOD experiments to validate the values of the upper bound on the size of large subgraphs M that was set in Theorem 4.1 and its related theorems, for synthetic graphs. We also performed experiments on synthetic graphs that were non-homophilic with the same values of M and N , to examine size generalization in this case. We also examined the general feasibility of size generalization in real-world social network data. For synthetic graphs, we calculated this theoretical value for the upper bound, and selected large subgraph size M and small subgraph size N ≪ M accordingly. For the real-world case, we chose constant values of N = 10 and M = 50. For each subgraph, we assign as its graph label the label observed most often among its nodes. 
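A minimal sketch of this dataset-construction procedure is given below. It is an illustrative implementation of the random-walk sampling and majority-labelling just described (assuming integer {0, 1} node labels), not the exact experimental code.

```python
# Illustrative sketch: build a dataset of random-walk-induced subgraphs,
# each labelled by the majority node label among its visited nodes.
import numpy as np

def random_walk_induced_subgraph(A, node_labels, walk_len, rng):
    u = rng.integers(len(A))                      # uniform random starting node
    visited = [u]
    for _ in range(walk_len - 1):
        u = rng.choice(np.flatnonzero(A[u]))      # uniform random neighbour
        visited.append(u)
    nodes = np.unique(visited)
    sub_A = A[np.ix_(nodes, nodes)]               # induced subgraph adjacency
    sub_y = node_labels[nodes]                    # node labels (non-negative ints)
    graph_label = int(np.bincount(sub_y).argmax())  # majority node label
    return sub_A, sub_y, graph_label

def build_dataset(A, node_labels, walk_len, num_graphs, seed=0):
    rng = np.random.default_rng(seed)
    return [random_walk_induced_subgraph(A, node_labels, walk_len, rng)
            for _ in range(num_graphs)]

# e.g. train on subgraphs from short (length-N) walks, test on long (length-M) walks:
# train_set = build_dataset(A, y, walk_len=N, num_graphs=1000)
# test_set  = build_dataset(A, y, walk_len=M, num_graphs=200)
```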
After sampling datasets of subgraphs of sizes M and N , we train GCN models on the dataset with N -length random walks and measure their performance on the training set, the validation set (a smaller data set generated the same way as the train set), and the testing set (a set of subgraphs inuced by length-M random walks). On the test set we record both the performance when inputting the whole large subgraph (Test error), as well as when performing the sampling procedure used for Theorem 4.3, in which we sample an induced subgraph from an N -length random walk for each data item (Sampling-test error). Synthetic Graphs. We adopt the CSBMs (Deshpande et al., 2018) to generate graphs that exhibit the homophily property. We use two blocks with much higher probability of connections inside the same block than between blocks, which leads to barbell-like graphs. In the non-homophilic case, we set these probabilities to be equal. We generate binary node labellings via the sparsest cut. CSBMs generate node features via a Gaussian mixture where individual choices of the component are determined by the node label. Real-world Graphs. We used social network data for Twitch streamers from (Rozemberczki et al., 2019). Each node is a streamer (Twitch user), and nodes are connected to mutual friendships. Node features are 3,169 different binary indicators of a wide array of attributes, including games liked, location, etc. Each node is labelled with a boolean value of whether the livestreamer has indicated that they use explicit language. In all cases, the GCN model achieves OOD test accuracy on large-subgraph that was comparable to ID accuracy on small-subgraph if not outright better. This is even the case when some of the constraints are violated: no d-regularity constraint was imposed for any of the datasets, and performance was still good for the test error which did not involve further subgraph sampling. This indicates that the theory is promising in practice for more general forms of size generalization. The accuracy on the train set, test set with subgraph sampling, and unaltered test set are shown in Figure 2, and the numerical values are in Appendix B.2. For many cases including all real-world cases, the test accuracy was actually higher than the training accuracy. This could potentially indicate that in the cases where size generalization can be guaranteed to work well, the GCN model benefits significantly from extra node information. It is also possible that because of the sampling procedure, there is overlap in nodes between the training and test sets, since they come from random-walk sampling procedures that naively select a uniformly random node as the initial node. 6 DISCUSSION In this work we have expanded the theoretical understanding of the generalizations of GNNs in both indistribution and out-of-distribution settings, deriving new theoretical guarantees in each setting. The results for in-distribution learning improve upon the state-of-the art PAC-Bayes bounds in (Liao et al., 2020), and the results for out-of-distribution learning provide insight into a practical learning setting under which GNNs are guaranteed to perform effective size generalization. Future directions for the in-distribution understanding would involve lowering the dependencies of other variables like the spectral norm of weights. Generalizing the results to other problems like node classification would also be interesting. 
In the out-of-distribution case, a number of different observations in experimentation indicate that the theory can still be very much expanded. We have identified cases in real-world datasets where well beyond the bounds on size set forth in the theory, and in all experiments the d-regularity assumption is violated, yet GCN size generalization is still effective in these cases. Expansions to the theory, including generalizing to non-d-regular graphs, can be explored to explain cases like these. A MATHEMATICAL PROOFS A.1 PROOF OF THEOREM 3.1 The proof is as follows, and makes up the remainder of the chapter. A.1.1 IMPROVEMENT ON DEGREE DEPENDENCY In (Liao et al., 2020), a generalization bound is attained on graph convolutional networks; this bound is dependent on a bound on the maximum perturbation of the function value when a perturbation U is applied to the weights W , presented in that paper’s Lemma 3.1. The bound is as follows |fw+u(X,A)− fw(X,A)|2 ≤ eBd l−1 2 ( l∏ i=1 ∥Wi∥2 ) l∑ k=1 ∥Uk∥2 ∥Wk∥2 (11) The primary goal of this set of improvements is to reduce the factor of d l−1 2 . For each layer, let Hi ∈ R|V |×h be the matrix containing the hidden embeddings of all of the nodes in its rows, with h being the hidden dimension. In the process of the proof of Theorem 3.1, we are able to show the following: Φj = max i |Hj [i, :]|2 ≤ d j 2B j∏ i=1 ∥Wi∥2 (12) Ψj = max i |H ′j [i, :]−Hj [i, :]|2 ≤ Bd j 2 ( j∏ i=1 ∥Wi∥2 ) j∑ k=1 ∥Uk∥2 ∥Wk∥2 ( 1 + 1 l )j−k (13) |∆l|2 = ∣∣∣∣ 1n1nH ′l−1(Wl + Ul)− 1n1nHl−1Wl ∣∣∣∣ 2 ≤ eBd l−1 2 ( l∏ i=1 ∥Wi∥2 )[ l∑ k=1 ∥Uk∥2 ∥Wk∥2 ] (11) We begin to simplify these bounds by removing the dependency on d j 2 , replacing it instead with a fixed power of d1/2 that remains constant for every layer, and thus in the final result of Equation 11 as well. Theorem A.1. For all 1 ≤ j ≤ l − 1, we have: Φj ≤ √ d B k∏ i=1 ∥Wi∥2 (14) Ψj ≤ ( 1 + ( 1 + 1 l )j) B √ d ( j∏ i=1 ∥Wi∥2 ) (15) Finally, |fw+u(X,A)− fw(X,A)|2 = |∆l|2 ≤ ( e+ 1 + 2 l ) B √ d l∏ i=1 ∥Wi∥2 (16) The proof follows from a lemma about the 2-norm of any node representation at any layer: Lemma A.1.1. We have, for all k ∈ [n] and for j ∈ [l]: |Hj [u, :]|2 ≤ B √ deg(u) ( j∏ i=1 ∥Wi∥2 ) (17) Proof. We prove this by induction. By definition |H0[u, :]|2 ≤ B and thus |H0[u]| ≤ √ deg(u)B 0∏ k=1 ∥Wk∥2. We assume that for all u, we have Hj−1[u, :] ≤ √ deg(u)B j−1∏ k=1 ∥Wi∥2. From these statements we are able to deduce |Hj [u, :]| ≤ ∑ v∈Nu L̃[u, v]|Hj−1[v, :]|2∥Wj∥2 ≤ ∑ v∈Nu 1√ deg(u)deg(v) [√ deg(v)B j−1∏ k=1 ∥Wk∥2 ] ∥Wj∥2 = ∑ v∈Nu 1√ deg(u) B ( j−1∏ k=1 ∥Wk∥2 ) ∥Wj∥2 = deg(u)√ deg(u) B j∏ k=1 ∥Wk∥2 = √ deg(u)B j∏ k=1 ∥Wk∥2 (18) In these inequalities we use the fact that L̃[i, j] = (A + I)ij/ √ deg(i)deg(j), and we assume the simple case where there are unweighted edges so that (A+ I)ij is 1 if and only if nodes i and j are connected and 0 otherwise. By Lemma A.1.1, we have that Φj = maxi |Hj [i, :]|2 ≤ √ dB ∏j i=1 ∥Wi∥2, which is exactly the result of equation (14). Claim A.1. For all v ∈ [n], |∆j [v, :]|2 ≤ B √ deg(v) ( 1 + 1l )j (∏j i=1 ∥Wi∥ )(∑j i=1 ∥Ui∥ ∥Wi∥ ) Proof. Proof: We use induction assuming this is true for ∆j−1. 
We then have |∆j [v, :]|2 ≤ ∑ u∈N (v) L̃[v, u]|H ′j−1[u, :]−Hj−1[u, :]|2∥Wj + Uj∥2 + ∑ u∈N (v) L̃[v, u]|Hj−1[u, :]|2∥Uj∥2 ≤ [ B ( 1 + 1 l )j−1(j−1∏ i=1 ∥Wi∥ )( j−1∑ i=1 ∥Ui∥2 ∥Wi∥2 ) ∥Wj + Uj∥+B∥Uj∥ j−1∏ i=1 ∥Wi∥ ] (19) ∑ u∈N (v) L̃[v, u] √ deg(u) = B √ deg(v) j−1∏ i=1 ∥Wi∥ [ ∥Wj + Uj∥ ( 1 + 1 l )j−1(j−1∑ i=1 ∥Ui∥2 ∥Wi∥2 ) + ∥Uj∥ ] = B √ deg(v) j∏ i=1 ∥Wi∥ [ ∥Wj + Uj∥2 ∥Wj∥2 ( 1 + 1 l )j−1(j−1∑ i=1 ∥Ui∥2 ∥Wi∥2 ) + ∥Uj∥2 ∥Wj∥2 ] ≤ B √ deg(v) j∏ i=1 ∥Wi∥ [( 1 + 1 l )j (j−1∑ i=1 ∥Ui∥2 ∥Wi∥2 ) + ∥Uj∥2 ∥Wj∥2 ] ≤ B √ deg(v) j∏ i=1 ∥Wi∥ ( 1 + 1 l )j ( j∑ i=1 ∥Ui∥2 ∥Wi∥2 ) (20) ∆l has a slightly different formulation but it has a very similar bound: |∆l|2 = ∣∣∣∣ 1n1n ( L̃H ′l−1(Wl + Ul)− 1 n 1nL̃Hl−1(Wl) )∣∣∣∣ 2 = 1 n ∣∣∣1nL̃(H ′l−1 −Hl−1)(Wl + Ul) + 1nL̃Hl−1(Ul)∣∣∣ 2 ≤ 1 n n∑ i=1 |∆l−1[i, :]|2∥Wl + Ul∥2 + 1 n n∑ i=1 |Hl−1[i, :]|2∥Ul∥2 ≤ B √ d l−1∏ i=1 ∥Wi∥ ( 1 + 1 l )l−1( l−1∑ i=1 ∥Ui∥2 ∥Wi∥2 ) ∥Wl + Ul∥ +B √ d∥Ul∥2 l−1∏ i=1 ∥Wi∥2 ≤ B √ d l∏ i=1 ∥Wi∥ [( 1 + 1 l )l( l−1∑ i=1 ∥Ui∥ ∥Wi∥ ) + ∥Ul∥ ∥Wl∥ ] ≤ B √ d l∏ i=1 ∥Wi∥ ( 1 + 1 l )l( l∑ i=1 ∥Ui∥ ∥Wi∥ ) ≤ eB √ d l∏ i=1 ∥Wi∥ ( l∑ i=1 ∥Ui∥ ∥Wi∥ ) (21) From this we have proven a tighter bound on the final output of the GNN under perturbation, which we will use to calculate probabilistic and generalization bounds. A.1.2 IMPROVEMENT ON PROBABILISTIC BOUNDS USING RANDOM MATRIX THEORY In (Liao et al., 2020), for all i ∈ [l], with l being the number of layers, the prior and the distribution of the perturbations Ui ∈ Rdi+1×di ,, where all hidden dimensions di are upper-bounded by a value h, were generated by a normal distribution N (0, σ2I), and give probabilistic bounds on the operator norms ∥Ui∥ as P (∀i, ∥Ui∥ ≤ t) with probability greater than 1 − 2lh exp−t2/2hσ2. We improve these bounds using theorems on random matrices from work on high-dimensional probability, namely (Vershynin, 2018). Theorem A.2 (Theorem 4.4.5 in (Vershynin, 2018)). Let A be a matrix in Rm×n, where the entries Aij are independent, mean-zero, sub-Gaussian random variables. Then, for all t > 0 we have ∥A∥ ≤ CK( √ m+ √ n+ t) with probability ≥ 1− exp(−t2), where K = maxi,j ∥Aij∥ψ2 and C is some constant. In the above theorem the norm ∥X∥ψ2 is defined as inf{t : E[exp(X2/t2)] ≤ 2}. In Example 2.5.8 in (V ershynin, 2018), it is shown that if X ∼ N (0, σ2) then it has ∥X∥ψ2 ≤ Cσ. Corollary A.2.1. If U ∈ Rm×n is a random matrix generated with the distribution N (0, σ2I) (i.e. all entries are independent and identically distributed Gaussian random variables), then we have ∥U∥ ≤ σ( √ m+ √ n+ t) with probability at least 1− 2 exp(−t2). With a change of variable, we are able to calculate the following: P (∀i.∥Ui∥2 ≤ t) ≥ 1− P (∃i, ∥Ui∥ > t) ≥ 1− l∑ i=1 P (∥Ui∥ > t) ≥ 1− 2l exp (( t Cσ − 2 √ h )2) And by setting the right-hand side to 1/2, we obtain: t = Cσ(2 √ h+ √ ln(4l)) Using the above equation combined with our bound we are able to get |fw+u(X,A)− fw(X,A)|2 ≤ eB √ dl ( l∏ i=1 ∥Wi∥2 ) l∑ k=1 ∥Uk∥2 ∥Wk∥2 = eB √ dβll l∑ k=1 ∥Uk∥2 β ≤ eB √ dβl−1l(σ(2 √ h+ √ ln(4l))) ≤ e2B √ dβ̃l−1(σ(2 √ h+ √ ln(4l))) ≤ γ 4 (22) Here β̃ is an estimated of β such that |β − β̃| ≤ β/l that can be generated a priori; we discuss this in a later subsection. We can set σ = γ 4e2Bβ̃ √ dC ( 2 √ h+ √ ln(4l) ) to satisfy the final inequality. 
From this we can calculate the KL-divergence between the posterior and the prior: KL(Q∥P ) = |w| 2 2 2σ2 = 16e4B2dl2β2(l−1) ( 2 √ h+ √ ln(4l) )2 2γ2 l∑ i=1 ∥Wi∥F ≤ O ( B2dβ2ll2(h+ ln(l)) γ2 l∑ i=1 ∥Wi∥2F β2 ) ≤ O ( B2dl2 (h+ ln(l)) ∏l i=1 ∥Wi∥2 γ2 l∑ i=1 ∥Wi∥2F ∥Wi∥2 ) (23) From this we are able to calculate the generalization bound and thus prove the theorem. LD,0 ≤ LS,γ +O √√√√B2dl2(h+ ln(l))∏li=1 ∥Wi∥22∑li=1 ∥Wi∥2F∥Wi∥22 + ln mδ γ2m (24) A.1.3 SELECTING PARAMETER β̃ The prior normal distribution’s variance parameter σ2 is dependent on β, but β cannot be used in its calculation because that information is only known after model training. Instead, we can select a parameter β̂ such that |β − β̂| ≤ 1l β and thus 1 eβ l−1 ≤ β̂l−1 ≤ eβl−1 (as per equation 33 in (Liao et al., 2020)). As in (Liao et al., 2020) we only have to consider values of β in the range ( γ 2B √ d )1/l ≤ β ≤ ( γ √ m 2B √ d )1/l as otherwise the generalization bound holds trivially because LD,0 ≤ 1 by definition. If we consider values of β̂ that cover this interval then by union bound we are still able to get a high probability; the covering C needs to have |C| = l2 (m 1 2l − 1). A.2 PROOFS OF OUT-OF-DISTRIBUTION PROBABILITY BOUNDS A.2.1 PROOF OF THEOREM 4.1 Proof. Because u1 is chosen from the stationary distribution (uniform over vertices, because G is connected and d-regular), then for all i ≥ 1 the distribution for ui, ui+1 follows the distribution Unif[E], where E is the edge set of the graph. Let S be the sparsest-cut partition of G. Let Xi be the indicator of the event that the vertex pair is in the set of edges crossing the partition, namely 1{(ui, ui+1) ∈ E(S, S̄)}. By linearity of expectation, this means that E[Xi] = |E(S, S̄)|/|E|. Furthermore, let Yk be the cumulative number of edges crossing the partition along the first k steps of the random walk. This is expressed nicely as Yk = ∑k i=1 Xi. Thus E[Yk] = k |E(S,S̄)| |E| . Applying Markov’s inequality, we get Pr[Yk ≥ tk|E(S, S̄)|/|E|] ≤ 1/t. Suppose we wish to examine under what conditions we can ensure that we do not cross over the partition at all in M steps, i.e. Pr[YM ≥ 1] ≤ 1/2. From the inequality above, we are able to get that Pr [ YM ≥ 2M |E(S, S̄)| |E| ] ≤ 1 2 just by setting k = M and t = 2. We then use the following basic fact: if we have an inequality of the form Pr[Z ≥ z] ≤ 12 , then Pr[Z ≥ z ′] ≤ 12 for any z ′ ≥ z. Let E(S) denote the set of edges connected to any vertex in S. Because |E(S)| ≤ |E|, then we have |E(S, S̄)|/|E| ≤ |E(S, S̄)|/|E(S)|. Furthermore, since we assume a connected graph, |E(S)| ≥ (d/2)|S|, and thus |E(S, S̄)|/|E(S)| ≤ |E(S, S̄)|/[(d/2)|S|]. 2 Thus using the fact above we can deduce Pr [ YM ≥ 2M |E(S, S̄)| (d/2)|S| ] ≤ 1 2 Note that |E(S, S̄)|/|S| is the conductance of the graph ϕ(G), because S was defined to be the sparsest-cut partition of G. Thus we can apply the fact again with Cheeger’s inequality to get Pr [ YM ≥ 2M(2/d) √ 2λ2 ] ≤ 1 2 And since we are interested in Pr[YM ≥ 1], we can thus set 2M √ 2λ2 ≤ 1 to get a necessary condition for M , from which we achieve M ≤ d 25/2 √ λ2 This completes the proof. 2It is important to note that this specific dependency of |E(S)| on d requires G to be a d-regular graph. If the theorem is to be expanded to more general cases, one may use the simple inequality |E(S)| ≥ |S|. A.2.2 PROOF OF THEOREM 4.2 Proof. 
The quantity φ′ is a transformation of φ that retains all the information contained in φ while still being orthogonal to the all-ones vector 1, so that we can apply Cheeger’s inequality. This orthogonalization is rather standard and can be found in (Spielman, 2015). Let s = |S|/|V (G)|. Note that s ∈ [0, 1], and without loss of generality we can assume that s ≤ 1/2. We observe that the vth coordinate of the vector φ′ corresponds to the mapping φ′(v) = { 1− s v ∈ S −s v /∈ S (25) This ensures that φ′ is orthogonal to 1, as φ′⊤1 = n∑ i=1 φ′(vi) = |S| ( 1− |S| |V | ) + (|V | − |S|) ( − |S| |V | ) = |S| − |V | ( |S| |V | ) = 0. We then note that ∥φ′∥22 = ∑n i=1 φ(v) 2 is equal to s(1− s)|V |, and we can infer |S|/2 ≤ ∥φ′∥22 ≤ |S|; the first inequality holds since s ≤ 1/2. The number of edges |E(S, S̄)| crossing the labelling-partition is equal to φ′⊤Lφ′, as φ′⊤Lφ′ = ∑ (u,v)∈E ((φ(u)− s)− (φ(v)− s))2 = |E(S, S̄)| where L is the Laplacian matrix of G. Thus the quantity 2M |E(S,S̄)||E(S)| ≤ 2M φ′⊤Lφ′ |E(S)| ≤ 2M φ′⊤Lφ′ (d/2)|S| . We are able to get the second inequality because we know |E(S)| ≥ (d/2)|S|. Because we know that |S| ≥ ∥φ′∥2, we can then upper bound this further by 2M φ ′TLφ′ (d/2)∥φ′∥22 . Substituting this quantity in the proof of Theorem 4.1, we achieve the desired bound for M . B EXPERIMENTAL METHODOLOGY AND RESULTS B.1 IN-DISTRIBUTION EXPERIMENTS The datasets used are a combination of synthetic (Erdos-Renyi and Stochastic Block Model) and real-world graphs (IMDBBINARY and IMDBMULTI of data from the Internet Movie Database, and COLLAB, a dataset of academic collaborations), and a bioinformatics dataset, PROTEINS, from (Yanardag & Vishwanathan, 2015). Two different GCN network depths of of l = 4 and l = 6 were used. We use the following formulae for the generalization bound from (Liao et al., 2020) and our new bound, using an explicit constant factor of 42 from (Liao et al., 2020). GenGap(B, d, l, {Wi}li=1) = √√√√ 42 · B2dl−1l2 ln(4lh) ∏l i=1 ∥Wi∥22 ∑l i=1 ∥Wi∥2F ∥Wi∥22 γ2m (26) Similarly, the formula used for the new PAC-Bayes generalization bound is GenGap(B, d, l, {Wi}li=1) = √√√√ 42 · B2dl2(h+ ln(l)) ∏l i=1 ∥Wi∥22 ∑l i=1 ∥Wi∥2F ∥Wi∥22 γ2m (27) We remove an additive O(logm) term in the numerator within the square root after validating that it was numerically negligible. Tables below are for calculated bounds in the case of 4 layers (Table 1) and 6 layers (Table 2). B.2 OUT-OF-DISTRIBUTION EXPERIMENTS B.2.1 METHODOLOGY Experiments were performed to measure the effectiveness size generalization of GCN models when applied to the size generalization learning case described in Section 4, where the learning task is classifying the most common node label in sub-communities of a large underlying network. For each of the synthetic graphs, we calculate an upper bound for M set in the out-of-distribution inequalities we have derived. Since the graphs examined are all not d-regular, we calculate a value of α as φ ⊤Lφ φ⊤Dφ , where L is the graph Laplacian matrix and D is the diagonal degree matrix, to apply to the formula set in Theorem 4.2. Furthermore, we use a more permissive value of δ = 0.75. Similar upper bounds for M were computed for the real-world cases, but the values were too small for experimental use. In this case, we just set N = 10 and M = 50 to attempt to gain insight about the size generalization task’s general feasibility in real-world cases. All experiments were performed with use of the Adam optimizer (Kingma & Ba, 2015), with a constant learning rate 0.01. 
Models were trained for 10 epochs, with a batch size 32 randomly selected. The models used are different parameterizations of the Graph Convolutional Network as implemented by the library pytorch-geometric (Fey & Lenssen, 2019). For synthetic experiments, which used smaller graphs with generally smaller degree, the parameterization was 3 layers with a hidden dimension of 5, and for the real-world data case, the parameterization was 10 layers with a hidden dimension of 32. For each underlying graph, we generate three train/validation sets (of size N random walks) and test sets (of size M random walks) and we record the loss and accuracy as the average of the three runs. B.2.2 SYNTHETIC GRAPH EXPERIMENTS A large underlying synthetic graph was generated using the stochastic block model, with some adjustment to ensure that the randomly-generated graph had a single connected component. By controlling the intra- and inter-block connection probability values, we are able to control the homophily of the generated graph, which we validate by measuring the value of λ2, as well as calculating the sparsest cut via “Cheeger rounding” (Spielman, 2015) and subsequently the conductance of the graph with respect to this partition. In the experiments, we generated a graph with approximately 2000 nodes, with in-block connectivity probability set to 8/1000 and inter-block connectivity set to 6/105. Node features are generated from a mixture of multivariate Gaussian distributions with dimension 3, mean (−0.5,−0.5,−0.5) for one block, and mean (0.5, 0.5, 0.5) for the other; the covariance matrix is a diagonal matrix (each coordinate is independent) of variance either 2, 4, or 8. Experiments were also performed on non-homophilic synthetic graphs. Like the homophilic synthetic graphs they are generated with the stochastic block model with about 2000 nodes, about 1000 of each label, and with the same mixture-of-Gaussian node features. However the parameters used for the generation of connection are crucially different. The probabilities of connection between nodes of the same block and nodes of a different block are set to be equal, with both being set to 8/1000. These settings ensure that a node’s label is independent from the labels of its neighbors, so the homophily property is not exhibited. Contrasting with the results shown for the homophilic synthetic graphs, the non-homophilic graph results show that the out-of-distribution test accuracy is less than the training accuracy. This further illustrates the association between homophily and size generalization. B.2.3 REAL-WORLD GRAPH EXPERIMENTS Since the node features are indicators, we encoded the node feature information by using the positional encoding mechanism introduced in the Transformer model (Vaswani et al., 2017). For each node, each of their integer indicators was encoded via positional embedding and aggregated via sum.
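To make this feature construction concrete, the following is a minimal sketch (not the authors' code) of encoding a node's binary indicators via sinusoidal positional encodings and summing them into one feature vector; the embedding dimension of 32 is an assumption chosen to match the hidden dimension reported above, and the function names are illustrative.

```python
# Sketch of the indicator-feature encoding described above; dim=32 is an assumption.
import numpy as np

def positional_encoding_table(num_indicators: int, dim: int) -> np.ndarray:
    """Standard sinusoidal table (Vaswani et al., 2017), shape (num_indicators, dim)."""
    positions = np.arange(num_indicators)[:, None]
    freqs = np.exp(-np.log(10000.0) * np.arange(0, dim, 2) / dim)
    table = np.zeros((num_indicators, dim))
    table[:, 0::2] = np.sin(positions * freqs)
    table[:, 1::2] = np.cos(positions * freqs)
    return table

def encode_node(indicators: np.ndarray, table: np.ndarray) -> np.ndarray:
    """Sum the positional encodings of the node's active (nonzero) indicator indices."""
    active = np.flatnonzero(indicators)
    return table[active].sum(axis=0) if active.size else np.zeros(table.shape[1])

table = positional_encoding_table(num_indicators=3169, dim=32)  # 3,169 indicators, as in the Twitch data
node_indicators = np.random.binomial(1, 0.01, size=3169)        # a hypothetical node
x = encode_node(node_indicators, table)
print(x.shape)  # (32,)
```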
1. What are the main contributions and strengths of the paper regarding PAC-Bayes generalization bounds for graph neural networks?
2. What are the weaknesses and limitations of the paper, particularly regarding its scope, assumptions, comparisons, and experimental designs?
3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
4. Are there any questions or concerns regarding the paper's focus on homophilic graphs and graph size shifts, as well as its lack of discussion and comparison with related works?
Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper
This paper studies PAC-Bayes generalization bounds for both IID and OOD generalization on graphs, with a focus on homophilic graphs and graph size shifts. In particular, the authors reduce an exponential dependency on the node degree to a linear dependency in the IID generalization bound. They then apply the generalization bound to study random-walk-sampled graphs of different sizes. They also conduct some experiments to support the derived bounds.

Strengths And Weaknesses
While I appreciate the completeness and clarity of the paper, I find its scope rather limited, and in places overclaimed by the authors.

Overclaimed improvements in the IID generalization bound. The authors claim to reduce an exponential dependency on the node degree to a linear dependency in the IID generalization bound. However, they also worsen the dependency on the hidden dimension h from logarithmic to linear. This essentially establishes a trade-off between their bound and that of [1]: the advantage only exists for graphs with high degree and GNNs with many layers. In contrast, many realistic graphs (e.g., molecules) tend to have low degree, and practitioners tend to adopt shallower and wider GNNs due to memory cost, especially for homophilic graphs where a simple MLP with post-hoc modifications can achieve top performance (cf. leaderboard results on OGB). When demonstrating the advantage of the established generalization bound over [1], the authors seem to be conducting misleading comparisons: since the advantage only exists for high-degree graphs and deep GNNs, they use deeper GNNs (4, 6, 10 layers compared to 2, 4, 6, 8 layers in [1]) and small hidden dimensions (5, 32 compared to 128 in [1]). This makes the improvement and significance of the IID generalization bound limited.

Overclaims in the assumptions for the OOD generalization bound. For the OOD generalization bound, the authors claim their setup has advantages over OOD setups in the literature where a generative model of graphs and labels is explicitly assumed [2,3,4,5,6,7,8,9]. However, in the Size Generalization Assumptions paragraph, the authors are essentially making assumptions about the data generation process themselves. The graphs are sampled via random walks of different lengths, which defines a specific graph family just as graphons do [2,3,6,8,9], and the labels are determined by the majority of node labels in the graphs, which differs little from the causal assumptions made in [2,3,4,5,6,7,8]. I do not see the advantage of the assumptions made in this paper. Moreover, the paper introduces an additional assumption that the graphs are homophilic, which makes random-walk sampling over the large graph trivial: as [10] already found, random walks with more steps converge to a stationary distribution over the original graph. Therefore, analyzing homophilic graphs sampled using longer random walks seems less interesting.

Limited scope and poor coverage of the literature. Beyond the point raised by other reviewers that this work is limited to graph size shifts and misses discussion of many related works, I still find many missing discussions and comparisons with [2,3,4,5,6,7,8,9]. In particular, [2,3,6,8] also study graph size shifts, but I cannot find any discussion of them in this work. Both the theoretical and empirical parts of the work lack a comparison with these works.
References:
[1] A PAC-Bayesian approach to generalization bounds for graph neural networks, ICLR21.
[2] From Local Structures to Size Generalization in Graph Neural Networks, ICML21.
[3] Size-Invariant Graph Representations for Graph Classification Extrapolations, ICML21.
[4] Handling distribution shifts on graphs: an invariance perspective, ICLR22.
[5] Discovering Invariant Rationales for Graph Neural Networks, ICLR22.
[6] Invariance Principle Meets Out-of-Distribution Generalization on Graphs, ICML22 Workshop on Spurious Correlations, Invariance and Stability.
[7] Learning Substructure Invariance for Out-of-Distribution Molecular Representations, NeurIPS22.
[8] OOD Link Prediction Generalization Capabilities of Message-Passing GNNs in Larger Test Graphs, NeurIPS22.
[9] Generalization Analysis of Message Passing Neural Networks on Large Random Graphs, arXiv22.
[10] Representation Learning on Graphs with Jumping Knowledge Networks, ICML18.

Clarity, Quality, Novelty And Reproducibility
This work is well-written and easy to follow. However, the many overclaims and misleading experiments, plus the limited scope of the paper, make the novelty of the work limited.
ICLR
Title In-distribution and Out-of-distribution Generalization for Graph Neural Networks Abstract Graph neural networks (GNNs) are models that allow learning with structured data of varying size. Despite their popularity, theoretical understanding of the generalization of GNNs is an under-explored topic. In this work, we expand the theoretical understanding of both in-distribution and out-of-distribution generalization of GNNs. Firstly, we improve upon the state-of-the-art PAC-Bayes (in-distribution) generalization bound primarily by reducing an exponential dependency on the node degree to a linear dependency. Secondly, utilizing tools from spectral graph theory, we prove some rigorous guarantees about the out-of-distribution (OOD) size generalization of GNNs, where graphs in the training set have different numbers of nodes and edges from those in the test set. To empirically verify our theoretical findings, we conduct experiments on both synthetic and real-world graph datasets. Our computed generalization gaps for the in-distribution case significantly improve the state-of-the-art PAC-Bayes results. For the OOD case, experiments on community classification tasks in large social networks show that GNNs achieve strong size generalization performance in cases guaranteed by our theory. 1 INTRODUCTION Graph neural networks (GNNs), firstly proposed in Scarselli et al. (2008), generalize artificial neural networks from processing fixed-size data to processing arbitrary graph-structured or relational data, which can vary in terms of the number of nodes, the number of edges, and so on. GNNs and their modern variants (Bronstein et al., 2017; Battaglia et al., 2018) have achieved state-of-the-art results in a wide range of application domains, including social networks (Hamilton et al., 2017), material sciences (Xie & Grossman, 2018), drug discovery (Wieder et al., 2020), autonomous driving (Liang et al., 2020), quantum chemistry (Gilmer et al., 2020), and particle physics (Shlomi et al., 2020). Despite their empirical successes, the theoretical understanding of GNNs are somewhat limited. Existing works largely focus on analyzing the expressiveness of GNNs. In particular, Xu et al. (2018) show that GNNs are as powerful as the Weisfeiler-Lehman (WL) graph isomorphism test (Weisfeiler & Leman, 1968) in distinguishing graphs. Chen et al. (2019) further demonstrate an equivalence between graph isomorphism testing and universal approximation of permutation-invariant functions. Loukas (2019) show that GNNs with certain conditions (e.g., on depth and width) are Turing universal. Chen et al. (2020) and Xu et al. (2020a) respectively examine whether GNNs can count substructures and perform algorithmic reasoning. In the vein of statistical learning theory, generalization analyses for GNNs have been developed to bound the gap between training and testing errors using VC-dimension (Vapnik & Chervonenkis, 1971), Rademacher complexity (Bartlett & Mendelson, 2002), algorithmic stability (Bousquet & Elisseeff, 2002), and PACBayes (McAllester, 2003) (a Bayesian extension of PAC learning (Valiant, 1984)). Depending on whether the problem setup is in-distribution (ID) or out-of-distribution (OOD), i.e., whether test data comes from the same distribution as training data, we categorize the literature into two groups. ID Generalization Bounds. Scarselli et al. 
(2018) provide a VC-dimension based generalization bound for GNNs whereas Verma & Zhang (2019) present the stability-based generalization analysis for singlelayer graph convolutional networks (GCNs) (Kipf & Welling, 2016). Both consider node classification and assume the node features are independent and identically-distributed (IID), which conflicts with the common relational learning setup (e.g., semi-supervised node classification) at which GNNs excel. Relying on the neural tangent kernel (NTK) approach (Jacot et al., 2018), Du et al. (2019) characterize the generalization bound of infinite-width GNNs on graph classification. Garg et al. (2020) derive the Rademacher complexity based bound for message passsing GNNs on graph classification. Lv (2021) establish results for GCNs on node classification using Rademacher complexity as well. Based on PAC-Bayes, Liao et al. (2020) obtain a tighter bound for both GCNs and message passsing GNNs on graph classification compared to (Garg et al., 2020; Scarselli et al., 2018). Subsequently, Ma et al. (2021) also leverage PAC-Bayes and show generalization guarantees of GNNs on subgroups of nodes for node classification. More recently, Li et al. (2022) study the effect of graph subsampling in the generalization of GCNs. OOD Generalization Yehudai et al. (2021) study size generalization for GNNs — this is a specific OOD setting where training and testing graphs differ in the number of nodes and edges. They show negative results that specific GNNs can perfectly fit training graphs but fails on OOD testing ones. Baranwal et al. (2021) consider specific graph generative models, i.e., the contextual stochastic block model (CSBM) (Deshpande et al., 2018), where CSBMs during training and testing are of the same means but different number of nodes, intra-, and inter-class edge probabilities. They present generalization guarantees for single-layer GCNs on binary node classification tasks. Later, Maskey et al. (2022) assume yet another class of graph generative models, i.e., graphons, where the kernel is shared across training and testing but the number of nodes and edges could vary. They obtain generalization bounds of message passing GNNs on graph classification and regression that depend on the Minkowski dimension of the node feature space. Relying on a connection of over-parameterized networks and neural tangent kernel, Xu et al. (2020b) find that taskspecific architecture/feature designs help GNNs extrapolate to OOD algorithmic tasks. Wu et al. (2022a) propose explore-to-extrapolate risk minimization framework, for which the solution is proven to provide an optimal OOD model under the invariance and heterogeneity assumptions. Yang et al. (2022) propose a two-stage model that both infers the latent environment and makes predictions to generalize to OOD data. Empirical studies suggest it works well on real-world molecule datasets. Wu et al. (2022b) study a new objective that can learn invariant and causal graph features that generalize well to OOD data empirically. All above works follow the spirit of invariant risk minimization (Arjovsky et al., 2019) and focus on designing new learning objectives. Instead, we provide generalization bound analysis from the traditional statistical learning theory perspective. Our Contributions. In this paper, we study both in-distribution and out-of-distribution generalization for GNNs. 
For in-distribution graph classification tasks, we significantly improve the previous state-of-the-art PAC-Bayes results in (Liao et al., 2020) by decreasing an exponential dependency on the maximum node degree to a linear dependency. For OOD node classification tasks, we do not assume any known graph generative models which is in sharp contrast to the existing work. We instead assume GNNs are trained and tested on subgraphs that are sampled via random walks from a single large underlying graph, as an efficient means to generate a connected subgraph. We identify interesting cases where a graph classification task is theoretically guaranteed to perform well at size generalization, and derive generalization bounds. We validate our theoretical results by conducting experiments on synthetic graphs, and also explore size generalization on a collection of real-world social network datasets. In the in-distribution case, we observe an improvement of several orders of magnitude in numerical calculations of the generalization bound. In the out-of-distribution case, we validate that, in cases where the theory guarantees that size generalization works well, the prediction accuracy on large subgraphs is always comparable to the accuracy on small subgraphs, and in many cases is actually better. (a) An example of a small expander graph. Any labelling of its nodes cannot exhibit homophily. (b) Example of a small barbell graph. If a labelling is exactly differentiated between the two groups, then it exhibits homophily. 2 BACKGROUND INFORMATION A graph G is an abstract mathematical model for pairwise relationships, with a set of vertices V and a set of edges E ⊆ V × V . Two vertices v1, v2 are said to be connected if (v1, v2) ∈ E. For a given graph G ∈ G we can also denote its vertices by V (G) and edges E(G). Unless otherwise specified, we assume graphs are undirected and without multi-edges. In machine learning, a graph (or graph-structured data) typically come with a set of node features. Common graph based machine learning tasks include node classification (or regression) and graph classification (or regression). We use the following notation. • Graph data {Gi = (Vi, Ei)}mi=1 ∈ G, where G is the set of all graphs. The neighborhood of a vertex v is denoted N (v) = {u ∈ V (Gi) : (v, u) ∈ E(Gi)}. • Node feature xv : V → X , with X being the feature space, e.g., X = Rdv . • Node labels y : V → Y , with Y being the set of labels, e.g., Y = [n]. Graph neural networks (GNNs). GNNs generalize regular neural networks to process data with varying structures and dependencies. GNNs achieve this flexibility via a message passing computational process. In particular, at the k-th step (or layer) of message passing, we update the representation h(k+1)u of node u as follows, h(k+1)u = UPDATE(h (k) u ,AGGREGATE({h(k)v |v ∈ N (u)})). (1) This update happens for all nodes in parallel within each message passing step. Moreover, the UPDATE and AGGREGATE operators are shared by all nodes, which enables the same GNN to process varyingsized graphs. Once we have finished the finite-step message passing process, we can use the output node representations to make predictions on nodes, edges, and the graph via additionally parameterized readout functions. This message passing framework is quite general since one can instantiate the UPDATE and AGGREGATE operators by different neural networks. 
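As a concrete illustration of Eq. (1), the following is a minimal sketch of a single message-passing step; the specific AGGREGATE (neighbor mean) and UPDATE (linear map plus ReLU) choices here are examples for exposition only and are not the model analyzed in this paper.

```python
# Illustrative message-passing step; the mean/linear+ReLU choices are example instantiations.
import numpy as np

def message_passing_step(H, neighbors, W_self, W_neigh):
    """H: (num_nodes, d_in) node states; neighbors: adjacency list of neighbor indices."""
    H_new = np.zeros((H.shape[0], W_self.shape[1]))
    for u, nbrs in enumerate(neighbors):
        # AGGREGATE: mean of neighbor states (zero message for an isolated node)
        msg = H[nbrs].mean(axis=0) if nbrs else np.zeros(H.shape[1])
        # UPDATE: combine the node's own state with the aggregated message, then a nonlinearity
        H_new[u] = np.maximum(0.0, H[u] @ W_self + msg @ W_neigh)
    return H_new

# Tiny usage example: a 4-node path graph with 3-dimensional input features.
neighbors = [[1], [0, 2], [1, 3], [2]]
H0 = np.random.randn(4, 3)
W_self, W_neigh = np.random.randn(3, 8), np.random.randn(3, 8)
print(message_passing_step(H0, neighbors, W_self, W_neigh).shape)  # (4, 8)
```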
For example, the widely used Graph Convolutional Networks (GCNs) (Kipf & Welling, 2016), which are the main interest of our work, have the form h(k+1)u = σ Wk ∑ v∈N (u)∪{u} h (k) v√ |N (u)| √ |N (v)| (2) where one applies a linear transformation (Wk) to all node representations, a weighted-sum over the neighborhood, and an element-wise nonlinearity (e.g., ReLU activation). Note that the learnable weights Wk are different from layer to layer. Homophily. A concept studied in network science, homophily (McPherson et al., 2001) is the property that similar nodes group together. For node classification (or node labelling), this means that neighbouring nodes tend to have the same label. Size generalization is plausible when the labelling of the nodes exhibits homophily. The presence of a homophilic graph labelling implies that the labels of the nodes are unlikely to change during the course of a long random walk on the graph. It is important to note that homophily is also a concept that relates to the graph topology, as not every possible graph structure can be given a labelling that exhibits homophilic properties. An example of one such topology where homophily is impossible is an expander graph (Hoory et al., 2006), as shown in Figure 1a, where nodes have either random or random-like edges connected to a constant number of other nodes in the entire graph. In this case, any labelling of the nodes is far from homophilic, as can be shown using the expansion property. A setting with more homophily is akin to a barbell graph, as shown in Figure 1b, where there are two densely connected components, and comparatively few edges connecting the two dense regions. If the graph labelling of interest lines up with these divisions inherent in the topology, then it is natural to say that it exhibits a homophilic property. Cheeger’s Inequality. A mathematical description of homophily can be given using concepts from spectral graph theory. Cheeger’s inequality (Hoory et al., 2006) is a theorem that pertains to partitions of graphs, or equivalently binary-valued labellings on graphs (one side of the partition is labelled 0, the other 1). A crucial definition is the conductance, defined by ϕ(S) = |E(S, S̄)| |S| ∀S ⊆ V and ϕ(G) = min |S|≤ |V |2 ϕ(S). Here E(S, S̄) is the set of edges connecting a node in S to a node outside of S. Cheeger’s inequality states λ2/2 ≤ ϕ(G) ≤ √ 2λ2, where λ2 is the second-smallest eigenvalue of the normalized Laplacian1 L̃. This inequality links the realvalued quantity λ2 to the concept of homophily. If λ2 is small then the conductance of G must also be low, by Cheeger’s inequality. If a labelling on graph nodes f : V (G) → {0, 1} roughly agrees with a low-conductance partition (i.e., one side of the partition S is generally labelled 0 and the complement S̄ is generally labelled 1) then the labelling f exhibits homophily. 3 IMPROVEMENT OF IN-DISTRIBUTION PAC-BAYES BOUND The state-of-the-art generalization bounds for GNNs in the in-distribution case were formulated by Liao et al. (2020) using the PAC-Bayes theory. Specifically, they build upon the PAC-Bayes theorem in (Neyshabur et al., 2018) that pertains to homogeneous feedforward neural networks. We denote one sample as z = (X,A, y) where X ∈ X , A ∈ G, and y ∈ Y are the node features, the adjacency matrix, and the graph label respectively. Each sample is drawn from some unknown data distribution D (with support X ×G ×Y) in an i.i.d. fashion. 
Since both training and testing samples are drawn from the same distribution, this is the in-distribution setup. Following (Liao et al., 2020), we consider a margin loss for multi-class graph classifications as below, LD,γ = LD,γ(fw) = Pz∼D ( fw(X,A)[y] ≤ γ +max j ̸=y fw(X,A)[j] ) (3) where γ > 0 is the margin parameter and fw is the model (hypothesis) parameterized by weights w. Since D is unknown, we can not compute this true loss (risk). We instead minimize the empirical loss (risk) that is defined on the sampled training set S as below, LS,γ = LS,γ(fw) = 1 m ∑ z∈S 1 ( fw(Xi, Ai)[y] ≤ γ +max j ̸=y fw(Xi, Ai)[j] ) , (4) 1Here L̃ = D−1/2(D−A)D−1/2, where D is the diagonal matrix of vertex degrees and A is the adjacency matrix. where m is the number of training samples. For simplicity, we abbreviate LD,γ(fw) and LS,γ(fw) as LD,γ and LS,γ respectively from now on. Our main in-distribution result bounds the gap between true and empirical risks for GCNs, shown in the following theorem. The proof is in Appendix A.1. Theorem 3.1. For any B > 0, l > 1, let fw ∈ H : X × G → Rk be an l-layer GCN. Then with probability ≥ 1− δ over the choice of an iid size-m training set S from the data distribution D, we have for any w: LD,0 ≤ LS,γ +O √√√√B2 d l2 (h+ ln l) ∏li=1 ∥Wi∥22∑li=1 (∥Wi∥2F /∥Wi∥22) + ln mδ γ2m (5) Here d equals to one plus the maximum node degree that can be achieved by the data distribution. l is the depth, i.e., the number of layers, of GCNs. Wi is the weight matrix of GCNs in the i-th layer. B is the radius of the minimal ℓ2 ball that contains all node features, i.e., ∀v, ∥xv∥2 ≤ B. This improves the bound in (Liao et al., 2020), which is provided below for a better comparison, LD,0 ≤ LS,γ +O √√√√B2 dl−1 l2h log(lh) ∏li=1 ∥Wi∥22∑li=1(∥Wi∥2F /∥Wi∥22) + log mlδ γ2m . (6) The proof of the theorem from (Liao et al., 2020) is an induction over the l layers, in which the spectral norm of the weights and a maximum degree term is multiplied at each step. We observe that it is possible to avoid passing the maximum degree term via a refined argument. This leads to a tightening of one of the main inequalities used in the induction proof, thus in turn resulting in substantial improvements to the overall bound. As can be seen above, we reduce the exponential term dl−1 to a linear term d, which is a significant improvement for graphs even with small node degrees. 4 TOWARDS DEVELOPING A THEORY FOR SIZE GENERALIZATON In this section, we develop an out-of-distribution (OOD) generalization theory for GNNs. Since we adopt a statistical learning viewpoint, there must necessarily be some assumptions relating the training and testing graphs (otherwise the No-Free Lunch theorem applies). There is a tradeoff between assumptions that are practically relevant, and those for which rigorous guarantees are provable. We have chosen assumptions that we believe strike a balance between those objectives, at least for applications like social networks. Size Generalization Assumptions. We consider the following setup. First, we assume that there exists an extremely large graph G like the user network in Twitter so that one needs to sample subgraphs (e.g., via random walks) for training and testing machine learning models. This is akin to the practical setups of (Grover & Leskovec, 2016; Hamilton et al., 2017). To generate training and testing subgraphs, we run random walks of length N and M respectively on this single large graph, where M ≫ N , and collect the subgraphs induced by these walks. 
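A minimal sketch of this random-walk-induced subgraph sampling is given below; it is illustrative only, the stand-in "large graph" generator and the function name are assumptions rather than the authors' code, and the walk lengths N = 10 and M = 50 are the values used later for the real-world setting.

```python
# Sketch of sampling subgraphs induced by random walks on a single large graph.
import random
import networkx as nx

def random_walk_induced_subgraph(G: nx.Graph, walk_length: int, rng: random.Random) -> nx.Graph:
    node = rng.choice(list(G.nodes()))               # initial node chosen uniformly at random
    visited = [node]
    for _ in range(walk_length - 1):
        node = rng.choice(list(G.neighbors(node)))   # uniform step to a neighbor
        visited.append(node)
    return G.subgraph(set(visited)).copy()           # subgraph induced by the visited nodes

rng = random.Random(0)
G = nx.connected_watts_strogatz_graph(2000, k=8, p=0.05, seed=0)  # assumed stand-in for the large graph
train_graphs = [random_walk_induced_subgraph(G, 10, rng) for _ in range(100)]  # length-N walks
test_graphs = [random_walk_induced_subgraph(G, 50, rng) for _ in range(100)]   # length-M walks
```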
GNNs are then trained on the subgraphs induced by the shorter (length-N ) walks. In testing, we assume a procedure where a length-M random walk induced subgraph is sampled from the large subgraph. Random walks are initiated by choosing an initial node uniformly at random from all the nodes in the graph, and at each step there is an equal probability of selecting any of the current node’s neighbors. This is an interesting OOD problem where training and testing graphs come from different distributions determined by the underlying large graph and the random walk sampling with specific length. We consider the graph classification problem and assume that the graph label is determined by the majority of node labels within the graph, which is reasonable for many applications that involve homophilic graphs. For the node labeling, we assume it is binary but have no assumptions on how labels are generated. Crucially, we assume nothing about the underlying large graph. Therefore, our setup has advantages over some OOD setups in the literature where a generative model of graphs and labels is explicitly assumed. Relation with In-Distribution Result. We know the relationship between true error defined on the unknown data distribution D and empirical error defined on the size-m training set S. Specifically, for any GCN f , with probability at least 1− δ, we have a general bound as follows, LD,0 ≤ LS,γ +A(f, δ,m), (7) where we abbreviate the bound as A(f, δ,m) and omit specific parameters like maximum node degree d. In the size generalization problem, we use random walks with lengths N and M for collecting training and testing subgraphs (data) respectively. We are interested in proving a statement of the following form: for any GCN f , we have with probability at least 1− δ, LDM ,0 ≤ LSN ,γ + B(f, δ,m,M,N). (8) The key detail is that DM is the distribution of subgraphs induced by random walks with length M and SN is the training set of subgraphs induced by random walks with length N . Comparing these two losses is the essence of our OOD result. The final term B(f, δ,m,M,N) is a general bound involving these parameters. Based on the in-distribution result like in Theorem 3.1, we can similarly obtain, LDN ,0 ≤ LSN ,γ +AN (f, δ,m), (9) where DN is the distribution of subgraphs induced by random walks with length N and AN is the general bound. The key question boils down to: what is the relationship between LDN ,0 to LDM ,0? This question will be answered in the following sections. 4.1 A PROBABILITY BOUND FOR PARTITION CROSSES The above size generalization problem involves the distributions of random-walk-induced subgraphs from a large graph G with two lengths: N for training and M for testing. Also, M is much larger than N . Before we state our results, we would like to explain the simple intuition that motivates our theory: If the random walk always stays within the same partition, then the graph label of the random-walk-induced subgraph can be well predicted, no matter how long the random walk is. Here a partition means the subset of nodes with the same node label. The goal of this section is to find bounds on M for which we can provide OOD guarantees. We begin by considering a special labelling. Special Node Labeling: Sparsest Cut. A set S that minimizes ϕ(S) (and has |S| ≤ |V |/2) is called a sparsest cut. For simplicity assume that S is unique. 
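As an assumed toy example (not from the paper) of the quantities involved, one can contrast the two topologies of Figure 1: a barbell-like graph, whose sparsest cut is crossed by a single edge and whose λ2 is tiny, versus an expander-like regular graph, whose λ2 is large; small λ2 is the regime in which a labelling aligned with the cut is homophilic.

```python
# Toy comparison of cut sparsity and lambda_2 for a barbell-like graph vs an expander-like graph.
import numpy as np
import networkx as nx

def lambda2(G: nx.Graph) -> float:
    L = nx.normalized_laplacian_matrix(G).toarray()
    return float(np.sort(np.linalg.eigvalsh(L))[1])

barbell = nx.barbell_graph(20, 0)                   # two 20-cliques joined by a single edge
expander = nx.random_regular_graph(6, 40, seed=0)   # expander-like 6-regular graph

S = set(range(20))                                  # one clique as the candidate cut
cut_edges = sum(1 for u, v in barbell.edges() if (u in S) != (v in S))
print("barbell : |E(S,S_bar)|/|S| =", cut_edges / len(S), ", lambda_2 =", round(lambda2(barbell), 4))
print("expander: lambda_2 =", round(lambda2(expander), 4))
```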
Using Cheeger’s inequality, we first prove the following probability bounds related to this sampling procedure, thereby identifying the length M for which a random walk is likely to stay within the sparsest cut for d-regular graphs. The theorems are as follows. Theorem 4.1. Let UM = [u1, u2, . . . , uM ] be a length-M random walk over a connected, d-regular graph G, with u1 chosen from the stationary distribution of the nodes of G. If M ≤ d/(25/2 √ λ2), then the probability that UM crosses the sparsest-cut partition at least once is under 1/2. Here crossing the sparsest-cut partition S means that there exists an edge (u, v) of the random walk satisfies u ∈ S and v ∈ S̄. λ2 is the second-smallest eigenvalue of the normalized Laplacian. We can easily generalize the previous theorem to an arbitrary probability δ > 0 as below. Corollary 4.1.1. If M ≤ (δd)/23/2 √ λ2, the probability of the above random walk UM crossing over the sparsest-cut partition at least once is at most δ. General Node Labeling. Theorem 4.1 is restrictive in that it requires the partition S to be the sparsest cut. We now modify the proof to yield a quantity that can work for any node labelling f . Specifically, let φ be any boolean (i.e., {0, 1}-valued) labelling on the vertices of the graph. Let the positive node labelling of φ be S = {v ∈ V (G) : φ(v) = 1}. We are interested in bounding the probability that a random walk of length M includes an edge that crosses the positive node labelling S, i.e., an edge (u, v) satisfies u ∈ S and v ∈ S̄. Theorem 4.2. Let φ be a boolean labelling on the nodes of a connected, d-regular graph G with positive node labelling S (0-1 valued vector with φ[i] = 1 if vi ∈ S). Let UM = [u1, u2, . . . , uM ] be a length-M random walk over G, with u1 chosen from the stationary distribution of the nodes of G. Let Xi be the indicator variable of the event that the i-th edge of UM crosses S, i.e., Xi = 1 [ ui ∈ S, ui+1 ∈ S̄ ] and Yk = ∑k i=1 Xi is the number of times that UM crosses S in the first k steps. Let φ ′ = φ− 1(|S|/|V |) and α = φ′⊤Lφ′/∥φ′∥22. The conclusion is that: if M ≤ d 25/2 √ α then Pr [YM ≥ 1] ≤ 1 2 . Corollary 4.2.1. If M ≤ (δd)/23/2 √ α, the probability of the above random walk UM at least crosses over the positive node labelling of f once is at most δ, i.e., Pr [YM ≥ 1] ≤ δ. The formula for α arises from an alternative formulation of Cheeger’s inequality which expresses λ2 using a Rayleigh quotient (Spielman, 2015), in which y may be viewed as a real-valued labelling on the vertices. λ2 = min y⊥d (y⊤Ly)/(y⊤Dy) 4.2 SIZE GENERALIZATION ERROR Recall that, in the size generalization setup, we first train a GNN model f on subgraphs induced by many length-N random walks on G. Then during testing, given a large testing subgraph GM induced by a lengthM random walk on G, we sample a subgraph GN via a length-N random walk on GM and feed it to f to compute the empirical (classification) error for GM . If all nodes of GM are within a single positive node labelling, then all of their labels are the same. Therefore, no matter which subgraph GN is sampled, the generalization error (i.e., the probability of making a wrong prediction) for GM should be the same as the one for GN . Based on this reasoning, we have the following result. Theorem 4.3 (Size Generalization Error). 
For any δ ∈ [0, 1), if we restrict M , the size of the large random walk-induced subgraph, such that M ≤ (δd)/23/2 √ α, then the in-distribution generalization error LDM ,0, i.e., the probability of a wrong prediction on length-M -random-walk induced subgraphs, satisfies LDM ,0 ≤ δ + LDN ,0. (10) where LDN ,0 is the in-distribution generalization error of f on length-N random-walk-induced subgraphs. Note that this theorem explicitly constrains M , whereas the only condition on N is that LDN ,0 is small. Proof. Observe that, for any events F and E, we have Pr [F ] ≤ Pr [E] + Pr [ F |Ē ] . Let E be the event that a length-M random walk crosses the positive node labelling of the ground truth labels, and let F be the event that we make a wrong prediction on the induced subgraph GM . Theorem 3.1 bounds the second term, Pr [ F |Ē ] , because the generalization error on GM is the same as the one on GN (subgraphs induced by length-N random walks) when GM does not cross the positive node labelling. Corollary 4.2.1 bounds the first term. Substituting the values from the previous two theorems yields the claimed inequality. We already know the bound of the in-distribution generalization error LDN ,0 due to Theorem 3.1 — let us call this quantity δ̂. Using this we can obtain the final result for GCNs under our OOD setup. Theorem 4.3 simply states that, if the length M ≤ (δd)/23/2 √ α, with probability at least 1− δ̂, the OOD generalization error on large subgraphs (induced by length-M random walks) is the sum of error δ and the in-distribution generalization bound on small subgraphs (induced by length-N random walks). 5 EXPERIMENTS 5.1 IN-DISTRIBUTION: NUMERICAL PAC-BAYES BOUND COMPUTATION We conduct multi-class graph classification experiments to compare our improved bound to the original PAC-Bayes bound in (Liao et al., 2020). We use the same GCN model, adopt the same datasets, i.e., 6 synthetic datasets obtained from random graph models and 3 real world graph datasets used in (Yanardag & Vishwanathan, 2015), and follow the same experimental protocol. After training a GCN on each dataset, we compute the theoretical bounds using final model. The numerical comparisons of log bound values are shown in Figure 2. It is clear that our new bounds are significantly tighter and reduce the bound values by several orders of magnitude. The gap is further increased as the depth increases. The tables of bound values and the specific equations to compute them are provided in Appendix B.1. 5.2 OUT-OF-DISTRIBUTION: EFFICACY OF SIZE GENERALIZATION We performed OOD experiments to validate the values of the upper bound on the size of large subgraphs M that was set in Theorem 4.1 and its related theorems, for synthetic graphs. We also performed experiments on synthetic graphs that were non-homophilic with the same values of M and N , to examine size generalization in this case. We also examined the general feasibility of size generalization in real-world social network data. For synthetic graphs, we calculated this theoretical value for the upper bound, and selected large subgraph size M and small subgraph size N ≪ M accordingly. For the real-world case, we chose constant values of N = 10 and M = 50. For each subgraph, we assign as its graph label the label observed most often among its nodes. 
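One possible reading of how the upper bound on M mentioned above can be evaluated for a labelled graph is sketched below: center the 0/1 labelling, compute the degree-normalized Rayleigh quotient α = φ'⊤Lφ'/φ'⊤Dφ' as in Appendix B.2.1, and apply Corollary 4.2.1. Using the average degree in place of d for non-regular graphs is an assumption (the text does not specify this), and the example SBM parameters and function name are likewise assumptions, not the authors' code.

```python
# Sketch of evaluating the Corollary 4.2.1 cap on M for a labelled, possibly non-regular graph.
import numpy as np
import networkx as nx

def max_test_walk_length(G: nx.Graph, labels: np.ndarray, delta: float = 0.75) -> float:
    nodes = list(G.nodes())
    phi = labels.astype(float)
    phi_c = phi - phi.mean()                              # orthogonalize against the all-ones vector
    L = nx.laplacian_matrix(G, nodelist=nodes).toarray()
    degrees = np.array([G.degree(v) for v in nodes], dtype=float)
    alpha = (phi_c @ L @ phi_c) / (phi_c @ np.diag(degrees) @ phi_c)
    d = degrees.mean()                                    # assumed stand-in for d when G is not regular
    return delta * d / (2 ** 1.5 * np.sqrt(alpha))        # M <= delta * d / (2^{3/2} sqrt(alpha))

G = nx.stochastic_block_model([100, 100], [[0.08, 0.002], [0.002, 0.08]], seed=0)
labels = np.array([0] * 100 + [1] * 100)                  # label = block membership
print("upper bound on M:", round(max_test_walk_length(G, labels), 1))
```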
After sampling datasets of subgraphs of sizes M and N , we train GCN models on the dataset with N -length random walks and measure their performance on the training set, the validation set (a smaller data set generated the same way as the train set), and the testing set (a set of subgraphs inuced by length-M random walks). On the test set we record both the performance when inputting the whole large subgraph (Test error), as well as when performing the sampling procedure used for Theorem 4.3, in which we sample an induced subgraph from an N -length random walk for each data item (Sampling-test error). Synthetic Graphs. We adopt the CSBMs (Deshpande et al., 2018) to generate graphs that exhibit the homophily property. We use two blocks with much higher probability of connections inside the same block than between blocks, which leads to barbell-like graphs. In the non-homophilic case, we set these probabilities to be equal. We generate binary node labellings via the sparsest cut. CSBMs generate node features via a Gaussian mixture where individual choices of the component are determined by the node label. Real-world Graphs. We used social network data for Twitch streamers from (Rozemberczki et al., 2019). Each node is a streamer (Twitch user), and nodes are connected to mutual friendships. Node features are 3,169 different binary indicators of a wide array of attributes, including games liked, location, etc. Each node is labelled with a boolean value of whether the livestreamer has indicated that they use explicit language. In all cases, the GCN model achieves OOD test accuracy on large-subgraph that was comparable to ID accuracy on small-subgraph if not outright better. This is even the case when some of the constraints are violated: no d-regularity constraint was imposed for any of the datasets, and performance was still good for the test error which did not involve further subgraph sampling. This indicates that the theory is promising in practice for more general forms of size generalization. The accuracy on the train set, test set with subgraph sampling, and unaltered test set are shown in Figure 2, and the numerical values are in Appendix B.2. For many cases including all real-world cases, the test accuracy was actually higher than the training accuracy. This could potentially indicate that in the cases where size generalization can be guaranteed to work well, the GCN model benefits significantly from extra node information. It is also possible that because of the sampling procedure, there is overlap in nodes between the training and test sets, since they come from random-walk sampling procedures that naively select a uniformly random node as the initial node. 6 DISCUSSION In this work we have expanded the theoretical understanding of the generalizations of GNNs in both indistribution and out-of-distribution settings, deriving new theoretical guarantees in each setting. The results for in-distribution learning improve upon the state-of-the art PAC-Bayes bounds in (Liao et al., 2020), and the results for out-of-distribution learning provide insight into a practical learning setting under which GNNs are guaranteed to perform effective size generalization. Future directions for the in-distribution understanding would involve lowering the dependencies of other variables like the spectral norm of weights. Generalizing the results to other problems like node classification would also be interesting. 
In the out-of-distribution case, a number of different observations in experimentation indicate that the theory can still be very much expanded. We have identified cases in real-world datasets where well beyond the bounds on size set forth in the theory, and in all experiments the d-regularity assumption is violated, yet GCN size generalization is still effective in these cases. Expansions to the theory, including generalizing to non-d-regular graphs, can be explored to explain cases like these. A MATHEMATICAL PROOFS A.1 PROOF OF THEOREM 3.1 The proof is as follows, and makes up the remainder of the chapter. A.1.1 IMPROVEMENT ON DEGREE DEPENDENCY In (Liao et al., 2020), a generalization bound is attained on graph convolutional networks; this bound is dependent on a bound on the maximum perturbation of the function value when a perturbation U is applied to the weights W , presented in that paper’s Lemma 3.1. The bound is as follows |fw+u(X,A)− fw(X,A)|2 ≤ eBd l−1 2 ( l∏ i=1 ∥Wi∥2 ) l∑ k=1 ∥Uk∥2 ∥Wk∥2 (11) The primary goal of this set of improvements is to reduce the factor of d l−1 2 . For each layer, let Hi ∈ R|V |×h be the matrix containing the hidden embeddings of all of the nodes in its rows, with h being the hidden dimension. In the process of the proof of Theorem 3.1, we are able to show the following: Φj = max i |Hj [i, :]|2 ≤ d j 2B j∏ i=1 ∥Wi∥2 (12) Ψj = max i |H ′j [i, :]−Hj [i, :]|2 ≤ Bd j 2 ( j∏ i=1 ∥Wi∥2 ) j∑ k=1 ∥Uk∥2 ∥Wk∥2 ( 1 + 1 l )j−k (13) |∆l|2 = ∣∣∣∣ 1n1nH ′l−1(Wl + Ul)− 1n1nHl−1Wl ∣∣∣∣ 2 ≤ eBd l−1 2 ( l∏ i=1 ∥Wi∥2 )[ l∑ k=1 ∥Uk∥2 ∥Wk∥2 ] (11) We begin to simplify these bounds by removing the dependency on d j 2 , replacing it instead with a fixed power of d1/2 that remains constant for every layer, and thus in the final result of Equation 11 as well. Theorem A.1. For all 1 ≤ j ≤ l − 1, we have: Φj ≤ √ d B k∏ i=1 ∥Wi∥2 (14) Ψj ≤ ( 1 + ( 1 + 1 l )j) B √ d ( j∏ i=1 ∥Wi∥2 ) (15) Finally, |fw+u(X,A)− fw(X,A)|2 = |∆l|2 ≤ ( e+ 1 + 2 l ) B √ d l∏ i=1 ∥Wi∥2 (16) The proof follows from a lemma about the 2-norm of any node representation at any layer: Lemma A.1.1. We have, for all k ∈ [n] and for j ∈ [l]: |Hj [u, :]|2 ≤ B √ deg(u) ( j∏ i=1 ∥Wi∥2 ) (17) Proof. We prove this by induction. By definition |H0[u, :]|2 ≤ B and thus |H0[u]| ≤ √ deg(u)B 0∏ k=1 ∥Wk∥2. We assume that for all u, we have Hj−1[u, :] ≤ √ deg(u)B j−1∏ k=1 ∥Wi∥2. From these statements we are able to deduce |Hj [u, :]| ≤ ∑ v∈Nu L̃[u, v]|Hj−1[v, :]|2∥Wj∥2 ≤ ∑ v∈Nu 1√ deg(u)deg(v) [√ deg(v)B j−1∏ k=1 ∥Wk∥2 ] ∥Wj∥2 = ∑ v∈Nu 1√ deg(u) B ( j−1∏ k=1 ∥Wk∥2 ) ∥Wj∥2 = deg(u)√ deg(u) B j∏ k=1 ∥Wk∥2 = √ deg(u)B j∏ k=1 ∥Wk∥2 (18) In these inequalities we use the fact that L̃[i, j] = (A + I)ij/ √ deg(i)deg(j), and we assume the simple case where there are unweighted edges so that (A+ I)ij is 1 if and only if nodes i and j are connected and 0 otherwise. By Lemma A.1.1, we have that Φj = maxi |Hj [i, :]|2 ≤ √ dB ∏j i=1 ∥Wi∥2, which is exactly the result of equation (14). Claim A.1. For all v ∈ [n], |∆j [v, :]|2 ≤ B √ deg(v) ( 1 + 1l )j (∏j i=1 ∥Wi∥ )(∑j i=1 ∥Ui∥ ∥Wi∥ ) Proof. Proof: We use induction assuming this is true for ∆j−1. 
We then have |∆j [v, :]|2 ≤ ∑ u∈N (v) L̃[v, u]|H ′j−1[u, :]−Hj−1[u, :]|2∥Wj + Uj∥2 + ∑ u∈N (v) L̃[v, u]|Hj−1[u, :]|2∥Uj∥2 ≤ [ B ( 1 + 1 l )j−1(j−1∏ i=1 ∥Wi∥ )( j−1∑ i=1 ∥Ui∥2 ∥Wi∥2 ) ∥Wj + Uj∥+B∥Uj∥ j−1∏ i=1 ∥Wi∥ ] (19) ∑ u∈N (v) L̃[v, u] √ deg(u) = B √ deg(v) j−1∏ i=1 ∥Wi∥ [ ∥Wj + Uj∥ ( 1 + 1 l )j−1(j−1∑ i=1 ∥Ui∥2 ∥Wi∥2 ) + ∥Uj∥ ] = B √ deg(v) j∏ i=1 ∥Wi∥ [ ∥Wj + Uj∥2 ∥Wj∥2 ( 1 + 1 l )j−1(j−1∑ i=1 ∥Ui∥2 ∥Wi∥2 ) + ∥Uj∥2 ∥Wj∥2 ] ≤ B √ deg(v) j∏ i=1 ∥Wi∥ [( 1 + 1 l )j (j−1∑ i=1 ∥Ui∥2 ∥Wi∥2 ) + ∥Uj∥2 ∥Wj∥2 ] ≤ B √ deg(v) j∏ i=1 ∥Wi∥ ( 1 + 1 l )j ( j∑ i=1 ∥Ui∥2 ∥Wi∥2 ) (20) ∆l has a slightly different formulation but it has a very similar bound: |∆l|2 = ∣∣∣∣ 1n1n ( L̃H ′l−1(Wl + Ul)− 1 n 1nL̃Hl−1(Wl) )∣∣∣∣ 2 = 1 n ∣∣∣1nL̃(H ′l−1 −Hl−1)(Wl + Ul) + 1nL̃Hl−1(Ul)∣∣∣ 2 ≤ 1 n n∑ i=1 |∆l−1[i, :]|2∥Wl + Ul∥2 + 1 n n∑ i=1 |Hl−1[i, :]|2∥Ul∥2 ≤ B √ d l−1∏ i=1 ∥Wi∥ ( 1 + 1 l )l−1( l−1∑ i=1 ∥Ui∥2 ∥Wi∥2 ) ∥Wl + Ul∥ +B √ d∥Ul∥2 l−1∏ i=1 ∥Wi∥2 ≤ B √ d l∏ i=1 ∥Wi∥ [( 1 + 1 l )l( l−1∑ i=1 ∥Ui∥ ∥Wi∥ ) + ∥Ul∥ ∥Wl∥ ] ≤ B √ d l∏ i=1 ∥Wi∥ ( 1 + 1 l )l( l∑ i=1 ∥Ui∥ ∥Wi∥ ) ≤ eB √ d l∏ i=1 ∥Wi∥ ( l∑ i=1 ∥Ui∥ ∥Wi∥ ) (21) From this we have proven a tighter bound on the final output of the GNN under perturbation, which we will use to calculate probabilistic and generalization bounds. A.1.2 IMPROVEMENT ON PROBABILISTIC BOUNDS USING RANDOM MATRIX THEORY In (Liao et al., 2020), for all i ∈ [l], with l being the number of layers, the prior and the distribution of the perturbations Ui ∈ Rdi+1×di ,, where all hidden dimensions di are upper-bounded by a value h, were generated by a normal distribution N (0, σ2I), and give probabilistic bounds on the operator norms ∥Ui∥ as P (∀i, ∥Ui∥ ≤ t) with probability greater than 1 − 2lh exp−t2/2hσ2. We improve these bounds using theorems on random matrices from work on high-dimensional probability, namely (Vershynin, 2018). Theorem A.2 (Theorem 4.4.5 in (Vershynin, 2018)). Let A be a matrix in Rm×n, where the entries Aij are independent, mean-zero, sub-Gaussian random variables. Then, for all t > 0 we have ∥A∥ ≤ CK( √ m+ √ n+ t) with probability ≥ 1− exp(−t2), where K = maxi,j ∥Aij∥ψ2 and C is some constant. In the above theorem the norm ∥X∥ψ2 is defined as inf{t : E[exp(X2/t2)] ≤ 2}. In Example 2.5.8 in (V ershynin, 2018), it is shown that if X ∼ N (0, σ2) then it has ∥X∥ψ2 ≤ Cσ. Corollary A.2.1. If U ∈ Rm×n is a random matrix generated with the distribution N (0, σ2I) (i.e. all entries are independent and identically distributed Gaussian random variables), then we have ∥U∥ ≤ σ( √ m+ √ n+ t) with probability at least 1− 2 exp(−t2). With a change of variable, we are able to calculate the following: P (∀i.∥Ui∥2 ≤ t) ≥ 1− P (∃i, ∥Ui∥ > t) ≥ 1− l∑ i=1 P (∥Ui∥ > t) ≥ 1− 2l exp (( t Cσ − 2 √ h )2) And by setting the right-hand side to 1/2, we obtain: t = Cσ(2 √ h+ √ ln(4l)) Using the above equation combined with our bound we are able to get |fw+u(X,A)− fw(X,A)|2 ≤ eB √ dl ( l∏ i=1 ∥Wi∥2 ) l∑ k=1 ∥Uk∥2 ∥Wk∥2 = eB √ dβll l∑ k=1 ∥Uk∥2 β ≤ eB √ dβl−1l(σ(2 √ h+ √ ln(4l))) ≤ e2B √ dβ̃l−1(σ(2 √ h+ √ ln(4l))) ≤ γ 4 (22) Here β̃ is an estimated of β such that |β − β̃| ≤ β/l that can be generated a priori; we discuss this in a later subsection. We can set σ = γ 4e2Bβ̃ √ dC ( 2 √ h+ √ ln(4l) ) to satisfy the final inequality. 
From this we can calculate the KL-divergence between the posterior and the prior: KL(Q∥P ) = |w| 2 2 2σ2 = 16e4B2dl2β2(l−1) ( 2 √ h+ √ ln(4l) )2 2γ2 l∑ i=1 ∥Wi∥F ≤ O ( B2dβ2ll2(h+ ln(l)) γ2 l∑ i=1 ∥Wi∥2F β2 ) ≤ O ( B2dl2 (h+ ln(l)) ∏l i=1 ∥Wi∥2 γ2 l∑ i=1 ∥Wi∥2F ∥Wi∥2 ) (23) From this we are able to calculate the generalization bound and thus prove the theorem. LD,0 ≤ LS,γ +O √√√√B2dl2(h+ ln(l))∏li=1 ∥Wi∥22∑li=1 ∥Wi∥2F∥Wi∥22 + ln mδ γ2m (24) A.1.3 SELECTING PARAMETER β̃ The prior normal distribution’s variance parameter σ2 is dependent on β, but β cannot be used in its calculation because that information is only known after model training. Instead, we can select a parameter β̂ such that |β − β̂| ≤ 1l β and thus 1 eβ l−1 ≤ β̂l−1 ≤ eβl−1 (as per equation 33 in (Liao et al., 2020)). As in (Liao et al., 2020) we only have to consider values of β in the range ( γ 2B √ d )1/l ≤ β ≤ ( γ √ m 2B √ d )1/l as otherwise the generalization bound holds trivially because LD,0 ≤ 1 by definition. If we consider values of β̂ that cover this interval then by union bound we are still able to get a high probability; the covering C needs to have |C| = l2 (m 1 2l − 1). A.2 PROOFS OF OUT-OF-DISTRIBUTION PROBABILITY BOUNDS A.2.1 PROOF OF THEOREM 4.1 Proof. Because u1 is chosen from the stationary distribution (uniform over vertices, because G is connected and d-regular), then for all i ≥ 1 the distribution for ui, ui+1 follows the distribution Unif[E], where E is the edge set of the graph. Let S be the sparsest-cut partition of G. Let Xi be the indicator of the event that the vertex pair is in the set of edges crossing the partition, namely 1{(ui, ui+1) ∈ E(S, S̄)}. By linearity of expectation, this means that E[Xi] = |E(S, S̄)|/|E|. Furthermore, let Yk be the cumulative number of edges crossing the partition along the first k steps of the random walk. This is expressed nicely as Yk = ∑k i=1 Xi. Thus E[Yk] = k |E(S,S̄)| |E| . Applying Markov’s inequality, we get Pr[Yk ≥ tk|E(S, S̄)|/|E|] ≤ 1/t. Suppose we wish to examine under what conditions we can ensure that we do not cross over the partition at all in M steps, i.e. Pr[YM ≥ 1] ≤ 1/2. From the inequality above, we are able to get that Pr [ YM ≥ 2M |E(S, S̄)| |E| ] ≤ 1 2 just by setting k = M and t = 2. We then use the following basic fact: if we have an inequality of the form Pr[Z ≥ z] ≤ 12 , then Pr[Z ≥ z ′] ≤ 12 for any z ′ ≥ z. Let E(S) denote the set of edges connected to any vertex in S. Because |E(S)| ≤ |E|, then we have |E(S, S̄)|/|E| ≤ |E(S, S̄)|/|E(S)|. Furthermore, since we assume a connected graph, |E(S)| ≥ (d/2)|S|, and thus |E(S, S̄)|/|E(S)| ≤ |E(S, S̄)|/[(d/2)|S|]. 2 Thus using the fact above we can deduce Pr [ YM ≥ 2M |E(S, S̄)| (d/2)|S| ] ≤ 1 2 Note that |E(S, S̄)|/|S| is the conductance of the graph ϕ(G), because S was defined to be the sparsest-cut partition of G. Thus we can apply the fact again with Cheeger’s inequality to get Pr [ YM ≥ 2M(2/d) √ 2λ2 ] ≤ 1 2 And since we are interested in Pr[YM ≥ 1], we can thus set 2M √ 2λ2 ≤ 1 to get a necessary condition for M , from which we achieve M ≤ d 25/2 √ λ2 This completes the proof. 2It is important to note that this specific dependency of |E(S)| on d requires G to be a d-regular graph. If the theorem is to be expanded to more general cases, one may use the simple inequality |E(S)| ≥ |S|. A.2.2 PROOF OF THEOREM 4.2 Proof. 
The quantity φ′ is a transformation of φ that retains all the information contained in φ while still being orthogonal to the all-ones vector 1, so that we can apply Cheeger’s inequality. This orthogonalization is rather standard and can be found in (Spielman, 2015). Let s = |S|/|V (G)|. Note that s ∈ [0, 1], and without loss of generality we can assume that s ≤ 1/2. We observe that the vth coordinate of the vector φ′ corresponds to the mapping φ′(v) = { 1− s v ∈ S −s v /∈ S (25) This ensures that φ′ is orthogonal to 1, as φ′⊤1 = n∑ i=1 φ′(vi) = |S| ( 1− |S| |V | ) + (|V | − |S|) ( − |S| |V | ) = |S| − |V | ( |S| |V | ) = 0. We then note that ∥φ′∥22 = ∑n i=1 φ(v) 2 is equal to s(1− s)|V |, and we can infer |S|/2 ≤ ∥φ′∥22 ≤ |S|; the first inequality holds since s ≤ 1/2. The number of edges |E(S, S̄)| crossing the labelling-partition is equal to φ′⊤Lφ′, as φ′⊤Lφ′ = ∑ (u,v)∈E ((φ(u)− s)− (φ(v)− s))2 = |E(S, S̄)| where L is the Laplacian matrix of G. Thus the quantity 2M |E(S,S̄)||E(S)| ≤ 2M φ′⊤Lφ′ |E(S)| ≤ 2M φ′⊤Lφ′ (d/2)|S| . We are able to get the second inequality because we know |E(S)| ≥ (d/2)|S|. Because we know that |S| ≥ ∥φ′∥2, we can then upper bound this further by 2M φ ′TLφ′ (d/2)∥φ′∥22 . Substituting this quantity in the proof of Theorem 4.1, we achieve the desired bound for M . B EXPERIMENTAL METHODOLOGY AND RESULTS B.1 IN-DISTRIBUTION EXPERIMENTS The datasets used are a combination of synthetic (Erdos-Renyi and Stochastic Block Model) and real-world graphs (IMDBBINARY and IMDBMULTI of data from the Internet Movie Database, and COLLAB, a dataset of academic collaborations), and a bioinformatics dataset, PROTEINS, from (Yanardag & Vishwanathan, 2015). Two different GCN network depths of of l = 4 and l = 6 were used. We use the following formulae for the generalization bound from (Liao et al., 2020) and our new bound, using an explicit constant factor of 42 from (Liao et al., 2020). GenGap(B, d, l, {Wi}li=1) = √√√√ 42 · B2dl−1l2 ln(4lh) ∏l i=1 ∥Wi∥22 ∑l i=1 ∥Wi∥2F ∥Wi∥22 γ2m (26) Similarly, the formula used for the new PAC-Bayes generalization bound is GenGap(B, d, l, {Wi}li=1) = √√√√ 42 · B2dl2(h+ ln(l)) ∏l i=1 ∥Wi∥22 ∑l i=1 ∥Wi∥2F ∥Wi∥22 γ2m (27) We remove an additive O(logm) term in the numerator within the square root after validating that it was numerically negligible. Tables below are for calculated bounds in the case of 4 layers (Table 1) and 6 layers (Table 2). B.2 OUT-OF-DISTRIBUTION EXPERIMENTS B.2.1 METHODOLOGY Experiments were performed to measure the effectiveness size generalization of GCN models when applied to the size generalization learning case described in Section 4, where the learning task is classifying the most common node label in sub-communities of a large underlying network. For each of the synthetic graphs, we calculate an upper bound for M set in the out-of-distribution inequalities we have derived. Since the graphs examined are all not d-regular, we calculate a value of α as φ ⊤Lφ φ⊤Dφ , where L is the graph Laplacian matrix and D is the diagonal degree matrix, to apply to the formula set in Theorem 4.2. Furthermore, we use a more permissive value of δ = 0.75. Similar upper bounds for M were computed for the real-world cases, but the values were too small for experimental use. In this case, we just set N = 10 and M = 50 to attempt to gain insight about the size generalization task’s general feasibility in real-world cases. All experiments were performed with use of the Adam optimizer (Kingma & Ba, 2015), with a constant learning rate 0.01. 
Models were trained for 10 epochs, with randomly sampled batches of size 32. The models used are different parameterizations of the Graph Convolutional Network as implemented by the library pytorch-geometric (Fey & Lenssen, 2019). For synthetic experiments, which used smaller graphs with generally smaller degree, the parameterization was 3 layers with a hidden dimension of 5, and for the real-world data case, the parameterization was 10 layers with a hidden dimension of 32. For each underlying graph, we generate three train/validation sets (of subgraphs induced by length-N random walks) and test sets (of subgraphs induced by length-M random walks) and we record the loss and accuracy as the average of the three runs. B.2.2 SYNTHETIC GRAPH EXPERIMENTS A large underlying synthetic graph was generated using the stochastic block model, with some adjustment to ensure that the randomly-generated graph had a single connected component. By controlling the intra- and inter-block connection probability values, we are able to control the homophily of the generated graph, which we validate by measuring the value of λ2, as well as by calculating the sparsest cut via “Cheeger rounding” (Spielman, 2015) and subsequently the conductance of the graph with respect to this partition. In the experiments, we generated a graph with approximately 2000 nodes, with the in-block connection probability set to 8/1000 and the inter-block connection probability set to 6/10^5. Node features are generated from a mixture of multivariate Gaussian distributions with dimension 3, mean (−0.5,−0.5,−0.5) for one block, and mean (0.5, 0.5, 0.5) for the other; the covariance matrix is diagonal (each coordinate is independent) with variance either 2, 4, or 8. Experiments were also performed on non-homophilic synthetic graphs. Like the homophilic synthetic graphs, they are generated with the stochastic block model with about 2000 nodes, about 1000 of each label, and with the same mixture-of-Gaussian node features. However, the parameters used to generate connections are crucially different: the probabilities of connection between nodes of the same block and nodes of different blocks are set to be equal, both being 8/1000. These settings ensure that a node’s label is independent of the labels of its neighbors, so the homophily property is not exhibited. In contrast with the results for the homophilic synthetic graphs, the non-homophilic graph results show that the out-of-distribution test accuracy is lower than the training accuracy. This further illustrates the association between homophily and size generalization. B.2.3 REAL-WORLD GRAPH EXPERIMENTS Since the node features are indicators, we encoded the node feature information by using the positional encoding mechanism introduced in the Transformer model (Vaswani et al., 2017). For each node, each of its integer indicators was encoded via positional embedding and aggregated via sum.
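As a concrete illustration of this feature construction, the following is a minimal sketch of how a node's active indicators could be embedded and summed; the function names are ours, and the sinusoidal encoding follows the standard Transformer formulation.

import numpy as np

def sinusoidal_encoding(position, dim):
    """Standard Transformer positional encoding for a single integer position."""
    enc = np.zeros(dim)
    i = np.arange(dim // 2)
    freqs = 1.0 / (10000 ** (2 * i / dim))
    enc[0::2] = np.sin(position * freqs)
    enc[1::2] = np.cos(position * freqs)
    return enc

def encode_node_features(indicator_ids, dim=32):
    """Encode a node's active binary indicators (given as integer ids) by
    positional-embedding each id and aggregating via sum."""
    if len(indicator_ids) == 0:
        return np.zeros(dim)
    return sum(sinusoidal_encoding(idx, dim) for idx in indicator_ids)

# e.g. a node whose active indicators are features 3, 17 and 1052
x_v = encode_node_features([3, 17, 1052], dim=32)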
1. What is the focus and contribution of the paper regarding graph neural networks? 2. What are the strengths and weaknesses of the proposed theoretical bound on generalization error? 3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? 4. What are some potential improvements or extensions that the reviewer suggests for future research?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper This paper focuses on the generalization ability of graph neural networks and derives the generalization error bound based on PAC-Bayes framework. The new theoretical bound improves the state-of-the-art result, and empirical studies show that the proposed model can help to address size generalization problem on graphs. Strengths And Weaknesses Pros: The paper is well motivated and focuses on an important and active research problem in the graph ML community. The theory results seem correct and reasonable though I didn't carefully check the proof in the appendix. The experiment results verify the theoretical argument. Cons: The analysis is built on several assumptions that may violate the practical settings, like the graph data generation assumption. More discussions and justification are needed. The proposed theory seems to require the homophily assumption of graph structures. How it behaves for heterophilic graphs? The discussed distribution shifts only cover the size variation, which is quite limited in contrast with the various distribution shift types in practice. For example, cross-domain transfer in multi-graph generalization and temporal generalization in dynamic graphs [1], subgroup generalization across majority and minority feature groups [2], motif-structure bias of spurious correlation [3], and substructure-aware distribution shift in molecular property prediction [4], etc. More discussions on how the theory in this paper could shed lights on these practical OOD learning settings can definitely help to strenghthen the paper. The experiments are only conducted on the size generalization task. And, similarly, more experiments to cover the more OOD types, such as the above-mentioned settings, which can be more challenging and closer to the real cases could increase the diversity and strengthen the contributions. [1] Handling distribution shifts on graphs: an invariance perspective, ICLR22 [2] Subgroup generalization and fairness of graph neural networks, NeurIPS21 [3] Discovering Invariant Rationales for Graph Neural Networks, ICLR22 [4] Learning Substructure Invariance for Out-of-Distribution Molecular Representations, NeurIPS22 Clarity, Quality, Novelty And Reproducibility Clarify: this paper is well written and organized Quality: the quality is overall good though I didn't carefully check the proof Novelty: the algorithmic novelty is limited especially the scope is limited in size generalization which is a particular and simple OOD setting on graphs
ICLR
Title In-distribution and Out-of-distribution Generalization for Graph Neural Networks Abstract Graph neural networks (GNNs) are models that allow learning with structured data of varying size. Despite their popularity, theoretical understanding of the generalization of GNNs is an under-explored topic. In this work, we expand the theoretical understanding of both in-distribution and out-of-distribution generalization of GNNs. Firstly, we improve upon the state-of-the-art PAC-Bayes (in-distribution) generalization bound primarily by reducing an exponential dependency on the node degree to a linear dependency. Secondly, utilizing tools from spectral graph theory, we prove some rigorous guarantees about the out-of-distribution (OOD) size generalization of GNNs, where graphs in the training set have different numbers of nodes and edges from those in the test set. To empirically verify our theoretical findings, we conduct experiments on both synthetic and real-world graph datasets. Our computed generalization gaps for the in-distribution case significantly improve the state-of-the-art PAC-Bayes results. For the OOD case, experiments on community classification tasks in large social networks show that GNNs achieve strong size generalization performance in cases guaranteed by our theory. 1 INTRODUCTION Graph neural networks (GNNs), firstly proposed in Scarselli et al. (2008), generalize artificial neural networks from processing fixed-size data to processing arbitrary graph-structured or relational data, which can vary in terms of the number of nodes, the number of edges, and so on. GNNs and their modern variants (Bronstein et al., 2017; Battaglia et al., 2018) have achieved state-of-the-art results in a wide range of application domains, including social networks (Hamilton et al., 2017), material sciences (Xie & Grossman, 2018), drug discovery (Wieder et al., 2020), autonomous driving (Liang et al., 2020), quantum chemistry (Gilmer et al., 2020), and particle physics (Shlomi et al., 2020). Despite their empirical successes, the theoretical understanding of GNNs are somewhat limited. Existing works largely focus on analyzing the expressiveness of GNNs. In particular, Xu et al. (2018) show that GNNs are as powerful as the Weisfeiler-Lehman (WL) graph isomorphism test (Weisfeiler & Leman, 1968) in distinguishing graphs. Chen et al. (2019) further demonstrate an equivalence between graph isomorphism testing and universal approximation of permutation-invariant functions. Loukas (2019) show that GNNs with certain conditions (e.g., on depth and width) are Turing universal. Chen et al. (2020) and Xu et al. (2020a) respectively examine whether GNNs can count substructures and perform algorithmic reasoning. In the vein of statistical learning theory, generalization analyses for GNNs have been developed to bound the gap between training and testing errors using VC-dimension (Vapnik & Chervonenkis, 1971), Rademacher complexity (Bartlett & Mendelson, 2002), algorithmic stability (Bousquet & Elisseeff, 2002), and PACBayes (McAllester, 2003) (a Bayesian extension of PAC learning (Valiant, 1984)). Depending on whether the problem setup is in-distribution (ID) or out-of-distribution (OOD), i.e., whether test data comes from the same distribution as training data, we categorize the literature into two groups. ID Generalization Bounds. Scarselli et al. 
(2018) provide a VC-dimension based generalization bound for GNNs whereas Verma & Zhang (2019) present the stability-based generalization analysis for singlelayer graph convolutional networks (GCNs) (Kipf & Welling, 2016). Both consider node classification and assume the node features are independent and identically-distributed (IID), which conflicts with the common relational learning setup (e.g., semi-supervised node classification) at which GNNs excel. Relying on the neural tangent kernel (NTK) approach (Jacot et al., 2018), Du et al. (2019) characterize the generalization bound of infinite-width GNNs on graph classification. Garg et al. (2020) derive the Rademacher complexity based bound for message passsing GNNs on graph classification. Lv (2021) establish results for GCNs on node classification using Rademacher complexity as well. Based on PAC-Bayes, Liao et al. (2020) obtain a tighter bound for both GCNs and message passsing GNNs on graph classification compared to (Garg et al., 2020; Scarselli et al., 2018). Subsequently, Ma et al. (2021) also leverage PAC-Bayes and show generalization guarantees of GNNs on subgroups of nodes for node classification. More recently, Li et al. (2022) study the effect of graph subsampling in the generalization of GCNs. OOD Generalization Yehudai et al. (2021) study size generalization for GNNs — this is a specific OOD setting where training and testing graphs differ in the number of nodes and edges. They show negative results that specific GNNs can perfectly fit training graphs but fails on OOD testing ones. Baranwal et al. (2021) consider specific graph generative models, i.e., the contextual stochastic block model (CSBM) (Deshpande et al., 2018), where CSBMs during training and testing are of the same means but different number of nodes, intra-, and inter-class edge probabilities. They present generalization guarantees for single-layer GCNs on binary node classification tasks. Later, Maskey et al. (2022) assume yet another class of graph generative models, i.e., graphons, where the kernel is shared across training and testing but the number of nodes and edges could vary. They obtain generalization bounds of message passing GNNs on graph classification and regression that depend on the Minkowski dimension of the node feature space. Relying on a connection of over-parameterized networks and neural tangent kernel, Xu et al. (2020b) find that taskspecific architecture/feature designs help GNNs extrapolate to OOD algorithmic tasks. Wu et al. (2022a) propose explore-to-extrapolate risk minimization framework, for which the solution is proven to provide an optimal OOD model under the invariance and heterogeneity assumptions. Yang et al. (2022) propose a two-stage model that both infers the latent environment and makes predictions to generalize to OOD data. Empirical studies suggest it works well on real-world molecule datasets. Wu et al. (2022b) study a new objective that can learn invariant and causal graph features that generalize well to OOD data empirically. All above works follow the spirit of invariant risk minimization (Arjovsky et al., 2019) and focus on designing new learning objectives. Instead, we provide generalization bound analysis from the traditional statistical learning theory perspective. Our Contributions. In this paper, we study both in-distribution and out-of-distribution generalization for GNNs. 
For in-distribution graph classification tasks, we significantly improve the previous state-of-the-art PAC-Bayes results in (Liao et al., 2020) by decreasing an exponential dependency on the maximum node degree to a linear dependency. For OOD node classification tasks, we do not assume any known graph generative models which is in sharp contrast to the existing work. We instead assume GNNs are trained and tested on subgraphs that are sampled via random walks from a single large underlying graph, as an efficient means to generate a connected subgraph. We identify interesting cases where a graph classification task is theoretically guaranteed to perform well at size generalization, and derive generalization bounds. We validate our theoretical results by conducting experiments on synthetic graphs, and also explore size generalization on a collection of real-world social network datasets. In the in-distribution case, we observe an improvement of several orders of magnitude in numerical calculations of the generalization bound. In the out-of-distribution case, we validate that, in cases where the theory guarantees that size generalization works well, the prediction accuracy on large subgraphs is always comparable to the accuracy on small subgraphs, and in many cases is actually better. (a) An example of a small expander graph. Any labelling of its nodes cannot exhibit homophily. (b) Example of a small barbell graph. If a labelling is exactly differentiated between the two groups, then it exhibits homophily. 2 BACKGROUND INFORMATION A graph G is an abstract mathematical model for pairwise relationships, with a set of vertices V and a set of edges E ⊆ V × V . Two vertices v1, v2 are said to be connected if (v1, v2) ∈ E. For a given graph G ∈ G we can also denote its vertices by V (G) and edges E(G). Unless otherwise specified, we assume graphs are undirected and without multi-edges. In machine learning, a graph (or graph-structured data) typically come with a set of node features. Common graph based machine learning tasks include node classification (or regression) and graph classification (or regression). We use the following notation. • Graph data {Gi = (Vi, Ei)}mi=1 ∈ G, where G is the set of all graphs. The neighborhood of a vertex v is denoted N (v) = {u ∈ V (Gi) : (v, u) ∈ E(Gi)}. • Node feature xv : V → X , with X being the feature space, e.g., X = Rdv . • Node labels y : V → Y , with Y being the set of labels, e.g., Y = [n]. Graph neural networks (GNNs). GNNs generalize regular neural networks to process data with varying structures and dependencies. GNNs achieve this flexibility via a message passing computational process. In particular, at the k-th step (or layer) of message passing, we update the representation h(k+1)u of node u as follows, h(k+1)u = UPDATE(h (k) u ,AGGREGATE({h(k)v |v ∈ N (u)})). (1) This update happens for all nodes in parallel within each message passing step. Moreover, the UPDATE and AGGREGATE operators are shared by all nodes, which enables the same GNN to process varyingsized graphs. Once we have finished the finite-step message passing process, we can use the output node representations to make predictions on nodes, edges, and the graph via additionally parameterized readout functions. This message passing framework is quite general since one can instantiate the UPDATE and AGGREGATE operators by different neural networks. 
For example, the widely used Graph Convolutional Networks (GCNs) (Kipf & Welling, 2016), which are the main interest of our work, have the form h(k+1)u = σ Wk ∑ v∈N (u)∪{u} h (k) v√ |N (u)| √ |N (v)| (2) where one applies a linear transformation (Wk) to all node representations, a weighted-sum over the neighborhood, and an element-wise nonlinearity (e.g., ReLU activation). Note that the learnable weights Wk are different from layer to layer. Homophily. A concept studied in network science, homophily (McPherson et al., 2001) is the property that similar nodes group together. For node classification (or node labelling), this means that neighbouring nodes tend to have the same label. Size generalization is plausible when the labelling of the nodes exhibits homophily. The presence of a homophilic graph labelling implies that the labels of the nodes are unlikely to change during the course of a long random walk on the graph. It is important to note that homophily is also a concept that relates to the graph topology, as not every possible graph structure can be given a labelling that exhibits homophilic properties. An example of one such topology where homophily is impossible is an expander graph (Hoory et al., 2006), as shown in Figure 1a, where nodes have either random or random-like edges connected to a constant number of other nodes in the entire graph. In this case, any labelling of the nodes is far from homophilic, as can be shown using the expansion property. A setting with more homophily is akin to a barbell graph, as shown in Figure 1b, where there are two densely connected components, and comparatively few edges connecting the two dense regions. If the graph labelling of interest lines up with these divisions inherent in the topology, then it is natural to say that it exhibits a homophilic property. Cheeger’s Inequality. A mathematical description of homophily can be given using concepts from spectral graph theory. Cheeger’s inequality (Hoory et al., 2006) is a theorem that pertains to partitions of graphs, or equivalently binary-valued labellings on graphs (one side of the partition is labelled 0, the other 1). A crucial definition is the conductance, defined by ϕ(S) = |E(S, S̄)| |S| ∀S ⊆ V and ϕ(G) = min |S|≤ |V |2 ϕ(S). Here E(S, S̄) is the set of edges connecting a node in S to a node outside of S. Cheeger’s inequality states λ2/2 ≤ ϕ(G) ≤ √ 2λ2, where λ2 is the second-smallest eigenvalue of the normalized Laplacian1 L̃. This inequality links the realvalued quantity λ2 to the concept of homophily. If λ2 is small then the conductance of G must also be low, by Cheeger’s inequality. If a labelling on graph nodes f : V (G) → {0, 1} roughly agrees with a low-conductance partition (i.e., one side of the partition S is generally labelled 0 and the complement S̄ is generally labelled 1) then the labelling f exhibits homophily. 3 IMPROVEMENT OF IN-DISTRIBUTION PAC-BAYES BOUND The state-of-the-art generalization bounds for GNNs in the in-distribution case were formulated by Liao et al. (2020) using the PAC-Bayes theory. Specifically, they build upon the PAC-Bayes theorem in (Neyshabur et al., 2018) that pertains to homogeneous feedforward neural networks. We denote one sample as z = (X,A, y) where X ∈ X , A ∈ G, and y ∈ Y are the node features, the adjacency matrix, and the graph label respectively. Each sample is drawn from some unknown data distribution D (with support X ×G ×Y) in an i.i.d. fashion. 
Since both training and testing samples are drawn from the same distribution, this is the in-distribution setup. Following (Liao et al., 2020), we consider a margin loss for multi-class graph classifications as below, LD,γ = LD,γ(fw) = Pz∼D ( fw(X,A)[y] ≤ γ +max j ̸=y fw(X,A)[j] ) (3) where γ > 0 is the margin parameter and fw is the model (hypothesis) parameterized by weights w. Since D is unknown, we can not compute this true loss (risk). We instead minimize the empirical loss (risk) that is defined on the sampled training set S as below, LS,γ = LS,γ(fw) = 1 m ∑ z∈S 1 ( fw(Xi, Ai)[y] ≤ γ +max j ̸=y fw(Xi, Ai)[j] ) , (4) 1Here L̃ = D−1/2(D−A)D−1/2, where D is the diagonal matrix of vertex degrees and A is the adjacency matrix. where m is the number of training samples. For simplicity, we abbreviate LD,γ(fw) and LS,γ(fw) as LD,γ and LS,γ respectively from now on. Our main in-distribution result bounds the gap between true and empirical risks for GCNs, shown in the following theorem. The proof is in Appendix A.1. Theorem 3.1. For any B > 0, l > 1, let fw ∈ H : X × G → Rk be an l-layer GCN. Then with probability ≥ 1− δ over the choice of an iid size-m training set S from the data distribution D, we have for any w: LD,0 ≤ LS,γ +O √√√√B2 d l2 (h+ ln l) ∏li=1 ∥Wi∥22∑li=1 (∥Wi∥2F /∥Wi∥22) + ln mδ γ2m (5) Here d equals to one plus the maximum node degree that can be achieved by the data distribution. l is the depth, i.e., the number of layers, of GCNs. Wi is the weight matrix of GCNs in the i-th layer. B is the radius of the minimal ℓ2 ball that contains all node features, i.e., ∀v, ∥xv∥2 ≤ B. This improves the bound in (Liao et al., 2020), which is provided below for a better comparison, LD,0 ≤ LS,γ +O √√√√B2 dl−1 l2h log(lh) ∏li=1 ∥Wi∥22∑li=1(∥Wi∥2F /∥Wi∥22) + log mlδ γ2m . (6) The proof of the theorem from (Liao et al., 2020) is an induction over the l layers, in which the spectral norm of the weights and a maximum degree term is multiplied at each step. We observe that it is possible to avoid passing the maximum degree term via a refined argument. This leads to a tightening of one of the main inequalities used in the induction proof, thus in turn resulting in substantial improvements to the overall bound. As can be seen above, we reduce the exponential term dl−1 to a linear term d, which is a significant improvement for graphs even with small node degrees. 4 TOWARDS DEVELOPING A THEORY FOR SIZE GENERALIZATON In this section, we develop an out-of-distribution (OOD) generalization theory for GNNs. Since we adopt a statistical learning viewpoint, there must necessarily be some assumptions relating the training and testing graphs (otherwise the No-Free Lunch theorem applies). There is a tradeoff between assumptions that are practically relevant, and those for which rigorous guarantees are provable. We have chosen assumptions that we believe strike a balance between those objectives, at least for applications like social networks. Size Generalization Assumptions. We consider the following setup. First, we assume that there exists an extremely large graph G like the user network in Twitter so that one needs to sample subgraphs (e.g., via random walks) for training and testing machine learning models. This is akin to the practical setups of (Grover & Leskovec, 2016; Hamilton et al., 2017). To generate training and testing subgraphs, we run random walks of length N and M respectively on this single large graph, where M ≫ N , and collect the subgraphs induced by these walks. 
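A minimal sketch of this random-walk sampling procedure is given below; the networkx library, the stand-in underlying graph, and the function names are our own choices, not part of the described pipeline.

import random
import networkx as nx

def random_walk_induced_subgraph(G, walk_length):
    """Sample the subgraph of G induced by a single random walk.
    The start node is uniform over all nodes and each step moves to a
    uniformly random neighbour of the current node."""
    current = random.choice(list(G.nodes))
    visited = {current}
    for _ in range(walk_length - 1):          # a length-k walk visits k vertices
        current = random.choice(list(G.neighbors(current)))
        visited.add(current)
    return G.subgraph(visited).copy()

G = nx.connected_caveman_graph(20, 10)        # stand-in for the large underlying graph
N, M = 10, 50                                 # short training walks, long test walks (M >> N)
train_graphs = [random_walk_induced_subgraph(G, N) for _ in range(1000)]
test_graphs = [random_walk_induced_subgraph(G, M) for _ in range(1000)]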
GNNs are then trained on the subgraphs induced by the shorter (length-N ) walks. In testing, we assume a procedure where a length-M random walk induced subgraph is sampled from the large subgraph. Random walks are initiated by choosing an initial node uniformly at random from all the nodes in the graph, and at each step there is an equal probability of selecting any of the current node’s neighbors. This is an interesting OOD problem where training and testing graphs come from different distributions determined by the underlying large graph and the random walk sampling with specific length. We consider the graph classification problem and assume that the graph label is determined by the majority of node labels within the graph, which is reasonable for many applications that involve homophilic graphs. For the node labeling, we assume it is binary but have no assumptions on how labels are generated. Crucially, we assume nothing about the underlying large graph. Therefore, our setup has advantages over some OOD setups in the literature where a generative model of graphs and labels is explicitly assumed. Relation with In-Distribution Result. We know the relationship between true error defined on the unknown data distribution D and empirical error defined on the size-m training set S. Specifically, for any GCN f , with probability at least 1− δ, we have a general bound as follows, LD,0 ≤ LS,γ +A(f, δ,m), (7) where we abbreviate the bound as A(f, δ,m) and omit specific parameters like maximum node degree d. In the size generalization problem, we use random walks with lengths N and M for collecting training and testing subgraphs (data) respectively. We are interested in proving a statement of the following form: for any GCN f , we have with probability at least 1− δ, LDM ,0 ≤ LSN ,γ + B(f, δ,m,M,N). (8) The key detail is that DM is the distribution of subgraphs induced by random walks with length M and SN is the training set of subgraphs induced by random walks with length N . Comparing these two losses is the essence of our OOD result. The final term B(f, δ,m,M,N) is a general bound involving these parameters. Based on the in-distribution result like in Theorem 3.1, we can similarly obtain, LDN ,0 ≤ LSN ,γ +AN (f, δ,m), (9) where DN is the distribution of subgraphs induced by random walks with length N and AN is the general bound. The key question boils down to: what is the relationship between LDN ,0 to LDM ,0? This question will be answered in the following sections. 4.1 A PROBABILITY BOUND FOR PARTITION CROSSES The above size generalization problem involves the distributions of random-walk-induced subgraphs from a large graph G with two lengths: N for training and M for testing. Also, M is much larger than N . Before we state our results, we would like to explain the simple intuition that motivates our theory: If the random walk always stays within the same partition, then the graph label of the random-walk-induced subgraph can be well predicted, no matter how long the random walk is. Here a partition means the subset of nodes with the same node label. The goal of this section is to find bounds on M for which we can provide OOD guarantees. We begin by considering a special labelling. Special Node Labeling: Sparsest Cut. A set S that minimizes ϕ(S) (and has |S| ≤ |V |/2) is called a sparsest cut. For simplicity assume that S is unique. 
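Before stating the bounds, a small numerical sketch of the two quantities they connect, λ2 and the conductance of a cut, may be helpful; networkx and NumPy are assumed, and the example graph is ours.

import numpy as np
import networkx as nx

def lambda_2(G):
    """Second-smallest eigenvalue of the normalized Laplacian of G."""
    L = nx.normalized_laplacian_matrix(G).toarray()
    return np.sort(np.linalg.eigvalsh(L))[1]

def conductance(G, S):
    """phi(S) = |E(S, S_bar)| / |S|, the definition used in Section 2."""
    S = set(S)
    crossing = sum(1 for u, v in G.edges if (u in S) != (v in S))
    return crossing / len(S)

# Cheeger's inequality: lambda_2 / 2 <= phi(G) <= sqrt(2 * lambda_2).
# Example: a cycle (2-regular); the sparsest cut splits it into two arcs.
G = nx.cycle_graph(40)
S = range(20)
lam2 = lambda_2(G)
print(lam2 / 2, conductance(G, S), np.sqrt(2 * lam2))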
Using Cheeger’s inequality, we first prove the following probability bounds related to this sampling procedure, thereby identifying the length M for which a random walk is likely to stay within the sparsest cut for d-regular graphs. The theorems are as follows. Theorem 4.1. Let UM = [u1, u2, . . . , uM ] be a length-M random walk over a connected, d-regular graph G, with u1 chosen from the stationary distribution of the nodes of G. If M ≤ d/(25/2 √ λ2), then the probability that UM crosses the sparsest-cut partition at least once is under 1/2. Here crossing the sparsest-cut partition S means that there exists an edge (u, v) of the random walk satisfies u ∈ S and v ∈ S̄. λ2 is the second-smallest eigenvalue of the normalized Laplacian. We can easily generalize the previous theorem to an arbitrary probability δ > 0 as below. Corollary 4.1.1. If M ≤ (δd)/23/2 √ λ2, the probability of the above random walk UM crossing over the sparsest-cut partition at least once is at most δ. General Node Labeling. Theorem 4.1 is restrictive in that it requires the partition S to be the sparsest cut. We now modify the proof to yield a quantity that can work for any node labelling f . Specifically, let φ be any boolean (i.e., {0, 1}-valued) labelling on the vertices of the graph. Let the positive node labelling of φ be S = {v ∈ V (G) : φ(v) = 1}. We are interested in bounding the probability that a random walk of length M includes an edge that crosses the positive node labelling S, i.e., an edge (u, v) satisfies u ∈ S and v ∈ S̄. Theorem 4.2. Let φ be a boolean labelling on the nodes of a connected, d-regular graph G with positive node labelling S (0-1 valued vector with φ[i] = 1 if vi ∈ S). Let UM = [u1, u2, . . . , uM ] be a length-M random walk over G, with u1 chosen from the stationary distribution of the nodes of G. Let Xi be the indicator variable of the event that the i-th edge of UM crosses S, i.e., Xi = 1 [ ui ∈ S, ui+1 ∈ S̄ ] and Yk = ∑k i=1 Xi is the number of times that UM crosses S in the first k steps. Let φ ′ = φ− 1(|S|/|V |) and α = φ′⊤Lφ′/∥φ′∥22. The conclusion is that: if M ≤ d 25/2 √ α then Pr [YM ≥ 1] ≤ 1 2 . Corollary 4.2.1. If M ≤ (δd)/23/2 √ α, the probability of the above random walk UM at least crosses over the positive node labelling of f once is at most δ, i.e., Pr [YM ≥ 1] ≤ δ. The formula for α arises from an alternative formulation of Cheeger’s inequality which expresses λ2 using a Rayleigh quotient (Spielman, 2015), in which y may be viewed as a real-valued labelling on the vertices. λ2 = min y⊥d (y⊤Ly)/(y⊤Dy) 4.2 SIZE GENERALIZATION ERROR Recall that, in the size generalization setup, we first train a GNN model f on subgraphs induced by many length-N random walks on G. Then during testing, given a large testing subgraph GM induced by a lengthM random walk on G, we sample a subgraph GN via a length-N random walk on GM and feed it to f to compute the empirical (classification) error for GM . If all nodes of GM are within a single positive node labelling, then all of their labels are the same. Therefore, no matter which subgraph GN is sampled, the generalization error (i.e., the probability of making a wrong prediction) for GM should be the same as the one for GN . Based on this reasoning, we have the following result. Theorem 4.3 (Size Generalization Error). 
For any δ ∈ [0, 1), if we restrict M , the size of the large random walk-induced subgraph, such that M ≤ (δd)/23/2 √ α, then the in-distribution generalization error LDM ,0, i.e., the probability of a wrong prediction on length-M -random-walk induced subgraphs, satisfies LDM ,0 ≤ δ + LDN ,0. (10) where LDN ,0 is the in-distribution generalization error of f on length-N random-walk-induced subgraphs. Note that this theorem explicitly constrains M , whereas the only condition on N is that LDN ,0 is small. Proof. Observe that, for any events F and E, we have Pr [F ] ≤ Pr [E] + Pr [ F |Ē ] . Let E be the event that a length-M random walk crosses the positive node labelling of the ground truth labels, and let F be the event that we make a wrong prediction on the induced subgraph GM . Theorem 3.1 bounds the second term, Pr [ F |Ē ] , because the generalization error on GM is the same as the one on GN (subgraphs induced by length-N random walks) when GM does not cross the positive node labelling. Corollary 4.2.1 bounds the first term. Substituting the values from the previous two theorems yields the claimed inequality. We already know the bound of the in-distribution generalization error LDN ,0 due to Theorem 3.1 — let us call this quantity δ̂. Using this we can obtain the final result for GCNs under our OOD setup. Theorem 4.3 simply states that, if the length M ≤ (δd)/23/2 √ α, with probability at least 1− δ̂, the OOD generalization error on large subgraphs (induced by length-M random walks) is the sum of error δ and the in-distribution generalization bound on small subgraphs (induced by length-N random walks). 5 EXPERIMENTS 5.1 IN-DISTRIBUTION: NUMERICAL PAC-BAYES BOUND COMPUTATION We conduct multi-class graph classification experiments to compare our improved bound to the original PAC-Bayes bound in (Liao et al., 2020). We use the same GCN model, adopt the same datasets, i.e., 6 synthetic datasets obtained from random graph models and 3 real world graph datasets used in (Yanardag & Vishwanathan, 2015), and follow the same experimental protocol. After training a GCN on each dataset, we compute the theoretical bounds using final model. The numerical comparisons of log bound values are shown in Figure 2. It is clear that our new bounds are significantly tighter and reduce the bound values by several orders of magnitude. The gap is further increased as the depth increases. The tables of bound values and the specific equations to compute them are provided in Appendix B.1. 5.2 OUT-OF-DISTRIBUTION: EFFICACY OF SIZE GENERALIZATION We performed OOD experiments to validate the values of the upper bound on the size of large subgraphs M that was set in Theorem 4.1 and its related theorems, for synthetic graphs. We also performed experiments on synthetic graphs that were non-homophilic with the same values of M and N , to examine size generalization in this case. We also examined the general feasibility of size generalization in real-world social network data. For synthetic graphs, we calculated this theoretical value for the upper bound, and selected large subgraph size M and small subgraph size N ≪ M accordingly. For the real-world case, we chose constant values of N = 10 and M = 50. For each subgraph, we assign as its graph label the label observed most often among its nodes. 
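A minimal sketch of the two ingredients of this setup, the majority-vote graph label and the admissible test walk length from Corollary 4.2.1, is given below; it assumes networkx subgraphs and a node-to-label mapping, and the function names are ours.

import numpy as np
from collections import Counter

def majority_label(subgraph, node_labels):
    """Graph label of a sampled subgraph = most common node label in it."""
    return Counter(node_labels[v] for v in subgraph.nodes).most_common(1)[0][0]

def alpha_of_labelling(L, D, phi):
    """alpha = phi^T L phi / (phi^T D phi), the degree-normalized variant used
    in Appendix B.2 for graphs that are not d-regular (Theorem 4.2 itself uses
    phi'^T L phi' / ||phi'||^2 with a centred phi' on d-regular graphs)."""
    phi = np.asarray(phi, dtype=float)
    return (phi @ L @ phi) / (phi @ D @ phi)

def admissible_M(d, alpha, delta=0.75):
    """Upper bound on the test walk length from Corollary 4.2.1:
    M <= delta * d / (2^{3/2} * sqrt(alpha))."""
    return delta * d / (2 ** 1.5 * np.sqrt(alpha))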
After sampling datasets of subgraphs of sizes M and N , we train GCN models on the dataset with N -length random walks and measure their performance on the training set, the validation set (a smaller data set generated the same way as the train set), and the testing set (a set of subgraphs inuced by length-M random walks). On the test set we record both the performance when inputting the whole large subgraph (Test error), as well as when performing the sampling procedure used for Theorem 4.3, in which we sample an induced subgraph from an N -length random walk for each data item (Sampling-test error). Synthetic Graphs. We adopt the CSBMs (Deshpande et al., 2018) to generate graphs that exhibit the homophily property. We use two blocks with much higher probability of connections inside the same block than between blocks, which leads to barbell-like graphs. In the non-homophilic case, we set these probabilities to be equal. We generate binary node labellings via the sparsest cut. CSBMs generate node features via a Gaussian mixture where individual choices of the component are determined by the node label. Real-world Graphs. We used social network data for Twitch streamers from (Rozemberczki et al., 2019). Each node is a streamer (Twitch user), and nodes are connected to mutual friendships. Node features are 3,169 different binary indicators of a wide array of attributes, including games liked, location, etc. Each node is labelled with a boolean value of whether the livestreamer has indicated that they use explicit language. In all cases, the GCN model achieves OOD test accuracy on large-subgraph that was comparable to ID accuracy on small-subgraph if not outright better. This is even the case when some of the constraints are violated: no d-regularity constraint was imposed for any of the datasets, and performance was still good for the test error which did not involve further subgraph sampling. This indicates that the theory is promising in practice for more general forms of size generalization. The accuracy on the train set, test set with subgraph sampling, and unaltered test set are shown in Figure 2, and the numerical values are in Appendix B.2. For many cases including all real-world cases, the test accuracy was actually higher than the training accuracy. This could potentially indicate that in the cases where size generalization can be guaranteed to work well, the GCN model benefits significantly from extra node information. It is also possible that because of the sampling procedure, there is overlap in nodes between the training and test sets, since they come from random-walk sampling procedures that naively select a uniformly random node as the initial node. 6 DISCUSSION In this work we have expanded the theoretical understanding of the generalizations of GNNs in both indistribution and out-of-distribution settings, deriving new theoretical guarantees in each setting. The results for in-distribution learning improve upon the state-of-the art PAC-Bayes bounds in (Liao et al., 2020), and the results for out-of-distribution learning provide insight into a practical learning setting under which GNNs are guaranteed to perform effective size generalization. Future directions for the in-distribution understanding would involve lowering the dependencies of other variables like the spectral norm of weights. Generalizing the results to other problems like node classification would also be interesting. 
In the out-of-distribution case, a number of different observations in experimentation indicate that the theory can still be very much expanded. We have identified cases in real-world datasets where well beyond the bounds on size set forth in the theory, and in all experiments the d-regularity assumption is violated, yet GCN size generalization is still effective in these cases. Expansions to the theory, including generalizing to non-d-regular graphs, can be explored to explain cases like these. A MATHEMATICAL PROOFS A.1 PROOF OF THEOREM 3.1 The proof is as follows, and makes up the remainder of the chapter. A.1.1 IMPROVEMENT ON DEGREE DEPENDENCY In (Liao et al., 2020), a generalization bound is attained on graph convolutional networks; this bound is dependent on a bound on the maximum perturbation of the function value when a perturbation U is applied to the weights W , presented in that paper’s Lemma 3.1. The bound is as follows |fw+u(X,A)− fw(X,A)|2 ≤ eBd l−1 2 ( l∏ i=1 ∥Wi∥2 ) l∑ k=1 ∥Uk∥2 ∥Wk∥2 (11) The primary goal of this set of improvements is to reduce the factor of d l−1 2 . For each layer, let Hi ∈ R|V |×h be the matrix containing the hidden embeddings of all of the nodes in its rows, with h being the hidden dimension. In the process of the proof of Theorem 3.1, we are able to show the following: Φj = max i |Hj [i, :]|2 ≤ d j 2B j∏ i=1 ∥Wi∥2 (12) Ψj = max i |H ′j [i, :]−Hj [i, :]|2 ≤ Bd j 2 ( j∏ i=1 ∥Wi∥2 ) j∑ k=1 ∥Uk∥2 ∥Wk∥2 ( 1 + 1 l )j−k (13) |∆l|2 = ∣∣∣∣ 1n1nH ′l−1(Wl + Ul)− 1n1nHl−1Wl ∣∣∣∣ 2 ≤ eBd l−1 2 ( l∏ i=1 ∥Wi∥2 )[ l∑ k=1 ∥Uk∥2 ∥Wk∥2 ] (11) We begin to simplify these bounds by removing the dependency on d j 2 , replacing it instead with a fixed power of d1/2 that remains constant for every layer, and thus in the final result of Equation 11 as well. Theorem A.1. For all 1 ≤ j ≤ l − 1, we have: Φj ≤ √ d B k∏ i=1 ∥Wi∥2 (14) Ψj ≤ ( 1 + ( 1 + 1 l )j) B √ d ( j∏ i=1 ∥Wi∥2 ) (15) Finally, |fw+u(X,A)− fw(X,A)|2 = |∆l|2 ≤ ( e+ 1 + 2 l ) B √ d l∏ i=1 ∥Wi∥2 (16) The proof follows from a lemma about the 2-norm of any node representation at any layer: Lemma A.1.1. We have, for all k ∈ [n] and for j ∈ [l]: |Hj [u, :]|2 ≤ B √ deg(u) ( j∏ i=1 ∥Wi∥2 ) (17) Proof. We prove this by induction. By definition |H0[u, :]|2 ≤ B and thus |H0[u]| ≤ √ deg(u)B 0∏ k=1 ∥Wk∥2. We assume that for all u, we have Hj−1[u, :] ≤ √ deg(u)B j−1∏ k=1 ∥Wi∥2. From these statements we are able to deduce |Hj [u, :]| ≤ ∑ v∈Nu L̃[u, v]|Hj−1[v, :]|2∥Wj∥2 ≤ ∑ v∈Nu 1√ deg(u)deg(v) [√ deg(v)B j−1∏ k=1 ∥Wk∥2 ] ∥Wj∥2 = ∑ v∈Nu 1√ deg(u) B ( j−1∏ k=1 ∥Wk∥2 ) ∥Wj∥2 = deg(u)√ deg(u) B j∏ k=1 ∥Wk∥2 = √ deg(u)B j∏ k=1 ∥Wk∥2 (18) In these inequalities we use the fact that L̃[i, j] = (A + I)ij/ √ deg(i)deg(j), and we assume the simple case where there are unweighted edges so that (A+ I)ij is 1 if and only if nodes i and j are connected and 0 otherwise. By Lemma A.1.1, we have that Φj = maxi |Hj [i, :]|2 ≤ √ dB ∏j i=1 ∥Wi∥2, which is exactly the result of equation (14). Claim A.1. For all v ∈ [n], |∆j [v, :]|2 ≤ B √ deg(v) ( 1 + 1l )j (∏j i=1 ∥Wi∥ )(∑j i=1 ∥Ui∥ ∥Wi∥ ) Proof. Proof: We use induction assuming this is true for ∆j−1. 
We then have |∆j [v, :]|2 ≤ ∑ u∈N (v) L̃[v, u]|H ′j−1[u, :]−Hj−1[u, :]|2∥Wj + Uj∥2 + ∑ u∈N (v) L̃[v, u]|Hj−1[u, :]|2∥Uj∥2 ≤ [ B ( 1 + 1 l )j−1(j−1∏ i=1 ∥Wi∥ )( j−1∑ i=1 ∥Ui∥2 ∥Wi∥2 ) ∥Wj + Uj∥+B∥Uj∥ j−1∏ i=1 ∥Wi∥ ] (19) ∑ u∈N (v) L̃[v, u] √ deg(u) = B √ deg(v) j−1∏ i=1 ∥Wi∥ [ ∥Wj + Uj∥ ( 1 + 1 l )j−1(j−1∑ i=1 ∥Ui∥2 ∥Wi∥2 ) + ∥Uj∥ ] = B √ deg(v) j∏ i=1 ∥Wi∥ [ ∥Wj + Uj∥2 ∥Wj∥2 ( 1 + 1 l )j−1(j−1∑ i=1 ∥Ui∥2 ∥Wi∥2 ) + ∥Uj∥2 ∥Wj∥2 ] ≤ B √ deg(v) j∏ i=1 ∥Wi∥ [( 1 + 1 l )j (j−1∑ i=1 ∥Ui∥2 ∥Wi∥2 ) + ∥Uj∥2 ∥Wj∥2 ] ≤ B √ deg(v) j∏ i=1 ∥Wi∥ ( 1 + 1 l )j ( j∑ i=1 ∥Ui∥2 ∥Wi∥2 ) (20) ∆l has a slightly different formulation but it has a very similar bound: |∆l|2 = ∣∣∣∣ 1n1n ( L̃H ′l−1(Wl + Ul)− 1 n 1nL̃Hl−1(Wl) )∣∣∣∣ 2 = 1 n ∣∣∣1nL̃(H ′l−1 −Hl−1)(Wl + Ul) + 1nL̃Hl−1(Ul)∣∣∣ 2 ≤ 1 n n∑ i=1 |∆l−1[i, :]|2∥Wl + Ul∥2 + 1 n n∑ i=1 |Hl−1[i, :]|2∥Ul∥2 ≤ B √ d l−1∏ i=1 ∥Wi∥ ( 1 + 1 l )l−1( l−1∑ i=1 ∥Ui∥2 ∥Wi∥2 ) ∥Wl + Ul∥ +B √ d∥Ul∥2 l−1∏ i=1 ∥Wi∥2 ≤ B √ d l∏ i=1 ∥Wi∥ [( 1 + 1 l )l( l−1∑ i=1 ∥Ui∥ ∥Wi∥ ) + ∥Ul∥ ∥Wl∥ ] ≤ B √ d l∏ i=1 ∥Wi∥ ( 1 + 1 l )l( l∑ i=1 ∥Ui∥ ∥Wi∥ ) ≤ eB √ d l∏ i=1 ∥Wi∥ ( l∑ i=1 ∥Ui∥ ∥Wi∥ ) (21) From this we have proven a tighter bound on the final output of the GNN under perturbation, which we will use to calculate probabilistic and generalization bounds. A.1.2 IMPROVEMENT ON PROBABILISTIC BOUNDS USING RANDOM MATRIX THEORY In (Liao et al., 2020), for all i ∈ [l], with l being the number of layers, the prior and the distribution of the perturbations Ui ∈ Rdi+1×di ,, where all hidden dimensions di are upper-bounded by a value h, were generated by a normal distribution N (0, σ2I), and give probabilistic bounds on the operator norms ∥Ui∥ as P (∀i, ∥Ui∥ ≤ t) with probability greater than 1 − 2lh exp−t2/2hσ2. We improve these bounds using theorems on random matrices from work on high-dimensional probability, namely (Vershynin, 2018). Theorem A.2 (Theorem 4.4.5 in (Vershynin, 2018)). Let A be a matrix in Rm×n, where the entries Aij are independent, mean-zero, sub-Gaussian random variables. Then, for all t > 0 we have ∥A∥ ≤ CK( √ m+ √ n+ t) with probability ≥ 1− exp(−t2), where K = maxi,j ∥Aij∥ψ2 and C is some constant. In the above theorem the norm ∥X∥ψ2 is defined as inf{t : E[exp(X2/t2)] ≤ 2}. In Example 2.5.8 in (V ershynin, 2018), it is shown that if X ∼ N (0, σ2) then it has ∥X∥ψ2 ≤ Cσ. Corollary A.2.1. If U ∈ Rm×n is a random matrix generated with the distribution N (0, σ2I) (i.e. all entries are independent and identically distributed Gaussian random variables), then we have ∥U∥ ≤ σ( √ m+ √ n+ t) with probability at least 1− 2 exp(−t2). With a change of variable, we are able to calculate the following: P (∀i.∥Ui∥2 ≤ t) ≥ 1− P (∃i, ∥Ui∥ > t) ≥ 1− l∑ i=1 P (∥Ui∥ > t) ≥ 1− 2l exp (( t Cσ − 2 √ h )2) And by setting the right-hand side to 1/2, we obtain: t = Cσ(2 √ h+ √ ln(4l)) Using the above equation combined with our bound we are able to get |fw+u(X,A)− fw(X,A)|2 ≤ eB √ dl ( l∏ i=1 ∥Wi∥2 ) l∑ k=1 ∥Uk∥2 ∥Wk∥2 = eB √ dβll l∑ k=1 ∥Uk∥2 β ≤ eB √ dβl−1l(σ(2 √ h+ √ ln(4l))) ≤ e2B √ dβ̃l−1(σ(2 √ h+ √ ln(4l))) ≤ γ 4 (22) Here β̃ is an estimated of β such that |β − β̃| ≤ β/l that can be generated a priori; we discuss this in a later subsection. We can set σ = γ 4e2Bβ̃ √ dC ( 2 √ h+ √ ln(4l) ) to satisfy the final inequality. 
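As a quick numerical sanity check of the operator-norm bound of Corollary A.2.1 that underlies this choice of σ (a side computation, not part of the proof), one might run:

import numpy as np

# Monte Carlo check of Corollary A.2.1: for U with iid N(0, sigma^2) entries,
# ||U||_2 <= sigma * (sqrt(m) + sqrt(n) + t) with probability >= 1 - 2 exp(-t^2).
rng = np.random.default_rng(0)
m, n, sigma, t = 64, 64, 0.1, 2.0
trials = 1000
norms = [np.linalg.norm(sigma * rng.standard_normal((m, n)), 2) for _ in range(trials)]
bound = sigma * (np.sqrt(m) + np.sqrt(n) + t)
print(np.mean(np.array(norms) > bound))   # empirical failure rate, well below 2*exp(-t^2)

Returning to the proof, with σ chosen as above: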
From this we can calculate the KL-divergence between the posterior and the prior: KL(Q∥P ) = |w| 2 2 2σ2 = 16e4B2dl2β2(l−1) ( 2 √ h+ √ ln(4l) )2 2γ2 l∑ i=1 ∥Wi∥F ≤ O ( B2dβ2ll2(h+ ln(l)) γ2 l∑ i=1 ∥Wi∥2F β2 ) ≤ O ( B2dl2 (h+ ln(l)) ∏l i=1 ∥Wi∥2 γ2 l∑ i=1 ∥Wi∥2F ∥Wi∥2 ) (23) From this we are able to calculate the generalization bound and thus prove the theorem. LD,0 ≤ LS,γ +O √√√√B2dl2(h+ ln(l))∏li=1 ∥Wi∥22∑li=1 ∥Wi∥2F∥Wi∥22 + ln mδ γ2m (24) A.1.3 SELECTING PARAMETER β̃ The prior normal distribution’s variance parameter σ2 is dependent on β, but β cannot be used in its calculation because that information is only known after model training. Instead, we can select a parameter β̂ such that |β − β̂| ≤ 1l β and thus 1 eβ l−1 ≤ β̂l−1 ≤ eβl−1 (as per equation 33 in (Liao et al., 2020)). As in (Liao et al., 2020) we only have to consider values of β in the range ( γ 2B √ d )1/l ≤ β ≤ ( γ √ m 2B √ d )1/l as otherwise the generalization bound holds trivially because LD,0 ≤ 1 by definition. If we consider values of β̂ that cover this interval then by union bound we are still able to get a high probability; the covering C needs to have |C| = l2 (m 1 2l − 1). A.2 PROOFS OF OUT-OF-DISTRIBUTION PROBABILITY BOUNDS A.2.1 PROOF OF THEOREM 4.1 Proof. Because u1 is chosen from the stationary distribution (uniform over vertices, because G is connected and d-regular), then for all i ≥ 1 the distribution for ui, ui+1 follows the distribution Unif[E], where E is the edge set of the graph. Let S be the sparsest-cut partition of G. Let Xi be the indicator of the event that the vertex pair is in the set of edges crossing the partition, namely 1{(ui, ui+1) ∈ E(S, S̄)}. By linearity of expectation, this means that E[Xi] = |E(S, S̄)|/|E|. Furthermore, let Yk be the cumulative number of edges crossing the partition along the first k steps of the random walk. This is expressed nicely as Yk = ∑k i=1 Xi. Thus E[Yk] = k |E(S,S̄)| |E| . Applying Markov’s inequality, we get Pr[Yk ≥ tk|E(S, S̄)|/|E|] ≤ 1/t. Suppose we wish to examine under what conditions we can ensure that we do not cross over the partition at all in M steps, i.e. Pr[YM ≥ 1] ≤ 1/2. From the inequality above, we are able to get that Pr [ YM ≥ 2M |E(S, S̄)| |E| ] ≤ 1 2 just by setting k = M and t = 2. We then use the following basic fact: if we have an inequality of the form Pr[Z ≥ z] ≤ 12 , then Pr[Z ≥ z ′] ≤ 12 for any z ′ ≥ z. Let E(S) denote the set of edges connected to any vertex in S. Because |E(S)| ≤ |E|, then we have |E(S, S̄)|/|E| ≤ |E(S, S̄)|/|E(S)|. Furthermore, since we assume a connected graph, |E(S)| ≥ (d/2)|S|, and thus |E(S, S̄)|/|E(S)| ≤ |E(S, S̄)|/[(d/2)|S|]. 2 Thus using the fact above we can deduce Pr [ YM ≥ 2M |E(S, S̄)| (d/2)|S| ] ≤ 1 2 Note that |E(S, S̄)|/|S| is the conductance of the graph ϕ(G), because S was defined to be the sparsest-cut partition of G. Thus we can apply the fact again with Cheeger’s inequality to get Pr [ YM ≥ 2M(2/d) √ 2λ2 ] ≤ 1 2 And since we are interested in Pr[YM ≥ 1], we can thus set 2M √ 2λ2 ≤ 1 to get a necessary condition for M , from which we achieve M ≤ d 25/2 √ λ2 This completes the proof. 2It is important to note that this specific dependency of |E(S)| on d requires G to be a d-regular graph. If the theorem is to be expanded to more general cases, one may use the simple inequality |E(S)| ≥ |S|. A.2.2 PROOF OF THEOREM 4.2 Proof. 
The quantity φ′ is a transformation of φ that retains all the information contained in φ while still being orthogonal to the all-ones vector 1, so that we can apply Cheeger’s inequality. This orthogonalization is rather standard and can be found in (Spielman, 2015). Let s = |S|/|V (G)|. Note that s ∈ [0, 1], and without loss of generality we can assume that s ≤ 1/2. We observe that the vth coordinate of the vector φ′ corresponds to the mapping φ′(v) = { 1− s v ∈ S −s v /∈ S (25) This ensures that φ′ is orthogonal to 1, as φ′⊤1 = n∑ i=1 φ′(vi) = |S| ( 1− |S| |V | ) + (|V | − |S|) ( − |S| |V | ) = |S| − |V | ( |S| |V | ) = 0. We then note that ∥φ′∥22 = ∑n i=1 φ(v) 2 is equal to s(1− s)|V |, and we can infer |S|/2 ≤ ∥φ′∥22 ≤ |S|; the first inequality holds since s ≤ 1/2. The number of edges |E(S, S̄)| crossing the labelling-partition is equal to φ′⊤Lφ′, as φ′⊤Lφ′ = ∑ (u,v)∈E ((φ(u)− s)− (φ(v)− s))2 = |E(S, S̄)| where L is the Laplacian matrix of G. Thus the quantity 2M |E(S,S̄)||E(S)| ≤ 2M φ′⊤Lφ′ |E(S)| ≤ 2M φ′⊤Lφ′ (d/2)|S| . We are able to get the second inequality because we know |E(S)| ≥ (d/2)|S|. Because we know that |S| ≥ ∥φ′∥2, we can then upper bound this further by 2M φ ′TLφ′ (d/2)∥φ′∥22 . Substituting this quantity in the proof of Theorem 4.1, we achieve the desired bound for M . B EXPERIMENTAL METHODOLOGY AND RESULTS B.1 IN-DISTRIBUTION EXPERIMENTS The datasets used are a combination of synthetic (Erdos-Renyi and Stochastic Block Model) and real-world graphs (IMDBBINARY and IMDBMULTI of data from the Internet Movie Database, and COLLAB, a dataset of academic collaborations), and a bioinformatics dataset, PROTEINS, from (Yanardag & Vishwanathan, 2015). Two different GCN network depths of of l = 4 and l = 6 were used. We use the following formulae for the generalization bound from (Liao et al., 2020) and our new bound, using an explicit constant factor of 42 from (Liao et al., 2020). GenGap(B, d, l, {Wi}li=1) = √√√√ 42 · B2dl−1l2 ln(4lh) ∏l i=1 ∥Wi∥22 ∑l i=1 ∥Wi∥2F ∥Wi∥22 γ2m (26) Similarly, the formula used for the new PAC-Bayes generalization bound is GenGap(B, d, l, {Wi}li=1) = √√√√ 42 · B2dl2(h+ ln(l)) ∏l i=1 ∥Wi∥22 ∑l i=1 ∥Wi∥2F ∥Wi∥22 γ2m (27) We remove an additive O(logm) term in the numerator within the square root after validating that it was numerically negligible. Tables below are for calculated bounds in the case of 4 layers (Table 1) and 6 layers (Table 2). B.2 OUT-OF-DISTRIBUTION EXPERIMENTS B.2.1 METHODOLOGY Experiments were performed to measure the effectiveness size generalization of GCN models when applied to the size generalization learning case described in Section 4, where the learning task is classifying the most common node label in sub-communities of a large underlying network. For each of the synthetic graphs, we calculate an upper bound for M set in the out-of-distribution inequalities we have derived. Since the graphs examined are all not d-regular, we calculate a value of α as φ ⊤Lφ φ⊤Dφ , where L is the graph Laplacian matrix and D is the diagonal degree matrix, to apply to the formula set in Theorem 4.2. Furthermore, we use a more permissive value of δ = 0.75. Similar upper bounds for M were computed for the real-world cases, but the values were too small for experimental use. In this case, we just set N = 10 and M = 50 to attempt to gain insight about the size generalization task’s general feasibility in real-world cases. All experiments were performed with use of the Adam optimizer (Kingma & Ba, 2015), with a constant learning rate 0.01. 
Models were trained for 10 epochs, with randomly sampled batches of size 32. The models used are different parameterizations of the Graph Convolutional Network as implemented by the library pytorch-geometric (Fey & Lenssen, 2019). For synthetic experiments, which used smaller graphs with generally smaller degree, the parameterization was 3 layers with a hidden dimension of 5, and for the real-world data case, the parameterization was 10 layers with a hidden dimension of 32. For each underlying graph, we generate three train/validation sets (of subgraphs induced by length-N random walks) and test sets (of subgraphs induced by length-M random walks) and we record the loss and accuracy as the average of the three runs. B.2.2 SYNTHETIC GRAPH EXPERIMENTS A large underlying synthetic graph was generated using the stochastic block model, with some adjustment to ensure that the randomly-generated graph had a single connected component. By controlling the intra- and inter-block connection probability values, we are able to control the homophily of the generated graph, which we validate by measuring the value of λ2, as well as by calculating the sparsest cut via “Cheeger rounding” (Spielman, 2015) and subsequently the conductance of the graph with respect to this partition. In the experiments, we generated a graph with approximately 2000 nodes, with the in-block connection probability set to 8/1000 and the inter-block connection probability set to 6/10^5. Node features are generated from a mixture of multivariate Gaussian distributions with dimension 3, mean (−0.5,−0.5,−0.5) for one block, and mean (0.5, 0.5, 0.5) for the other; the covariance matrix is diagonal (each coordinate is independent) with variance either 2, 4, or 8. Experiments were also performed on non-homophilic synthetic graphs. Like the homophilic synthetic graphs, they are generated with the stochastic block model with about 2000 nodes, about 1000 of each label, and with the same mixture-of-Gaussian node features. However, the parameters used to generate connections are crucially different: the probabilities of connection between nodes of the same block and nodes of different blocks are set to be equal, both being 8/1000. These settings ensure that a node’s label is independent of the labels of its neighbors, so the homophily property is not exhibited. In contrast with the results for the homophilic synthetic graphs, the non-homophilic graph results show that the out-of-distribution test accuracy is lower than the training accuracy. This further illustrates the association between homophily and size generalization. B.2.3 REAL-WORLD GRAPH EXPERIMENTS Since the node features are indicators, we encoded the node feature information by using the positional encoding mechanism introduced in the Transformer model (Vaswani et al., 2017). For each node, each of its integer indicators was encoded via positional embedding and aggregated via sum.
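Stepping back to the synthetic setup of Section B.2.2, the following is a minimal sketch of the graph generation and of the quantities used to choose the test walk length M; the parameter names are ours, and using the average degree as a stand-in for d is our assumption, since the text leaves that choice unspecified for non-regular graphs.

import numpy as np
import networkx as nx

sizes = [1000, 1000]
probs = [[8 / 1000, 6 / 1e5],
         [6 / 1e5, 8 / 1000]]
G = nx.stochastic_block_model(sizes, probs, seed=0)
giant = max(nx.connected_components(G), key=len)     # keep a single connected component
G = G.subgraph(giant).copy()

nodes = sorted(G.nodes)
phi = np.array([0.0 if v < sizes[0] else 1.0 for v in nodes])   # block membership as a 0/1 labelling
A = nx.to_numpy_array(G, nodelist=nodes)
deg = A.sum(axis=1)
D = np.diag(deg)
L = D - A                                            # combinatorial Laplacian

alpha = (phi @ L @ phi) / (phi @ D @ phi)            # as in Appendix B.2.1
d_stand_in = deg.mean()                              # the graph is not regular; average degree as a stand-in for d
delta = 0.75
M_max = delta * d_stand_in / (2 ** 1.5 * np.sqrt(alpha))
print(f"alpha = {alpha:.4f}, admissible M <= {M_max:.1f}")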
1. What is the focus of the paper regarding GNNs? 2. What are the strengths and weaknesses of the proposed theoretical analyses? 3. Do you have any concerns about the setup and scope of the analysis? 4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? 5. Are there any related works that the reviewer thinks the authors should discuss?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper In this paper, the authors propose theoretical analyses for the generalization of GNNs. For the in-distribution case, the authors propose an improved bound regarding graph classification compared to the existing PAC-Bayes results. For the out-of-distribution generalization, the authors propose an analysis for node classification by using random walks instead of assuming a ground-truth generative model. Strengths And Weaknesses Pros: The generalization, specifically out-of-distribution generalization, of GNNs, is an important and trending research direction, and its theoretical analysis is not well studied. The proposed in-distribution bound seems to improve over the existing analysis. Cons and questions: For OOD generalization, this paper focuses on a specific setup, i.e., graph classification where the graphs are sampled from a giant graph, and the graph label is determined by homophily. Though I acknowledge that such an assumption may be unavoidable for rigorous theoretical analysis, it seems to be impractical and thus greatly limits the scope of the proposed analysis. For example, many works on OOD generalization for graph classification focus on molecule classification, where training and testing molecules are collected in different environments/with different backbone structures, which is a completely different scenario and the proposed method seems unable to fit. Following the above comment, the authors observe in experiments that “the GCN model achieves OOD test accuracy on large-subgraph that was comparable to ID accuracy on small-subgraph if not outright better”, which contradicts previous works such as Yehudai et al. (2021). This may also suggest that the assumed setting is not practical. A previous analysis shows that the GNN architectures and downstream graph tasks can greatly affect the generalization of GNNs [1]. I think this work is highly related and a proper discussion should be added. [1] How Neural Networks Extrapolate From Feedforward to Graph Neural Networks, ICLR 2021. It would also make the paper stronger if the authors can briefly point out how the proposed analysis can inspire improving GNNs, which is crucial for GNN practitioners. Minor: (1) Figure 2 is a bit vague (vector graphics are recommended). ===after rebuttal=== I have read the rebuttal and thank the authors for the clarifications. Similar to other reviewers, I think the paper makes interesting theoretical analyeses, but rely on very strong assumptions, so I am not entirely sure whether such contributions meet the bar of ICLR. All things considered, I have increased my score to 6, i.e., slightly positive. Clarity, Quality, Novelty And Reproducibility The clarity and quality of the paper are good, but the novelty seems limited (see comments above). The authors have provided detailed hyper-parameters in the appendix, so the reproducibility should be okay (though providing the source codes will be better).
ICLR
Title In-distribution and Out-of-distribution Generalization for Graph Neural Networks Abstract Graph neural networks (GNNs) are models that allow learning with structured data of varying size. Despite their popularity, theoretical understanding of the generalization of GNNs is an under-explored topic. In this work, we expand the theoretical understanding of both in-distribution and out-of-distribution generalization of GNNs. Firstly, we improve upon the state-of-the-art PAC-Bayes (in-distribution) generalization bound primarily by reducing an exponential dependency on the node degree to a linear dependency. Secondly, utilizing tools from spectral graph theory, we prove some rigorous guarantees about the out-of-distribution (OOD) size generalization of GNNs, where graphs in the training set have different numbers of nodes and edges from those in the test set. To empirically verify our theoretical findings, we conduct experiments on both synthetic and real-world graph datasets. Our computed generalization gaps for the in-distribution case significantly improve the state-of-the-art PAC-Bayes results. For the OOD case, experiments on community classification tasks in large social networks show that GNNs achieve strong size generalization performance in cases guaranteed by our theory. 1 INTRODUCTION Graph neural networks (GNNs), firstly proposed in Scarselli et al. (2008), generalize artificial neural networks from processing fixed-size data to processing arbitrary graph-structured or relational data, which can vary in terms of the number of nodes, the number of edges, and so on. GNNs and their modern variants (Bronstein et al., 2017; Battaglia et al., 2018) have achieved state-of-the-art results in a wide range of application domains, including social networks (Hamilton et al., 2017), material sciences (Xie & Grossman, 2018), drug discovery (Wieder et al., 2020), autonomous driving (Liang et al., 2020), quantum chemistry (Gilmer et al., 2020), and particle physics (Shlomi et al., 2020). Despite their empirical successes, the theoretical understanding of GNNs are somewhat limited. Existing works largely focus on analyzing the expressiveness of GNNs. In particular, Xu et al. (2018) show that GNNs are as powerful as the Weisfeiler-Lehman (WL) graph isomorphism test (Weisfeiler & Leman, 1968) in distinguishing graphs. Chen et al. (2019) further demonstrate an equivalence between graph isomorphism testing and universal approximation of permutation-invariant functions. Loukas (2019) show that GNNs with certain conditions (e.g., on depth and width) are Turing universal. Chen et al. (2020) and Xu et al. (2020a) respectively examine whether GNNs can count substructures and perform algorithmic reasoning. In the vein of statistical learning theory, generalization analyses for GNNs have been developed to bound the gap between training and testing errors using VC-dimension (Vapnik & Chervonenkis, 1971), Rademacher complexity (Bartlett & Mendelson, 2002), algorithmic stability (Bousquet & Elisseeff, 2002), and PACBayes (McAllester, 2003) (a Bayesian extension of PAC learning (Valiant, 1984)). Depending on whether the problem setup is in-distribution (ID) or out-of-distribution (OOD), i.e., whether test data comes from the same distribution as training data, we categorize the literature into two groups. ID Generalization Bounds. Scarselli et al. 
(2018) provide a VC-dimension based generalization bound for GNNs whereas Verma & Zhang (2019) present the stability-based generalization analysis for single-layer graph convolutional networks (GCNs) (Kipf & Welling, 2016). Both consider node classification and assume the node features are independent and identically-distributed (IID), which conflicts with the common relational learning setup (e.g., semi-supervised node classification) at which GNNs excel. Relying on the neural tangent kernel (NTK) approach (Jacot et al., 2018), Du et al. (2019) characterize the generalization bound of infinite-width GNNs on graph classification. Garg et al. (2020) derive the Rademacher complexity based bound for message passing GNNs on graph classification. Lv (2021) establishes results for GCNs on node classification using Rademacher complexity as well. Based on PAC-Bayes, Liao et al. (2020) obtain a tighter bound for both GCNs and message passing GNNs on graph classification compared to (Garg et al., 2020; Scarselli et al., 2018). Subsequently, Ma et al. (2021) also leverage PAC-Bayes and show generalization guarantees of GNNs on subgroups of nodes for node classification. More recently, Li et al. (2022) study the effect of graph subsampling in the generalization of GCNs. OOD Generalization. Yehudai et al. (2021) study size generalization for GNNs — this is a specific OOD setting where training and testing graphs differ in the number of nodes and edges. They show negative results that specific GNNs can perfectly fit training graphs but fail on OOD testing ones. Baranwal et al. (2021) consider specific graph generative models, i.e., the contextual stochastic block model (CSBM) (Deshpande et al., 2018), where CSBMs during training and testing are of the same means but different numbers of nodes, intra-, and inter-class edge probabilities. They present generalization guarantees for single-layer GCNs on binary node classification tasks. Later, Maskey et al. (2022) assume yet another class of graph generative models, i.e., graphons, where the kernel is shared across training and testing but the number of nodes and edges could vary. They obtain generalization bounds of message passing GNNs on graph classification and regression that depend on the Minkowski dimension of the node feature space. Relying on a connection between over-parameterized networks and the neural tangent kernel, Xu et al. (2020b) find that task-specific architecture/feature designs help GNNs extrapolate to OOD algorithmic tasks. Wu et al. (2022a) propose an explore-to-extrapolate risk minimization framework, for which the solution is proven to provide an optimal OOD model under the invariance and heterogeneity assumptions. Yang et al. (2022) propose a two-stage model that both infers the latent environment and makes predictions to generalize to OOD data. Empirical studies suggest it works well on real-world molecule datasets. Wu et al. (2022b) study a new objective that can learn invariant and causal graph features that generalize well to OOD data empirically. All above works follow the spirit of invariant risk minimization (Arjovsky et al., 2019) and focus on designing new learning objectives. Instead, we provide generalization bound analysis from the traditional statistical learning theory perspective. Our Contributions. In this paper, we study both in-distribution and out-of-distribution generalization for GNNs.
For in-distribution graph classification tasks, we significantly improve the previous state-of-the-art PAC-Bayes results in (Liao et al., 2020) by decreasing an exponential dependency on the maximum node degree to a linear dependency. For OOD node classification tasks, we do not assume any known graph generative models, which is in sharp contrast to the existing work. We instead assume GNNs are trained and tested on subgraphs that are sampled via random walks from a single large underlying graph, as an efficient means to generate a connected subgraph. We identify interesting cases where a graph classification task is theoretically guaranteed to perform well at size generalization, and derive generalization bounds. We validate our theoretical results by conducting experiments on synthetic graphs, and also explore size generalization on a collection of real-world social network datasets. In the in-distribution case, we observe an improvement of several orders of magnitude in numerical calculations of the generalization bound. In the out-of-distribution case, we validate that, in cases where the theory guarantees that size generalization works well, the prediction accuracy on large subgraphs is always comparable to the accuracy on small subgraphs, and in many cases is actually better.
Figure 1: (a) An example of a small expander graph; any labelling of its nodes cannot exhibit homophily. (b) An example of a small barbell graph; if a labelling exactly differentiates the two groups, then it exhibits homophily.
2 BACKGROUND INFORMATION A graph $G$ is an abstract mathematical model for pairwise relationships, with a set of vertices $V$ and a set of edges $E \subseteq V \times V$. Two vertices $v_1, v_2$ are said to be connected if $(v_1, v_2) \in E$. For a given graph $G \in \mathcal{G}$ we can also denote its vertices by $V(G)$ and edges $E(G)$. Unless otherwise specified, we assume graphs are undirected and without multi-edges. In machine learning, a graph (or graph-structured data) typically comes with a set of node features. Common graph-based machine learning tasks include node classification (or regression) and graph classification (or regression). We use the following notation. • Graph data $\{G_i = (V_i, E_i)\}_{i=1}^{m} \in \mathcal{G}$, where $\mathcal{G}$ is the set of all graphs. The neighborhood of a vertex $v$ is denoted $\mathcal{N}(v) = \{u \in V(G_i) : (v, u) \in E(G_i)\}$. • Node feature $x_v : V \to \mathcal{X}$, with $\mathcal{X}$ being the feature space, e.g., $\mathcal{X} = \mathbb{R}^{d_v}$. • Node labels $y : V \to \mathcal{Y}$, with $\mathcal{Y}$ being the set of labels, e.g., $\mathcal{Y} = [n]$. Graph neural networks (GNNs). GNNs generalize regular neural networks to process data with varying structures and dependencies. GNNs achieve this flexibility via a message passing computational process. In particular, at the $k$-th step (or layer) of message passing, we update the representation $h_u^{(k+1)}$ of node $u$ as follows, $h_u^{(k+1)} = \mathrm{UPDATE}\big(h_u^{(k)}, \mathrm{AGGREGATE}(\{h_v^{(k)} \mid v \in \mathcal{N}(u)\})\big)$. (1) This update happens for all nodes in parallel within each message passing step. Moreover, the UPDATE and AGGREGATE operators are shared by all nodes, which enables the same GNN to process varying-sized graphs. Once we have finished the finite-step message passing process, we can use the output node representations to make predictions on nodes, edges, and the graph via additionally parameterized readout functions. This message passing framework is quite general since one can instantiate the UPDATE and AGGREGATE operators by different neural networks.
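To make the message passing abstraction of Eq. (1) concrete, here is a minimal, illustrative sketch of one such layer in PyTorch; the choice of a mean AGGREGATE and a linear-plus-ReLU UPDATE is ours and is not a construction from the paper.

```python
import torch
import torch.nn as nn

class MessagePassingLayer(nn.Module):
    """A minimal sketch of one message passing step (Eq. 1).

    UPDATE and AGGREGATE are illustrative choices (a linear map and a mean),
    not the specific operators analysed in the paper.
    """
    def __init__(self, in_dim, out_dim):
        super().__init__()
        # UPDATE: combine the node's own state with the aggregated neighbour message
        self.update = nn.Linear(2 * in_dim, out_dim)

    def forward(self, h, edge_index):
        # h: [num_nodes, in_dim]; edge_index: [2, num_edges], both directions for undirected graphs
        src, dst = edge_index
        agg = torch.zeros_like(h)
        count = torch.zeros(h.size(0), 1)
        agg.index_add_(0, dst, h[src])                      # AGGREGATE: sum neighbour states ...
        count.index_add_(0, dst, torch.ones(src.size(0), 1))
        agg = agg / count.clamp(min=1)                      # ... and normalize to a mean
        return torch.relu(self.update(torch.cat([h, agg], dim=-1)))
```

The GCN layer studied in this work is one particular instantiation of this template, given next.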
For example, the widely used Graph Convolutional Networks (GCNs) (Kipf & Welling, 2016), which are the main interest of our work, have the form $h_u^{(k+1)} = \sigma\left( W_k \sum_{v \in \mathcal{N}(u) \cup \{u\}} \frac{h_v^{(k)}}{\sqrt{|\mathcal{N}(u)|}\sqrt{|\mathcal{N}(v)|}} \right)$ (2) where one applies a linear transformation ($W_k$) to all node representations, a weighted sum over the neighborhood, and an element-wise nonlinearity (e.g., ReLU activation). Note that the learnable weights $W_k$ are different from layer to layer. Homophily. A concept studied in network science, homophily (McPherson et al., 2001) is the property that similar nodes group together. For node classification (or node labelling), this means that neighbouring nodes tend to have the same label. Size generalization is plausible when the labelling of the nodes exhibits homophily. The presence of a homophilic graph labelling implies that the labels of the nodes are unlikely to change during the course of a long random walk on the graph. It is important to note that homophily is also a concept that relates to the graph topology, as not every possible graph structure can be given a labelling that exhibits homophilic properties. An example of one such topology where homophily is impossible is an expander graph (Hoory et al., 2006), as shown in Figure 1a, where nodes have either random or random-like edges connected to a constant number of other nodes in the entire graph. In this case, any labelling of the nodes is far from homophilic, as can be shown using the expansion property. A setting with more homophily is akin to a barbell graph, as shown in Figure 1b, where there are two densely connected components, and comparatively few edges connecting the two dense regions. If the graph labelling of interest lines up with these divisions inherent in the topology, then it is natural to say that it exhibits a homophilic property. Cheeger’s Inequality. A mathematical description of homophily can be given using concepts from spectral graph theory. Cheeger’s inequality (Hoory et al., 2006) is a theorem that pertains to partitions of graphs, or equivalently binary-valued labellings on graphs (one side of the partition is labelled 0, the other 1). A crucial definition is the conductance, defined by $\phi(S) = \frac{|E(S, \bar{S})|}{|S|}$ for all $S \subseteq V$, and $\phi(G) = \min_{|S| \le |V|/2} \phi(S)$. Here $E(S, \bar{S})$ is the set of edges connecting a node in $S$ to a node outside of $S$. Cheeger’s inequality states $\lambda_2 / 2 \le \phi(G) \le \sqrt{2\lambda_2}$, where $\lambda_2$ is the second-smallest eigenvalue of the normalized Laplacian $\tilde{L} = D^{-1/2}(D - A)D^{-1/2}$, with $D$ the diagonal matrix of vertex degrees and $A$ the adjacency matrix. This inequality links the real-valued quantity $\lambda_2$ to the concept of homophily. If $\lambda_2$ is small then the conductance of $G$ must also be low, by Cheeger’s inequality. If a labelling on graph nodes $f : V(G) \to \{0, 1\}$ roughly agrees with a low-conductance partition (i.e., one side of the partition $S$ is generally labelled 0 and the complement $\bar{S}$ is generally labelled 1) then the labelling $f$ exhibits homophily. 3 IMPROVEMENT OF IN-DISTRIBUTION PAC-BAYES BOUND The state-of-the-art generalization bounds for GNNs in the in-distribution case were formulated by Liao et al. (2020) using the PAC-Bayes theory. Specifically, they build upon the PAC-Bayes theorem in (Neyshabur et al., 2018) that pertains to homogeneous feedforward neural networks. We denote one sample as $z = (X, A, y)$ where $X \in \mathcal{X}$, $A \in \mathcal{G}$, and $y \in \mathcal{Y}$ are the node features, the adjacency matrix, and the graph label respectively. Each sample is drawn from some unknown data distribution $\mathcal{D}$ (with support $\mathcal{X} \times \mathcal{G} \times \mathcal{Y}$) in an i.i.d. fashion.
Since both training and testing samples are drawn from the same distribution, this is the in-distribution setup. Following (Liao et al., 2020), we consider a margin loss for multi-class graph classification as below, $L_{\mathcal{D},\gamma} = L_{\mathcal{D},\gamma}(f_w) = \Pr_{z \sim \mathcal{D}}\left( f_w(X, A)[y] \le \gamma + \max_{j \ne y} f_w(X, A)[j] \right)$ (3) where $\gamma > 0$ is the margin parameter and $f_w$ is the model (hypothesis) parameterized by weights $w$. Since $\mathcal{D}$ is unknown, we cannot compute this true loss (risk). We instead minimize the empirical loss (risk) that is defined on the sampled training set $S$ as below, $L_{S,\gamma} = L_{S,\gamma}(f_w) = \frac{1}{m} \sum_{z_i \in S} \mathbf{1}\left( f_w(X_i, A_i)[y_i] \le \gamma + \max_{j \ne y_i} f_w(X_i, A_i)[j] \right)$, (4) where $m$ is the number of training samples. For simplicity, we abbreviate $L_{\mathcal{D},\gamma}(f_w)$ and $L_{S,\gamma}(f_w)$ as $L_{\mathcal{D},\gamma}$ and $L_{S,\gamma}$ respectively from now on. Our main in-distribution result bounds the gap between true and empirical risks for GCNs, shown in the following theorem. The proof is in Appendix A.1. Theorem 3.1. For any $B > 0$, $l > 1$, let $f_w \in \mathcal{H} : \mathcal{X} \times \mathcal{G} \to \mathbb{R}^k$ be an $l$-layer GCN. Then with probability $\ge 1 - \delta$ over the choice of an i.i.d. size-$m$ training set $S$ from the data distribution $\mathcal{D}$, we have for any $w$: $L_{\mathcal{D},0} \le L_{S,\gamma} + O\left( \sqrt{ \frac{ B^2\, d\, l^2 (h + \ln l) \prod_{i=1}^{l} \|W_i\|_2^2 \sum_{i=1}^{l} \left( \|W_i\|_F^2 / \|W_i\|_2^2 \right) + \ln \frac{m}{\delta} }{ \gamma^2 m } } \right)$ (5) Here $d$ equals one plus the maximum node degree that can be achieved by the data distribution. $l$ is the depth, i.e., the number of layers, of GCNs. $W_i$ is the weight matrix of GCNs in the $i$-th layer. $B$ is the radius of the minimal $\ell_2$ ball that contains all node features, i.e., $\forall v, \|x_v\|_2 \le B$. This improves the bound in (Liao et al., 2020), which is provided below for a better comparison, $L_{\mathcal{D},0} \le L_{S,\gamma} + O\left( \sqrt{ \frac{ B^2\, d^{\,l-1}\, l^2 h \log(lh) \prod_{i=1}^{l} \|W_i\|_2^2 \sum_{i=1}^{l} \left( \|W_i\|_F^2 / \|W_i\|_2^2 \right) + \log \frac{ml}{\delta} }{ \gamma^2 m } } \right)$. (6) The proof of the theorem from (Liao et al., 2020) is an induction over the $l$ layers, in which the spectral norm of the weights and a maximum degree term is multiplied at each step. We observe that it is possible to avoid passing the maximum degree term via a refined argument. This leads to a tightening of one of the main inequalities used in the induction proof, thus in turn resulting in substantial improvements to the overall bound. As can be seen above, we reduce the exponential term $d^{\,l-1}$ to a linear term $d$, which is a significant improvement for graphs even with small node degrees. 4 TOWARDS DEVELOPING A THEORY FOR SIZE GENERALIZATION In this section, we develop an out-of-distribution (OOD) generalization theory for GNNs. Since we adopt a statistical learning viewpoint, there must necessarily be some assumptions relating the training and testing graphs (otherwise the No-Free-Lunch theorem applies). There is a tradeoff between assumptions that are practically relevant, and those for which rigorous guarantees are provable. We have chosen assumptions that we believe strike a balance between those objectives, at least for applications like social networks. Size Generalization Assumptions. We consider the following setup. First, we assume that there exists an extremely large graph $G$ like the user network in Twitter so that one needs to sample subgraphs (e.g., via random walks) for training and testing machine learning models. This is akin to the practical setups of (Grover & Leskovec, 2016; Hamilton et al., 2017). To generate training and testing subgraphs, we run random walks of length $N$ and $M$ respectively on this single large graph, where $M \gg N$, and collect the subgraphs induced by these walks.
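As an illustration of this sampling procedure, the following sketch (using networkx; the helper name and the stand-in graph are ours) builds random-walk-induced subgraphs of a given length, with the start node chosen uniformly at random and each step choosing a uniformly random neighbour.

```python
import random
import networkx as nx

def random_walk_induced_subgraph(G, length, rng=random):
    """Sample the subgraph induced by a simple random walk of the given length."""
    start = rng.choice(list(G.nodes))
    walk = [start]
    for _ in range(length - 1):
        walk.append(rng.choice(list(G.neighbors(walk[-1]))))
    return G.subgraph(set(walk)).copy()

# Training subgraphs come from short walks, test subgraphs from much longer walks.
# G_large = nx.connected_watts_strogatz_graph(2000, 8, 0.1)   # illustrative stand-in graph
# train_graphs = [random_walk_induced_subgraph(G_large, length=10) for _ in range(500)]
# test_graphs  = [random_walk_induced_subgraph(G_large, length=50) for _ in range(100)]
```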
GNNs are then trained on the subgraphs induced by the shorter (length-$N$) walks. In testing, a length-$M$ random-walk-induced subgraph is first sampled from the large graph, and a length-$N$ random-walk-induced subgraph is then sampled from this large test subgraph and fed to the model (see Section 4.2). Random walks are initiated by choosing an initial node uniformly at random from all the nodes in the graph, and at each step there is an equal probability of selecting any of the current node’s neighbors. This is an interesting OOD problem where training and testing graphs come from different distributions determined by the underlying large graph and the random walk sampling with specific lengths. We consider the graph classification problem and assume that the graph label is determined by the majority of node labels within the graph, which is reasonable for many applications that involve homophilic graphs. For the node labeling, we assume it is binary but make no assumptions on how labels are generated. Crucially, we assume nothing about the underlying large graph. Therefore, our setup has advantages over some OOD setups in the literature where a generative model of graphs and labels is explicitly assumed. Relation with In-Distribution Result. We know the relationship between the true error defined on the unknown data distribution $\mathcal{D}$ and the empirical error defined on the size-$m$ training set $S$. Specifically, for any GCN $f$, with probability at least $1 - \delta$, we have a general bound as follows, $L_{\mathcal{D},0} \le L_{S,\gamma} + A(f, \delta, m)$, (7) where we abbreviate the bound as $A(f, \delta, m)$ and omit specific parameters like the maximum node degree $d$. In the size generalization problem, we use random walks with lengths $N$ and $M$ for collecting training and testing subgraphs (data) respectively. We are interested in proving a statement of the following form: for any GCN $f$, we have with probability at least $1 - \delta$, $L_{\mathcal{D}_M,0} \le L_{S_N,\gamma} + B(f, \delta, m, M, N)$. (8) The key detail is that $\mathcal{D}_M$ is the distribution of subgraphs induced by random walks with length $M$ and $S_N$ is the training set of subgraphs induced by random walks with length $N$. Comparing these two losses is the essence of our OOD result. The final term $B(f, \delta, m, M, N)$ is a general bound involving these parameters. Based on the in-distribution result like in Theorem 3.1, we can similarly obtain, $L_{\mathcal{D}_N,0} \le L_{S_N,\gamma} + A_N(f, \delta, m)$, (9) where $\mathcal{D}_N$ is the distribution of subgraphs induced by random walks with length $N$ and $A_N$ is the general bound. The key question boils down to: what is the relationship between $L_{\mathcal{D}_N,0}$ and $L_{\mathcal{D}_M,0}$? This question will be answered in the following sections. 4.1 A PROBABILITY BOUND FOR PARTITION CROSSES The above size generalization problem involves the distributions of random-walk-induced subgraphs from a large graph $G$ with two lengths: $N$ for training and $M$ for testing. Also, $M$ is much larger than $N$. Before we state our results, we would like to explain the simple intuition that motivates our theory: if the random walk always stays within the same partition, then the graph label of the random-walk-induced subgraph can be well predicted, no matter how long the random walk is. Here a partition means the subset of nodes with the same node label. The goal of this section is to find bounds on $M$ for which we can provide OOD guarantees. We begin by considering a special labelling. Special Node Labeling: Sparsest Cut. A set $S$ that minimizes $\phi(S)$ (and has $|S| \le |V|/2$) is called a sparsest cut. For simplicity, assume that $S$ is unique.
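The spectral quantities used throughout this section are straightforward to compute for a concrete graph. Below is a small illustrative sketch (networkx/numpy; the helper names and the example graph are ours) that evaluates $\phi(S)$ for a candidate cut and the second-smallest eigenvalue of the normalized Laplacian.

```python
import numpy as np
import networkx as nx

def conductance(G, S):
    """phi(S) = |E(S, S_bar)| / |S| for a candidate node subset S, as defined in Section 2."""
    S = set(S)
    crossing = sum(1 for u, v in G.edges if (u in S) != (v in S))
    return crossing / len(S)

def lambda_2(G):
    """Second-smallest eigenvalue of the normalized Laplacian L~ = D^{-1/2}(D - A)D^{-1/2}."""
    L = nx.normalized_laplacian_matrix(G).toarray()
    return np.sort(np.linalg.eigvalsh(L))[1]

# Example: a barbell-like graph has a low-conductance cut between its two dense blocks
# and a correspondingly small lambda_2 (the homophily-friendly regime discussed above).
G = nx.barbell_graph(10, 0)
S = list(range(10))                    # one of the two dense blocks
print(conductance(G, S), lambda_2(G))
```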
Using Cheeger’s inequality, we first prove the following probability bounds related to this sampling procedure, thereby identifying the length $M$ for which a random walk is likely to stay within the sparsest cut for $d$-regular graphs. The theorems are as follows. Theorem 4.1. Let $U_M = [u_1, u_2, \ldots, u_M]$ be a length-$M$ random walk over a connected, $d$-regular graph $G$, with $u_1$ chosen from the stationary distribution of the nodes of $G$. If $M \le d / (2^{5/2} \sqrt{\lambda_2})$, then the probability that $U_M$ crosses the sparsest-cut partition at least once is under $1/2$. Here crossing the sparsest-cut partition $S$ means that there exists an edge $(u, v)$ of the random walk satisfying $u \in S$ and $v \in \bar{S}$. $\lambda_2$ is the second-smallest eigenvalue of the normalized Laplacian. We can easily generalize the previous theorem to an arbitrary probability $\delta > 0$ as below. Corollary 4.1.1. If $M \le (\delta d) / (2^{3/2} \sqrt{\lambda_2})$, the probability of the above random walk $U_M$ crossing over the sparsest-cut partition at least once is at most $\delta$. General Node Labeling. Theorem 4.1 is restrictive in that it requires the partition $S$ to be the sparsest cut. We now modify the proof to yield a quantity that can work for any node labelling. Specifically, let $\varphi$ be any boolean (i.e., $\{0, 1\}$-valued) labelling on the vertices of the graph. Let the positive node labelling of $\varphi$ be $S = \{v \in V(G) : \varphi(v) = 1\}$. We are interested in bounding the probability that a random walk of length $M$ includes an edge that crosses the positive node labelling $S$, i.e., an edge $(u, v)$ with $u \in S$ and $v \in \bar{S}$. Theorem 4.2. Let $\varphi$ be a boolean labelling on the nodes of a connected, $d$-regular graph $G$ with positive node labelling $S$ (a 0-1 valued vector with $\varphi[i] = 1$ if $v_i \in S$). Let $U_M = [u_1, u_2, \ldots, u_M]$ be a length-$M$ random walk over $G$, with $u_1$ chosen from the stationary distribution of the nodes of $G$. Let $X_i$ be the indicator variable of the event that the $i$-th edge of $U_M$ crosses $S$, i.e., $X_i = \mathbf{1}[u_i \in S, u_{i+1} \in \bar{S}]$, and $Y_k = \sum_{i=1}^{k} X_i$ is the number of times that $U_M$ crosses $S$ in the first $k$ steps. Let $\varphi' = \varphi - \mathbf{1} \cdot (|S|/|V|)$ and $\alpha = \varphi'^{\top} L \varphi' / \|\varphi'\|_2^2$. The conclusion is that: if $M \le \frac{d}{2^{5/2} \sqrt{\alpha}}$ then $\Pr[Y_M \ge 1] \le \frac{1}{2}$. Corollary 4.2.1. If $M \le (\delta d) / (2^{3/2} \sqrt{\alpha})$, the probability that the above random walk $U_M$ crosses over the positive node labelling of $\varphi$ at least once is at most $\delta$, i.e., $\Pr[Y_M \ge 1] \le \delta$. The formula for $\alpha$ arises from an alternative formulation of Cheeger’s inequality which expresses $\lambda_2$ using a Rayleigh quotient (Spielman, 2015), in which $y$ may be viewed as a real-valued labelling on the vertices: $\lambda_2 = \min_{y \perp d} (y^{\top} L y) / (y^{\top} D y)$. 4.2 SIZE GENERALIZATION ERROR Recall that, in the size generalization setup, we first train a GNN model $f$ on subgraphs induced by many length-$N$ random walks on $G$. Then during testing, given a large testing subgraph $G_M$ induced by a length-$M$ random walk on $G$, we sample a subgraph $G_N$ via a length-$N$ random walk on $G_M$ and feed it to $f$ to compute the empirical (classification) error for $G_M$. If all nodes of $G_M$ are within a single positive node labelling, then all of their labels are the same. Therefore, no matter which subgraph $G_N$ is sampled, the generalization error (i.e., the probability of making a wrong prediction) for $G_M$ should be the same as the one for $G_N$. Based on this reasoning, we have the following result. Theorem 4.3 (Size Generalization Error).
For any $\delta \in [0, 1)$, if we restrict $M$, the size of the large random-walk-induced subgraph, such that $M \le (\delta d) / (2^{3/2} \sqrt{\alpha})$, then the generalization error $L_{\mathcal{D}_M,0}$, i.e., the probability of a wrong prediction on length-$M$-random-walk-induced subgraphs, satisfies $L_{\mathcal{D}_M,0} \le \delta + L_{\mathcal{D}_N,0}$. (10) where $L_{\mathcal{D}_N,0}$ is the in-distribution generalization error of $f$ on length-$N$ random-walk-induced subgraphs. Note that this theorem explicitly constrains $M$, whereas the only condition on $N$ is that $L_{\mathcal{D}_N,0}$ is small. Proof. Observe that, for any events $F$ and $E$, we have $\Pr[F] \le \Pr[E] + \Pr[F \mid \bar{E}]$. Let $E$ be the event that a length-$M$ random walk crosses the positive node labelling of the ground truth labels, and let $F$ be the event that we make a wrong prediction on the induced subgraph $G_M$. Theorem 3.1 bounds the second term, $\Pr[F \mid \bar{E}]$, because the generalization error on $G_M$ is the same as the one on $G_N$ (subgraphs induced by length-$N$ random walks) when $G_M$ does not cross the positive node labelling. Corollary 4.2.1 bounds the first term. Substituting the values from the previous two theorems yields the claimed inequality. We already know the bound of the in-distribution generalization error $L_{\mathcal{D}_N,0}$ due to Theorem 3.1 — let us call this quantity $\hat{\delta}$. Using this we can obtain the final result for GCNs under our OOD setup. Theorem 4.3 simply states that, if the length $M \le (\delta d) / (2^{3/2} \sqrt{\alpha})$, then with probability at least $1 - \hat{\delta}$, the OOD generalization error on large subgraphs (induced by length-$M$ random walks) is at most the sum of the error $\delta$ and the in-distribution generalization bound on small subgraphs (induced by length-$N$ random walks). 5 EXPERIMENTS 5.1 IN-DISTRIBUTION: NUMERICAL PAC-BAYES BOUND COMPUTATION We conduct multi-class graph classification experiments to compare our improved bound to the original PAC-Bayes bound in (Liao et al., 2020). We use the same GCN model, adopt the same datasets, i.e., 6 synthetic datasets obtained from random graph models and 3 real-world graph datasets used in (Yanardag & Vishwanathan, 2015), and follow the same experimental protocol. After training a GCN on each dataset, we compute the theoretical bounds using the final model. The numerical comparisons of log bound values are shown in Figure 2. It is clear that our new bounds are significantly tighter and reduce the bound values by several orders of magnitude. The gap is further increased as the depth increases. The tables of bound values and the specific equations to compute them are provided in Appendix B.1. 5.2 OUT-OF-DISTRIBUTION: EFFICACY OF SIZE GENERALIZATION We performed OOD experiments to validate the values of the upper bound on the size of large subgraphs $M$ that was set in Theorem 4.1 and its related theorems, for synthetic graphs. We also performed experiments on synthetic graphs that were non-homophilic with the same values of $M$ and $N$, to examine size generalization in this case. We also examined the general feasibility of size generalization in real-world social network data. For synthetic graphs, we calculated this theoretical value for the upper bound, and selected the large subgraph size $M$ and the small subgraph size $N \ll M$ accordingly. For the real-world case, we chose constant values of $N = 10$ and $M = 50$. For each subgraph, we assign as its graph label the label observed most often among its nodes.
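A small sketch of this majority-label rule and of how an OOD dataset could be assembled from it; the helper names are ours, and the commented lines reuse the sampling helper sketched in Section 4.

```python
from collections import Counter

def subgraph_label(subgraph, node_labels):
    """Graph label = the node label observed most often among the subgraph's nodes."""
    return Counter(node_labels[v] for v in subgraph.nodes).most_common(1)[0][0]

# Building labelled train/test sets from the random-walk sampling helper sketched earlier:
# train = [(g, subgraph_label(g, node_labels))
#          for g in (random_walk_induced_subgraph(G_large, N) for _ in range(500))]
# test  = [(g, subgraph_label(g, node_labels))
#          for g in (random_walk_induced_subgraph(G_large, M) for _ in range(100))]
```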
After sampling datasets of subgraphs of sizes M and N , we train GCN models on the dataset with N -length random walks and measure their performance on the training set, the validation set (a smaller data set generated the same way as the train set), and the testing set (a set of subgraphs inuced by length-M random walks). On the test set we record both the performance when inputting the whole large subgraph (Test error), as well as when performing the sampling procedure used for Theorem 4.3, in which we sample an induced subgraph from an N -length random walk for each data item (Sampling-test error). Synthetic Graphs. We adopt the CSBMs (Deshpande et al., 2018) to generate graphs that exhibit the homophily property. We use two blocks with much higher probability of connections inside the same block than between blocks, which leads to barbell-like graphs. In the non-homophilic case, we set these probabilities to be equal. We generate binary node labellings via the sparsest cut. CSBMs generate node features via a Gaussian mixture where individual choices of the component are determined by the node label. Real-world Graphs. We used social network data for Twitch streamers from (Rozemberczki et al., 2019). Each node is a streamer (Twitch user), and nodes are connected to mutual friendships. Node features are 3,169 different binary indicators of a wide array of attributes, including games liked, location, etc. Each node is labelled with a boolean value of whether the livestreamer has indicated that they use explicit language. In all cases, the GCN model achieves OOD test accuracy on large-subgraph that was comparable to ID accuracy on small-subgraph if not outright better. This is even the case when some of the constraints are violated: no d-regularity constraint was imposed for any of the datasets, and performance was still good for the test error which did not involve further subgraph sampling. This indicates that the theory is promising in practice for more general forms of size generalization. The accuracy on the train set, test set with subgraph sampling, and unaltered test set are shown in Figure 2, and the numerical values are in Appendix B.2. For many cases including all real-world cases, the test accuracy was actually higher than the training accuracy. This could potentially indicate that in the cases where size generalization can be guaranteed to work well, the GCN model benefits significantly from extra node information. It is also possible that because of the sampling procedure, there is overlap in nodes between the training and test sets, since they come from random-walk sampling procedures that naively select a uniformly random node as the initial node. 6 DISCUSSION In this work we have expanded the theoretical understanding of the generalizations of GNNs in both indistribution and out-of-distribution settings, deriving new theoretical guarantees in each setting. The results for in-distribution learning improve upon the state-of-the art PAC-Bayes bounds in (Liao et al., 2020), and the results for out-of-distribution learning provide insight into a practical learning setting under which GNNs are guaranteed to perform effective size generalization. Future directions for the in-distribution understanding would involve lowering the dependencies of other variables like the spectral norm of weights. Generalizing the results to other problems like node classification would also be interesting. 
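For reference, homophilic synthetic graphs of the kind described above could be generated along the following lines; this is an illustrative two-block stochastic block model with Gaussian-mixture node features, not the authors' exact generator, and the parameter values only mirror those reported in Appendix B.2.2.

```python
import numpy as np
import networkx as nx

def two_block_csbm(n_per_block=1000, p_in=8/1000, p_out=6e-5, sigma2=2.0, seed=0):
    """Illustrative two-block SBM with Gaussian-mixture node features (homophilic setting)."""
    rng = np.random.default_rng(seed)
    G = nx.stochastic_block_model([n_per_block, n_per_block],
                                  [[p_in, p_out], [p_out, p_in]], seed=seed)
    labels, feats = {}, {}
    for v in G.nodes:
        block = 0 if v < n_per_block else 1        # nodes are created in block order
        labels[v] = block
        mean = np.full(3, -0.5 if block == 0 else 0.5)
        feats[v] = rng.normal(mean, np.sqrt(sigma2))
    return G, labels, feats
```

Setting p_out equal to p_in would give the non-homophilic variant discussed in Appendix B.2.2.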
In the out-of-distribution case, a number of different observations in experimentation indicate that the theory can still be very much expanded. We have identified cases in real-world datasets where well beyond the bounds on size set forth in the theory, and in all experiments the d-regularity assumption is violated, yet GCN size generalization is still effective in these cases. Expansions to the theory, including generalizing to non-d-regular graphs, can be explored to explain cases like these. A MATHEMATICAL PROOFS A.1 PROOF OF THEOREM 3.1 The proof is as follows, and makes up the remainder of the chapter. A.1.1 IMPROVEMENT ON DEGREE DEPENDENCY In (Liao et al., 2020), a generalization bound is attained on graph convolutional networks; this bound is dependent on a bound on the maximum perturbation of the function value when a perturbation U is applied to the weights W , presented in that paper’s Lemma 3.1. The bound is as follows |fw+u(X,A)− fw(X,A)|2 ≤ eBd l−1 2 ( l∏ i=1 ∥Wi∥2 ) l∑ k=1 ∥Uk∥2 ∥Wk∥2 (11) The primary goal of this set of improvements is to reduce the factor of d l−1 2 . For each layer, let Hi ∈ R|V |×h be the matrix containing the hidden embeddings of all of the nodes in its rows, with h being the hidden dimension. In the process of the proof of Theorem 3.1, we are able to show the following: Φj = max i |Hj [i, :]|2 ≤ d j 2B j∏ i=1 ∥Wi∥2 (12) Ψj = max i |H ′j [i, :]−Hj [i, :]|2 ≤ Bd j 2 ( j∏ i=1 ∥Wi∥2 ) j∑ k=1 ∥Uk∥2 ∥Wk∥2 ( 1 + 1 l )j−k (13) |∆l|2 = ∣∣∣∣ 1n1nH ′l−1(Wl + Ul)− 1n1nHl−1Wl ∣∣∣∣ 2 ≤ eBd l−1 2 ( l∏ i=1 ∥Wi∥2 )[ l∑ k=1 ∥Uk∥2 ∥Wk∥2 ] (11) We begin to simplify these bounds by removing the dependency on d j 2 , replacing it instead with a fixed power of d1/2 that remains constant for every layer, and thus in the final result of Equation 11 as well. Theorem A.1. For all 1 ≤ j ≤ l − 1, we have: Φj ≤ √ d B k∏ i=1 ∥Wi∥2 (14) Ψj ≤ ( 1 + ( 1 + 1 l )j) B √ d ( j∏ i=1 ∥Wi∥2 ) (15) Finally, |fw+u(X,A)− fw(X,A)|2 = |∆l|2 ≤ ( e+ 1 + 2 l ) B √ d l∏ i=1 ∥Wi∥2 (16) The proof follows from a lemma about the 2-norm of any node representation at any layer: Lemma A.1.1. We have, for all k ∈ [n] and for j ∈ [l]: |Hj [u, :]|2 ≤ B √ deg(u) ( j∏ i=1 ∥Wi∥2 ) (17) Proof. We prove this by induction. By definition |H0[u, :]|2 ≤ B and thus |H0[u]| ≤ √ deg(u)B 0∏ k=1 ∥Wk∥2. We assume that for all u, we have Hj−1[u, :] ≤ √ deg(u)B j−1∏ k=1 ∥Wi∥2. From these statements we are able to deduce |Hj [u, :]| ≤ ∑ v∈Nu L̃[u, v]|Hj−1[v, :]|2∥Wj∥2 ≤ ∑ v∈Nu 1√ deg(u)deg(v) [√ deg(v)B j−1∏ k=1 ∥Wk∥2 ] ∥Wj∥2 = ∑ v∈Nu 1√ deg(u) B ( j−1∏ k=1 ∥Wk∥2 ) ∥Wj∥2 = deg(u)√ deg(u) B j∏ k=1 ∥Wk∥2 = √ deg(u)B j∏ k=1 ∥Wk∥2 (18) In these inequalities we use the fact that L̃[i, j] = (A + I)ij/ √ deg(i)deg(j), and we assume the simple case where there are unweighted edges so that (A+ I)ij is 1 if and only if nodes i and j are connected and 0 otherwise. By Lemma A.1.1, we have that Φj = maxi |Hj [i, :]|2 ≤ √ dB ∏j i=1 ∥Wi∥2, which is exactly the result of equation (14). Claim A.1. For all v ∈ [n], |∆j [v, :]|2 ≤ B √ deg(v) ( 1 + 1l )j (∏j i=1 ∥Wi∥ )(∑j i=1 ∥Ui∥ ∥Wi∥ ) Proof. Proof: We use induction assuming this is true for ∆j−1. 
We then have |∆j [v, :]|2 ≤ ∑ u∈N (v) L̃[v, u]|H ′j−1[u, :]−Hj−1[u, :]|2∥Wj + Uj∥2 + ∑ u∈N (v) L̃[v, u]|Hj−1[u, :]|2∥Uj∥2 ≤ [ B ( 1 + 1 l )j−1(j−1∏ i=1 ∥Wi∥ )( j−1∑ i=1 ∥Ui∥2 ∥Wi∥2 ) ∥Wj + Uj∥+B∥Uj∥ j−1∏ i=1 ∥Wi∥ ] (19) ∑ u∈N (v) L̃[v, u] √ deg(u) = B √ deg(v) j−1∏ i=1 ∥Wi∥ [ ∥Wj + Uj∥ ( 1 + 1 l )j−1(j−1∑ i=1 ∥Ui∥2 ∥Wi∥2 ) + ∥Uj∥ ] = B √ deg(v) j∏ i=1 ∥Wi∥ [ ∥Wj + Uj∥2 ∥Wj∥2 ( 1 + 1 l )j−1(j−1∑ i=1 ∥Ui∥2 ∥Wi∥2 ) + ∥Uj∥2 ∥Wj∥2 ] ≤ B √ deg(v) j∏ i=1 ∥Wi∥ [( 1 + 1 l )j (j−1∑ i=1 ∥Ui∥2 ∥Wi∥2 ) + ∥Uj∥2 ∥Wj∥2 ] ≤ B √ deg(v) j∏ i=1 ∥Wi∥ ( 1 + 1 l )j ( j∑ i=1 ∥Ui∥2 ∥Wi∥2 ) (20) ∆l has a slightly different formulation but it has a very similar bound: |∆l|2 = ∣∣∣∣ 1n1n ( L̃H ′l−1(Wl + Ul)− 1 n 1nL̃Hl−1(Wl) )∣∣∣∣ 2 = 1 n ∣∣∣1nL̃(H ′l−1 −Hl−1)(Wl + Ul) + 1nL̃Hl−1(Ul)∣∣∣ 2 ≤ 1 n n∑ i=1 |∆l−1[i, :]|2∥Wl + Ul∥2 + 1 n n∑ i=1 |Hl−1[i, :]|2∥Ul∥2 ≤ B √ d l−1∏ i=1 ∥Wi∥ ( 1 + 1 l )l−1( l−1∑ i=1 ∥Ui∥2 ∥Wi∥2 ) ∥Wl + Ul∥ +B √ d∥Ul∥2 l−1∏ i=1 ∥Wi∥2 ≤ B √ d l∏ i=1 ∥Wi∥ [( 1 + 1 l )l( l−1∑ i=1 ∥Ui∥ ∥Wi∥ ) + ∥Ul∥ ∥Wl∥ ] ≤ B √ d l∏ i=1 ∥Wi∥ ( 1 + 1 l )l( l∑ i=1 ∥Ui∥ ∥Wi∥ ) ≤ eB √ d l∏ i=1 ∥Wi∥ ( l∑ i=1 ∥Ui∥ ∥Wi∥ ) (21) From this we have proven a tighter bound on the final output of the GNN under perturbation, which we will use to calculate probabilistic and generalization bounds. A.1.2 IMPROVEMENT ON PROBABILISTIC BOUNDS USING RANDOM MATRIX THEORY In (Liao et al., 2020), for all i ∈ [l], with l being the number of layers, the prior and the distribution of the perturbations Ui ∈ Rdi+1×di ,, where all hidden dimensions di are upper-bounded by a value h, were generated by a normal distribution N (0, σ2I), and give probabilistic bounds on the operator norms ∥Ui∥ as P (∀i, ∥Ui∥ ≤ t) with probability greater than 1 − 2lh exp−t2/2hσ2. We improve these bounds using theorems on random matrices from work on high-dimensional probability, namely (Vershynin, 2018). Theorem A.2 (Theorem 4.4.5 in (Vershynin, 2018)). Let A be a matrix in Rm×n, where the entries Aij are independent, mean-zero, sub-Gaussian random variables. Then, for all t > 0 we have ∥A∥ ≤ CK( √ m+ √ n+ t) with probability ≥ 1− exp(−t2), where K = maxi,j ∥Aij∥ψ2 and C is some constant. In the above theorem the norm ∥X∥ψ2 is defined as inf{t : E[exp(X2/t2)] ≤ 2}. In Example 2.5.8 in (V ershynin, 2018), it is shown that if X ∼ N (0, σ2) then it has ∥X∥ψ2 ≤ Cσ. Corollary A.2.1. If U ∈ Rm×n is a random matrix generated with the distribution N (0, σ2I) (i.e. all entries are independent and identically distributed Gaussian random variables), then we have ∥U∥ ≤ σ( √ m+ √ n+ t) with probability at least 1− 2 exp(−t2). With a change of variable, we are able to calculate the following: P (∀i.∥Ui∥2 ≤ t) ≥ 1− P (∃i, ∥Ui∥ > t) ≥ 1− l∑ i=1 P (∥Ui∥ > t) ≥ 1− 2l exp (( t Cσ − 2 √ h )2) And by setting the right-hand side to 1/2, we obtain: t = Cσ(2 √ h+ √ ln(4l)) Using the above equation combined with our bound we are able to get |fw+u(X,A)− fw(X,A)|2 ≤ eB √ dl ( l∏ i=1 ∥Wi∥2 ) l∑ k=1 ∥Uk∥2 ∥Wk∥2 = eB √ dβll l∑ k=1 ∥Uk∥2 β ≤ eB √ dβl−1l(σ(2 √ h+ √ ln(4l))) ≤ e2B √ dβ̃l−1(σ(2 √ h+ √ ln(4l))) ≤ γ 4 (22) Here β̃ is an estimated of β such that |β − β̃| ≤ β/l that can be generated a priori; we discuss this in a later subsection. We can set σ = γ 4e2Bβ̃ √ dC ( 2 √ h+ √ ln(4l) ) to satisfy the final inequality. 
From this we can calculate the KL-divergence between the posterior and the prior: KL(Q∥P ) = |w| 2 2 2σ2 = 16e4B2dl2β2(l−1) ( 2 √ h+ √ ln(4l) )2 2γ2 l∑ i=1 ∥Wi∥F ≤ O ( B2dβ2ll2(h+ ln(l)) γ2 l∑ i=1 ∥Wi∥2F β2 ) ≤ O ( B2dl2 (h+ ln(l)) ∏l i=1 ∥Wi∥2 γ2 l∑ i=1 ∥Wi∥2F ∥Wi∥2 ) (23) From this we are able to calculate the generalization bound and thus prove the theorem. LD,0 ≤ LS,γ +O √√√√B2dl2(h+ ln(l))∏li=1 ∥Wi∥22∑li=1 ∥Wi∥2F∥Wi∥22 + ln mδ γ2m (24) A.1.3 SELECTING PARAMETER β̃ The prior normal distribution’s variance parameter σ2 is dependent on β, but β cannot be used in its calculation because that information is only known after model training. Instead, we can select a parameter β̂ such that |β − β̂| ≤ 1l β and thus 1 eβ l−1 ≤ β̂l−1 ≤ eβl−1 (as per equation 33 in (Liao et al., 2020)). As in (Liao et al., 2020) we only have to consider values of β in the range ( γ 2B √ d )1/l ≤ β ≤ ( γ √ m 2B √ d )1/l as otherwise the generalization bound holds trivially because LD,0 ≤ 1 by definition. If we consider values of β̂ that cover this interval then by union bound we are still able to get a high probability; the covering C needs to have |C| = l2 (m 1 2l − 1). A.2 PROOFS OF OUT-OF-DISTRIBUTION PROBABILITY BOUNDS A.2.1 PROOF OF THEOREM 4.1 Proof. Because u1 is chosen from the stationary distribution (uniform over vertices, because G is connected and d-regular), then for all i ≥ 1 the distribution for ui, ui+1 follows the distribution Unif[E], where E is the edge set of the graph. Let S be the sparsest-cut partition of G. Let Xi be the indicator of the event that the vertex pair is in the set of edges crossing the partition, namely 1{(ui, ui+1) ∈ E(S, S̄)}. By linearity of expectation, this means that E[Xi] = |E(S, S̄)|/|E|. Furthermore, let Yk be the cumulative number of edges crossing the partition along the first k steps of the random walk. This is expressed nicely as Yk = ∑k i=1 Xi. Thus E[Yk] = k |E(S,S̄)| |E| . Applying Markov’s inequality, we get Pr[Yk ≥ tk|E(S, S̄)|/|E|] ≤ 1/t. Suppose we wish to examine under what conditions we can ensure that we do not cross over the partition at all in M steps, i.e. Pr[YM ≥ 1] ≤ 1/2. From the inequality above, we are able to get that Pr [ YM ≥ 2M |E(S, S̄)| |E| ] ≤ 1 2 just by setting k = M and t = 2. We then use the following basic fact: if we have an inequality of the form Pr[Z ≥ z] ≤ 12 , then Pr[Z ≥ z ′] ≤ 12 for any z ′ ≥ z. Let E(S) denote the set of edges connected to any vertex in S. Because |E(S)| ≤ |E|, then we have |E(S, S̄)|/|E| ≤ |E(S, S̄)|/|E(S)|. Furthermore, since we assume a connected graph, |E(S)| ≥ (d/2)|S|, and thus |E(S, S̄)|/|E(S)| ≤ |E(S, S̄)|/[(d/2)|S|]. 2 Thus using the fact above we can deduce Pr [ YM ≥ 2M |E(S, S̄)| (d/2)|S| ] ≤ 1 2 Note that |E(S, S̄)|/|S| is the conductance of the graph ϕ(G), because S was defined to be the sparsest-cut partition of G. Thus we can apply the fact again with Cheeger’s inequality to get Pr [ YM ≥ 2M(2/d) √ 2λ2 ] ≤ 1 2 And since we are interested in Pr[YM ≥ 1], we can thus set 2M √ 2λ2 ≤ 1 to get a necessary condition for M , from which we achieve M ≤ d 25/2 √ λ2 This completes the proof. 2It is important to note that this specific dependency of |E(S)| on d requires G to be a d-regular graph. If the theorem is to be expanded to more general cases, one may use the simple inequality |E(S)| ≥ |S|. A.2.2 PROOF OF THEOREM 4.2 Proof. 
The quantity $\varphi'$ is a transformation of $\varphi$ that retains all the information contained in $\varphi$ while still being orthogonal to the all-ones vector $\mathbf{1}$, so that we can apply Cheeger’s inequality. This orthogonalization is rather standard and can be found in (Spielman, 2015). Let $s = |S|/|V(G)|$. Note that $s \in [0, 1]$, and without loss of generality we can assume that $s \le 1/2$. We observe that the $v$-th coordinate of the vector $\varphi'$ corresponds to the mapping $\varphi'(v) = \begin{cases} 1 - s & v \in S \\ -s & v \notin S \end{cases}$ (25) This ensures that $\varphi'$ is orthogonal to $\mathbf{1}$, as $\varphi'^{\top}\mathbf{1} = \sum_{i=1}^{n} \varphi'(v_i) = |S|\left(1 - \frac{|S|}{|V|}\right) + (|V| - |S|)\left(-\frac{|S|}{|V|}\right) = |S| - |V| \cdot \frac{|S|}{|V|} = 0$. We then note that $\|\varphi'\|_2^2 = \sum_{i=1}^{n} \varphi'(v_i)^2$ is equal to $s(1 - s)|V|$, and we can infer $|S|/2 \le \|\varphi'\|_2^2 \le |S|$; the first inequality holds since $s \le 1/2$. The number of edges $|E(S, \bar{S})|$ crossing the labelling-partition is equal to $\varphi'^{\top} L \varphi'$, as $\varphi'^{\top} L \varphi' = \sum_{(u,v) \in E} ((\varphi(u) - s) - (\varphi(v) - s))^2 = |E(S, \bar{S})|$, where $L$ is the Laplacian matrix of $G$. Thus the quantity $2M \frac{|E(S,\bar{S})|}{|E(S)|} \le 2M \frac{\varphi'^{\top} L \varphi'}{|E(S)|} \le 2M \frac{\varphi'^{\top} L \varphi'}{(d/2)|S|}$. We are able to get the second inequality because we know $|E(S)| \ge (d/2)|S|$. Because we know that $|S| \ge \|\varphi'\|_2^2$, we can then upper bound this further by $2M \frac{\varphi'^{\top} L \varphi'}{(d/2)\|\varphi'\|_2^2}$. Substituting this quantity in the proof of Theorem 4.1, we achieve the desired bound for $M$. B EXPERIMENTAL METHODOLOGY AND RESULTS B.1 IN-DISTRIBUTION EXPERIMENTS The datasets used are a combination of synthetic graphs (Erdos-Renyi and Stochastic Block Model), real-world graphs (IMDBBINARY and IMDBMULTI, data from the Internet Movie Database, and COLLAB, a dataset of academic collaborations), and a bioinformatics dataset, PROTEINS, from (Yanardag & Vishwanathan, 2015). Two different GCN network depths of $l = 4$ and $l = 6$ were used. We use the following formulae for the generalization bound from (Liao et al., 2020) and our new bound, using an explicit constant factor of 42 from (Liao et al., 2020): $\mathrm{GenGap}(B, d, l, \{W_i\}_{i=1}^{l}) = \sqrt{ \frac{ 42 \cdot B^2 d^{\,l-1} l^2 \ln(4lh) \prod_{i=1}^{l} \|W_i\|_2^2 \sum_{i=1}^{l} \frac{\|W_i\|_F^2}{\|W_i\|_2^2} }{ \gamma^2 m } }$ (26) Similarly, the formula used for the new PAC-Bayes generalization bound is $\mathrm{GenGap}(B, d, l, \{W_i\}_{i=1}^{l}) = \sqrt{ \frac{ 42 \cdot B^2 d l^2 (h + \ln(l)) \prod_{i=1}^{l} \|W_i\|_2^2 \sum_{i=1}^{l} \frac{\|W_i\|_F^2}{\|W_i\|_2^2} }{ \gamma^2 m } }$ (27) We remove an additive $O(\log m)$ term in the numerator within the square root after validating that it was numerically negligible. Tables below are for calculated bounds in the case of 4 layers (Table 1) and 6 layers (Table 2). B.2 OUT-OF-DISTRIBUTION EXPERIMENTS B.2.1 METHODOLOGY Experiments were performed to measure the effectiveness of size generalization of GCN models when applied to the size generalization learning case described in Section 4, where the learning task is classifying the most common node label in sub-communities of a large underlying network. For each of the synthetic graphs, we calculate an upper bound for $M$ set in the out-of-distribution inequalities we have derived. Since the graphs examined are not $d$-regular, we calculate a value of $\alpha$ as $\varphi^{\top} L \varphi / \varphi^{\top} D \varphi$, where $L$ is the graph Laplacian matrix and $D$ is the diagonal degree matrix, to apply to the formula set in Theorem 4.2. Furthermore, we use a more permissive value of $\delta = 0.75$. Similar upper bounds for $M$ were computed for the real-world cases, but the values were too small for experimental use. In this case, we just set $N = 10$ and $M = 50$ to attempt to gain insight about the size generalization task’s general feasibility in real-world cases. All experiments were performed with the Adam optimizer (Kingma & Ba, 2015), with a constant learning rate of 0.01.
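A sketch of how the α value and the resulting cap on M described above could be computed for a labelled graph; the helper names are ours, and using the average degree in place of d for non-regular graphs is an assumption of this sketch, not something the paper specifies.

```python
import numpy as np
import networkx as nx

def alpha_value(G, labels):
    """alpha = (phi^T L phi) / (phi^T D phi) for a 0-1 node labelling, as used in Appendix B.2.1."""
    nodes = list(G.nodes)
    phi = np.array([labels[v] for v in nodes], dtype=float)
    L = nx.laplacian_matrix(G, nodelist=nodes).toarray()
    D = np.diag([G.degree(v) for v in nodes])
    return (phi @ L @ phi) / (phi @ D @ phi)

def max_test_walk_length(G, labels, delta=0.75):
    """Cap on M from Corollary 4.2.1: M <= (delta * d) / (2**1.5 * sqrt(alpha)).

    d is taken here as the average degree (our assumption), since these graphs are not d-regular.
    """
    alpha = alpha_value(G, labels)
    d = 2 * G.number_of_edges() / G.number_of_nodes()
    return delta * d / (2 ** 1.5 * np.sqrt(alpha))
```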
Models were trained for 10 epochs, with randomly selected batches of size 32. The models used are different parameterizations of the Graph Convolutional Network as implemented by the library pytorch-geometric (Fey & Lenssen, 2019). For synthetic experiments, which used smaller graphs with generally smaller degree, the parameterization was 3 layers with a hidden dimension of 5, and for the real-world data case, the parameterization was 10 layers with a hidden dimension of 32. For each underlying graph, we generate three train/validation sets (of length-$N$ random walks) and test sets (of length-$M$ random walks), and we record the loss and accuracy as the average of the three runs. B.2.2 SYNTHETIC GRAPH EXPERIMENTS A large underlying synthetic graph was generated using the stochastic block model, with some adjustment to ensure that the randomly-generated graph had a single connected component. By controlling the intra- and inter-block connection probability values, we are able to control the homophily of the generated graph, which we validate by measuring the value of $\lambda_2$, as well as calculating the sparsest cut via “Cheeger rounding” (Spielman, 2015) and subsequently the conductance of the graph with respect to this partition. In the experiments, we generated a graph with approximately 2000 nodes, with the in-block connectivity probability set to $8/1000$ and the inter-block connectivity set to $6/10^5$. Node features are generated from a mixture of multivariate Gaussian distributions with dimension 3, mean $(-0.5, -0.5, -0.5)$ for one block, and mean $(0.5, 0.5, 0.5)$ for the other; the covariance matrix is a diagonal matrix (each coordinate is independent) of variance either 2, 4, or 8. Experiments were also performed on non-homophilic synthetic graphs. Like the homophilic synthetic graphs, they are generated with the stochastic block model with about 2000 nodes, about 1000 of each label, and with the same mixture-of-Gaussian node features. However, the parameters used for generating connections are crucially different. The probabilities of connection between nodes of the same block and nodes of a different block are set to be equal, with both being set to $8/1000$. These settings ensure that a node’s label is independent of the labels of its neighbors, so the homophily property is not exhibited. Contrasting with the results shown for the homophilic synthetic graphs, the non-homophilic graph results show that the out-of-distribution test accuracy is less than the training accuracy. This further illustrates the association between homophily and size generalization. B.2.3 REAL-WORLD GRAPH EXPERIMENTS Since the node features are indicators, we encoded the node feature information by using the positional encoding mechanism introduced in the Transformer model (Vaswani et al., 2017). For each node, each of its integer indicators was encoded via positional embedding, and the encodings were aggregated via sum.
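A sketch of one way such an encoding could be implemented; the sinusoidal form follows the Transformer, but the encoding dimension and helper names are our illustrative choices rather than the paper's exact configuration.

```python
import numpy as np

def sinusoidal_encoding(index, dim=32):
    """Transformer-style sinusoidal encoding of a single integer indicator id."""
    pos = np.arange(dim // 2)
    angles = index / (10000 ** (2 * pos / dim))
    return np.concatenate([np.sin(angles), np.cos(angles)])

def encode_node(indicator_ids, dim=32):
    """Encode a node's active binary indicators (given as integer ids) and aggregate by sum."""
    if len(indicator_ids) == 0:
        return np.zeros(dim)
    return sum(sinusoidal_encoding(i, dim) for i in indicator_ids)
```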
1. What is the focus of the paper regarding generalization bounds for GNNs? 2. What are the strengths and weaknesses of the proposed approach, particularly in tightening the bounds for in-distribution and analyzing size generalization? 3. Do you have any questions or concerns regarding the experimental results and their interpretation? 4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? 5. Are there any minor comments or suggestions for improving the paper?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper
The objective of this paper is to provide bounds on the generalization of GNNs in both the in-distribution and out-of-distribution setting. For the in-distribution case, the authors tighten the bounds provided in Liao et al. (2020) by scaling down two separate terms in the PAC-Bayes bound. For the out of distribution setting, the authors analyze the specific issue of size generalization (GNNs trained and tested on subgraphs of differing sizes); the authors provide a bound on the generalization error motivated by the fact that for homophilous graphs, an increase in size does not affect the graph classification with a certain probability. The setup for the in-distribution generalization error is as follows: there exists a distribution over the set of graphs, node features and graph labels. A GCN is trained on m samples from the distribution, and then the task is to bound the gap between the true risk (probability the graph label is off by a margin gamma) and the empirical risk (the training error). Liao et al. (2020) provides a bound for the above gap. The present paper tightens two terms. First, the authors claim that the gap does not grow exponentially with the maximum degree but linearly. The authors show that when performing induction over the layers of the GCN, due to normalization, it is possible to maintain a linear dependency on degree (pages 13-14). Second, the authors tighten a separate term by utilizing a random matrix theory theorem from (Vershynin, 2018). The setup for size generalization (out of distribution) is as follows: there exists a very large graph G. For training, a GCN is trained on subgraphs of G generated by performing random walks of length N and taking the induced subgraph. For testing, however, the test subgraph is generated by a random walk of length M >> N. The goal is still to perform graph classification. Every node in the graph has a label and the subgraph label is simply the most common ground truth or predicted node label. The core tool needed to bound the size generalization error is the observation that, due to homophily, it is possible to bound the probability that a random walk of length M reaches a node whose label differs from that of the initial node. In the case where the random walk does not reach a node of a differing label, the length of the random walk does not impact the subgraph label.
Strengths And Weaknesses
Pros: The paper tackles an important topic in better understanding the theoretical generalization bounds of GNNs. Further, the paper is comprehensive in looking at both in- and out-of-distribution generalization. The paper is clearly written and understandable for a reader familiar with GNN/GCNs but not with theoretical generalization bounds of these models.
Cons: For the in-distribution bounds, it is not fully clear what the significance / level of contribution of tightening the bounds in Liao et al. is. To me, this could be better shown with a more detailed analysis of the experimental results in Sec. 5.1, in particular for the real-world graphs. As is, the log-generalization gap values are difficult to interpret and put into perspective. It would be helpful to have more specification for when the expression for M in Theorem 4.3 is compatible with the condition that M >> N. In analyzing the size generalization, it can be argued that the paper translates an out-of-distribution question into an in-distribution one.
Specifically, the paper largely sidesteps situations in which it is desirable to test the GNN on the large subgraph itself instead of sampling from the test subgraph, for instance, the setting considered in “From Local Structures to Size Generalization in Graph Neural Networks”. It is helpful that the authors explain the surprising findings in Figures 2c and 2f; however, it would be helpful to also include cases where the test accuracy is not higher than the training accuracy. When the test accuracy is always higher, the results preclude the need to understand out-of-distribution generalization.
Minor comments: On page five, when describing the size generalization setup in the following sentence: “In testing, we assume a procedure where a length-N random walk induced subgraph is sampled from the large subgraph”, it was not clear to me that the large subgraph is the length-M random-walk induced graph until later.
Clarity, Quality, Novelty And Reproducibility
I can attest to the clarity and novelty of the work. The paper is written without many assumptions of prior knowledge of GNN generalizability. The improved in-distribution bounds are an original contribution building on recent work in the literature. My major concern is the size of the contribution, which is detailed in the main review.
ICLR
Title In-distribution and Out-of-distribution Generalization for Graph Neural Networks Abstract Graph neural networks (GNNs) are models that allow learning with structured data of varying size. Despite their popularity, theoretical understanding of the generalization of GNNs is an under-explored topic. In this work, we expand the theoretical understanding of both in-distribution and out-of-distribution generalization of GNNs. Firstly, we improve upon the state-of-the-art PAC-Bayes (in-distribution) generalization bound primarily by reducing an exponential dependency on the node degree to a linear dependency. Secondly, utilizing tools from spectral graph theory, we prove some rigorous guarantees about the out-of-distribution (OOD) size generalization of GNNs, where graphs in the training set have different numbers of nodes and edges from those in the test set. To empirically verify our theoretical findings, we conduct experiments on both synthetic and real-world graph datasets. Our computed generalization gaps for the in-distribution case significantly improve the state-of-the-art PAC-Bayes results. For the OOD case, experiments on community classification tasks in large social networks show that GNNs achieve strong size generalization performance in cases guaranteed by our theory. 1 INTRODUCTION Graph neural networks (GNNs), firstly proposed in Scarselli et al. (2008), generalize artificial neural networks from processing fixed-size data to processing arbitrary graph-structured or relational data, which can vary in terms of the number of nodes, the number of edges, and so on. GNNs and their modern variants (Bronstein et al., 2017; Battaglia et al., 2018) have achieved state-of-the-art results in a wide range of application domains, including social networks (Hamilton et al., 2017), material sciences (Xie & Grossman, 2018), drug discovery (Wieder et al., 2020), autonomous driving (Liang et al., 2020), quantum chemistry (Gilmer et al., 2020), and particle physics (Shlomi et al., 2020). Despite their empirical successes, the theoretical understanding of GNNs are somewhat limited. Existing works largely focus on analyzing the expressiveness of GNNs. In particular, Xu et al. (2018) show that GNNs are as powerful as the Weisfeiler-Lehman (WL) graph isomorphism test (Weisfeiler & Leman, 1968) in distinguishing graphs. Chen et al. (2019) further demonstrate an equivalence between graph isomorphism testing and universal approximation of permutation-invariant functions. Loukas (2019) show that GNNs with certain conditions (e.g., on depth and width) are Turing universal. Chen et al. (2020) and Xu et al. (2020a) respectively examine whether GNNs can count substructures and perform algorithmic reasoning. In the vein of statistical learning theory, generalization analyses for GNNs have been developed to bound the gap between training and testing errors using VC-dimension (Vapnik & Chervonenkis, 1971), Rademacher complexity (Bartlett & Mendelson, 2002), algorithmic stability (Bousquet & Elisseeff, 2002), and PACBayes (McAllester, 2003) (a Bayesian extension of PAC learning (Valiant, 1984)). Depending on whether the problem setup is in-distribution (ID) or out-of-distribution (OOD), i.e., whether test data comes from the same distribution as training data, we categorize the literature into two groups. ID Generalization Bounds. Scarselli et al. 
(2018) provide a VC-dimension based generalization bound for GNNs whereas Verma & Zhang (2019) present the stability-based generalization analysis for singlelayer graph convolutional networks (GCNs) (Kipf & Welling, 2016). Both consider node classification and assume the node features are independent and identically-distributed (IID), which conflicts with the common relational learning setup (e.g., semi-supervised node classification) at which GNNs excel. Relying on the neural tangent kernel (NTK) approach (Jacot et al., 2018), Du et al. (2019) characterize the generalization bound of infinite-width GNNs on graph classification. Garg et al. (2020) derive the Rademacher complexity based bound for message passsing GNNs on graph classification. Lv (2021) establish results for GCNs on node classification using Rademacher complexity as well. Based on PAC-Bayes, Liao et al. (2020) obtain a tighter bound for both GCNs and message passsing GNNs on graph classification compared to (Garg et al., 2020; Scarselli et al., 2018). Subsequently, Ma et al. (2021) also leverage PAC-Bayes and show generalization guarantees of GNNs on subgroups of nodes for node classification. More recently, Li et al. (2022) study the effect of graph subsampling in the generalization of GCNs. OOD Generalization Yehudai et al. (2021) study size generalization for GNNs — this is a specific OOD setting where training and testing graphs differ in the number of nodes and edges. They show negative results that specific GNNs can perfectly fit training graphs but fails on OOD testing ones. Baranwal et al. (2021) consider specific graph generative models, i.e., the contextual stochastic block model (CSBM) (Deshpande et al., 2018), where CSBMs during training and testing are of the same means but different number of nodes, intra-, and inter-class edge probabilities. They present generalization guarantees for single-layer GCNs on binary node classification tasks. Later, Maskey et al. (2022) assume yet another class of graph generative models, i.e., graphons, where the kernel is shared across training and testing but the number of nodes and edges could vary. They obtain generalization bounds of message passing GNNs on graph classification and regression that depend on the Minkowski dimension of the node feature space. Relying on a connection of over-parameterized networks and neural tangent kernel, Xu et al. (2020b) find that taskspecific architecture/feature designs help GNNs extrapolate to OOD algorithmic tasks. Wu et al. (2022a) propose explore-to-extrapolate risk minimization framework, for which the solution is proven to provide an optimal OOD model under the invariance and heterogeneity assumptions. Yang et al. (2022) propose a two-stage model that both infers the latent environment and makes predictions to generalize to OOD data. Empirical studies suggest it works well on real-world molecule datasets. Wu et al. (2022b) study a new objective that can learn invariant and causal graph features that generalize well to OOD data empirically. All above works follow the spirit of invariant risk minimization (Arjovsky et al., 2019) and focus on designing new learning objectives. Instead, we provide generalization bound analysis from the traditional statistical learning theory perspective. Our Contributions. In this paper, we study both in-distribution and out-of-distribution generalization for GNNs. 
For in-distribution graph classification tasks, we significantly improve the previous state-of-the-art PAC-Bayes results in (Liao et al., 2020) by decreasing an exponential dependency on the maximum node degree to a linear dependency. For OOD node classification tasks, we do not assume any known graph generative models which is in sharp contrast to the existing work. We instead assume GNNs are trained and tested on subgraphs that are sampled via random walks from a single large underlying graph, as an efficient means to generate a connected subgraph. We identify interesting cases where a graph classification task is theoretically guaranteed to perform well at size generalization, and derive generalization bounds. We validate our theoretical results by conducting experiments on synthetic graphs, and also explore size generalization on a collection of real-world social network datasets. In the in-distribution case, we observe an improvement of several orders of magnitude in numerical calculations of the generalization bound. In the out-of-distribution case, we validate that, in cases where the theory guarantees that size generalization works well, the prediction accuracy on large subgraphs is always comparable to the accuracy on small subgraphs, and in many cases is actually better. (a) An example of a small expander graph. Any labelling of its nodes cannot exhibit homophily. (b) Example of a small barbell graph. If a labelling is exactly differentiated between the two groups, then it exhibits homophily. 2 BACKGROUND INFORMATION A graph G is an abstract mathematical model for pairwise relationships, with a set of vertices V and a set of edges E ⊆ V × V . Two vertices v1, v2 are said to be connected if (v1, v2) ∈ E. For a given graph G ∈ G we can also denote its vertices by V (G) and edges E(G). Unless otherwise specified, we assume graphs are undirected and without multi-edges. In machine learning, a graph (or graph-structured data) typically come with a set of node features. Common graph based machine learning tasks include node classification (or regression) and graph classification (or regression). We use the following notation. • Graph data {Gi = (Vi, Ei)}mi=1 ∈ G, where G is the set of all graphs. The neighborhood of a vertex v is denoted N (v) = {u ∈ V (Gi) : (v, u) ∈ E(Gi)}. • Node feature xv : V → X , with X being the feature space, e.g., X = Rdv . • Node labels y : V → Y , with Y being the set of labels, e.g., Y = [n]. Graph neural networks (GNNs). GNNs generalize regular neural networks to process data with varying structures and dependencies. GNNs achieve this flexibility via a message passing computational process. In particular, at the k-th step (or layer) of message passing, we update the representation h(k+1)u of node u as follows, h(k+1)u = UPDATE(h (k) u ,AGGREGATE({h(k)v |v ∈ N (u)})). (1) This update happens for all nodes in parallel within each message passing step. Moreover, the UPDATE and AGGREGATE operators are shared by all nodes, which enables the same GNN to process varyingsized graphs. Once we have finished the finite-step message passing process, we can use the output node representations to make predictions on nodes, edges, and the graph via additionally parameterized readout functions. This message passing framework is quite general since one can instantiate the UPDATE and AGGREGATE operators by different neural networks. 
For example, the widely used Graph Convolutional Networks (GCNs) (Kipf & Welling, 2016), which are the main interest of our work, have the form
$$h^{(k+1)}_u = \sigma\left(W_k \sum_{v \in \mathcal{N}(u) \cup \{u\}} \frac{h^{(k)}_v}{\sqrt{|\mathcal{N}(u)|}\sqrt{|\mathcal{N}(v)|}}\right), \quad (2)$$
where one applies a linear transformation ($W_k$) to all node representations, a weighted sum over the neighborhood, and an element-wise nonlinearity (e.g., ReLU activation). Note that the learnable weights $W_k$ are different from layer to layer.
Homophily. A concept studied in network science, homophily (McPherson et al., 2001) is the property that similar nodes group together. For node classification (or node labelling), this means that neighbouring nodes tend to have the same label. Size generalization is plausible when the labelling of the nodes exhibits homophily. The presence of a homophilic graph labelling implies that the labels of the nodes are unlikely to change during the course of a long random walk on the graph. It is important to note that homophily is also a concept that relates to the graph topology, as not every possible graph structure can be given a labelling that exhibits homophilic properties. An example of one such topology where homophily is impossible is an expander graph (Hoory et al., 2006), as shown in Figure 1a, where every node has random or random-like edges to a constant number of other nodes anywhere in the graph. In this case, any labelling of the nodes is far from homophilic, as can be shown using the expansion property. A setting with more homophily is akin to a barbell graph, as shown in Figure 1b, where there are two densely connected components and comparatively few edges connecting the two dense regions. If the graph labelling of interest lines up with these divisions inherent in the topology, then it is natural to say that it exhibits a homophilic property.
Cheeger's Inequality. A mathematical description of homophily can be given using concepts from spectral graph theory. Cheeger's inequality (Hoory et al., 2006) is a theorem that pertains to partitions of graphs, or equivalently binary-valued labellings on graphs (one side of the partition is labelled 0, the other 1). A crucial definition is the conductance, defined by
$$\phi(S) = \frac{|E(S, \bar S)|}{|S|} \quad \forall S \subseteq V \qquad \text{and} \qquad \phi(G) = \min_{|S| \le |V|/2} \phi(S).$$
Here $E(S, \bar S)$ is the set of edges connecting a node in S to a node outside of S. Cheeger's inequality states $\lambda_2/2 \le \phi(G) \le \sqrt{2\lambda_2}$, where $\lambda_2$ is the second-smallest eigenvalue of the normalized Laplacian¹ $\tilde L$. This inequality links the real-valued quantity $\lambda_2$ to the concept of homophily. If $\lambda_2$ is small then the conductance of G must also be low, by Cheeger's inequality. If a labelling on graph nodes $f : V(G) \to \{0, 1\}$ roughly agrees with a low-conductance partition (i.e., one side of the partition S is generally labelled 0 and the complement $\bar S$ is generally labelled 1), then the labelling f exhibits homophily.
3 IMPROVEMENT OF IN-DISTRIBUTION PAC-BAYES BOUND The state-of-the-art generalization bounds for GNNs in the in-distribution case were formulated by Liao et al. (2020) using PAC-Bayes theory. Specifically, they build upon the PAC-Bayes theorem in (Neyshabur et al., 2018) that pertains to homogeneous feedforward neural networks. We denote one sample as z = (X, A, y), where $X \in \mathcal{X}$, $A \in \mathcal{G}$, and $y \in \mathcal{Y}$ are the node features, the adjacency matrix, and the graph label respectively. Each sample is drawn from some unknown data distribution $\mathcal{D}$ (with support $\mathcal{X} \times \mathcal{G} \times \mathcal{Y}$) in an i.i.d. fashion.
Since both training and testing samples are drawn from the same distribution, this is the in-distribution setup. Following (Liao et al., 2020), we consider a margin loss for multi-class graph classification as below,
$$L_{\mathcal{D},\gamma} = L_{\mathcal{D},\gamma}(f_w) = \Pr_{z \sim \mathcal{D}}\Big(f_w(X, A)[y] \le \gamma + \max_{j \ne y} f_w(X, A)[j]\Big), \quad (3)$$
where γ > 0 is the margin parameter and $f_w$ is the model (hypothesis) parameterized by weights w. Since $\mathcal{D}$ is unknown, we cannot compute this true loss (risk). We instead minimize the empirical loss (risk) defined on the sampled training set S as below,
$$L_{S,\gamma} = L_{S,\gamma}(f_w) = \frac{1}{m} \sum_{z_i \in S} \mathbf{1}\Big(f_w(X_i, A_i)[y_i] \le \gamma + \max_{j \ne y_i} f_w(X_i, A_i)[j]\Big), \quad (4)$$
where m is the number of training samples. For simplicity, we abbreviate $L_{\mathcal{D},\gamma}(f_w)$ and $L_{S,\gamma}(f_w)$ as $L_{\mathcal{D},\gamma}$ and $L_{S,\gamma}$ respectively from now on. (¹Here $\tilde L = D^{-1/2}(D - A)D^{-1/2}$, where D is the diagonal matrix of vertex degrees and A is the adjacency matrix.) Our main in-distribution result bounds the gap between the true and empirical risks for GCNs, shown in the following theorem. The proof is in Appendix A.1.
Theorem 3.1. For any B > 0, l > 1, let $f_w \in \mathcal{H} : \mathcal{X} \times \mathcal{G} \to \mathbb{R}^k$ be an l-layer GCN. Then with probability ≥ 1 − δ over the choice of an i.i.d. size-m training set S from the data distribution $\mathcal{D}$, we have for any w:
$$L_{\mathcal{D},0} \le L_{S,\gamma} + \mathcal{O}\left(\sqrt{\frac{B^2\, d\, l^2 (h + \ln l) \prod_{i=1}^{l} \|W_i\|_2^2 \sum_{i=1}^{l} \big(\|W_i\|_F^2 / \|W_i\|_2^2\big) + \ln \frac{m}{\delta}}{\gamma^2 m}}\right). \quad (5)$$
Here d equals one plus the maximum node degree that can be achieved by the data distribution, l is the depth (i.e., the number of layers) of the GCN, $W_i$ is the weight matrix of the GCN in the i-th layer, and B is the radius of the minimal $\ell_2$ ball that contains all node features, i.e., $\forall v, \|x_v\|_2 \le B$. This improves the bound in (Liao et al., 2020), which is provided below for a better comparison,
$$L_{\mathcal{D},0} \le L_{S,\gamma} + \mathcal{O}\left(\sqrt{\frac{B^2\, d^{\,l-1}\, l^2 h \log(lh) \prod_{i=1}^{l} \|W_i\|_2^2 \sum_{i=1}^{l} \big(\|W_i\|_F^2 / \|W_i\|_2^2\big) + \log \frac{ml}{\delta}}{\gamma^2 m}}\right). \quad (6)$$
The proof of the theorem from (Liao et al., 2020) is an induction over the l layers, in which the spectral norm of the weights and a maximum-degree term are multiplied in at each step. We observe that it is possible to avoid passing the maximum-degree term via a refined argument. This leads to a tightening of one of the main inequalities used in the induction proof, which in turn results in substantial improvements to the overall bound. As can be seen above, we reduce the exponential term $d^{\,l-1}$ to a linear term d, which is a significant improvement even for graphs with small node degrees.
4 TOWARDS DEVELOPING A THEORY FOR SIZE GENERALIZATION In this section, we develop an out-of-distribution (OOD) generalization theory for GNNs. Since we adopt a statistical learning viewpoint, there must necessarily be some assumptions relating the training and testing graphs (otherwise the No-Free-Lunch theorem applies). There is a tradeoff between assumptions that are practically relevant and those for which rigorous guarantees are provable. We have chosen assumptions that we believe strike a balance between those objectives, at least for applications like social networks.
Size Generalization Assumptions. We consider the following setup. First, we assume that there exists an extremely large graph G, like the user network in Twitter, so that one needs to sample subgraphs (e.g., via random walks) for training and testing machine learning models. This is akin to the practical setups of (Grover & Leskovec, 2016; Hamilton et al., 2017). To generate training and testing subgraphs, we run random walks of length N and M respectively on this single large graph, where M ≫ N, and collect the subgraphs induced by these walks.
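As an illustration of how the two bounds compare numerically, the following sketch evaluates the quantities under the square roots of Eq. (5) and Eq. (6) for a given set of weight matrices. The constants hidden by the O(·) notation are dropped and the weights are random placeholders rather than trained GCN parameters, so only the relative magnitudes are meaningful.

```python
import numpy as np

def bound_terms(Ws, B, d, h, gamma, m, delta):
    """Return the (unscaled) square-root terms of Eq. (5) ("ours") and Eq. (6) ("liao").

    Ws: list of per-layer weight matrices; B: feature-norm radius; d: 1 + max degree;
    h: max hidden width; gamma: margin; m: #training graphs; delta: failure probability.
    O(.) constants are omitted.
    """
    l = len(Ws)
    spec = np.array([np.linalg.norm(W, 2) for W in Ws])       # spectral norms
    frob = np.array([np.linalg.norm(W, 'fro') for W in Ws])   # Frobenius norms
    prod_spec2 = np.prod(spec ** 2)
    sum_ratio = np.sum(frob ** 2 / spec ** 2)
    ours = (B**2 * d * l**2 * (h + np.log(l)) * prod_spec2 * sum_ratio
            + np.log(m / delta)) / (gamma**2 * m)
    liao = (B**2 * d**(l - 1) * l**2 * h * np.log(l * h) * prod_spec2 * sum_ratio
            + np.log(m * l / delta)) / (gamma**2 * m)
    return np.sqrt(ours), np.sqrt(liao)

rng = np.random.default_rng(0)
Ws = [rng.normal(scale=0.1, size=(32, 32)) for _ in range(6)]   # 6-layer toy weights
ours, liao = bound_terms(Ws, B=1.0, d=11, h=32, gamma=0.1, m=1000, delta=0.01)
print(f"new bound term: {ours:.3e}   previous bound term: {liao:.3e}")
```

The d versus d^(l-1) factor dominates the comparison: even for this modest maximum degree, the previous bound is larger by several orders of magnitude, consistent with the numerical results reported in Section 5.1.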
GNNs are then trained on the subgraphs induced by the shorter (length-N) walks. In testing, we assume a procedure where a subgraph induced by a length-M random walk is sampled from the large underlying graph. Random walks are initiated by choosing an initial node uniformly at random from all the nodes in the graph, and at each step there is an equal probability of selecting any of the current node's neighbors. This is an interesting OOD problem where training and testing graphs come from different distributions, determined by the underlying large graph and the random walk sampling with a specific length. We consider the graph classification problem and assume that the graph label is determined by the majority of node labels within the graph, which is reasonable for many applications that involve homophilic graphs. For the node labeling, we assume it is binary but make no assumptions on how labels are generated. Crucially, we assume nothing about the underlying large graph. Therefore, our setup has advantages over some OOD setups in the literature where a generative model of graphs and labels is explicitly assumed.
Relation with In-Distribution Result. We know the relationship between the true error defined on the unknown data distribution $\mathcal{D}$ and the empirical error defined on the size-m training set S. Specifically, for any GCN f, with probability at least 1 − δ, we have a general bound as follows,
$$L_{\mathcal{D},0} \le L_{S,\gamma} + A(f, \delta, m), \quad (7)$$
where we abbreviate the bound as A(f, δ, m) and omit specific parameters like the maximum node degree d. In the size generalization problem, we use random walks with lengths N and M for collecting training and testing subgraphs (data) respectively. We are interested in proving a statement of the following form: for any GCN f, we have with probability at least 1 − δ,
$$L_{\mathcal{D}_M,0} \le L_{S_N,\gamma} + B(f, \delta, m, M, N). \quad (8)$$
The key detail is that $\mathcal{D}_M$ is the distribution of subgraphs induced by random walks with length M and $S_N$ is the training set of subgraphs induced by random walks with length N. Comparing these two losses is the essence of our OOD result. The final term B(f, δ, m, M, N) is a general bound involving these parameters. Based on an in-distribution result like Theorem 3.1, we can similarly obtain
$$L_{\mathcal{D}_N,0} \le L_{S_N,\gamma} + A_N(f, \delta, m), \quad (9)$$
where $\mathcal{D}_N$ is the distribution of subgraphs induced by random walks with length N and $A_N$ is the general bound. The key question boils down to: what is the relationship between $L_{\mathcal{D}_N,0}$ and $L_{\mathcal{D}_M,0}$? This question will be answered in the following sections.
4.1 A PROBABILITY BOUND FOR PARTITION CROSSES The above size generalization problem involves the distributions of random-walk-induced subgraphs from a large graph G with two lengths: N for training and M for testing. Also, M is much larger than N. Before we state our results, we would like to explain the simple intuition that motivates our theory: if the random walk always stays within the same partition, then the graph label of the random-walk-induced subgraph can be well predicted, no matter how long the random walk is. Here a partition means the subset of nodes with the same node label. The goal of this section is to find bounds on M for which we can provide OOD guarantees. We begin by considering a special labelling.
Special Node Labeling: Sparsest Cut. A set S that minimizes ϕ(S) (and has |S| ≤ |V|/2) is called a sparsest cut. For simplicity assume that S is unique.
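Before stating the formal bounds, the crossing event at the heart of this intuition can also be estimated empirically. The following sketch (an illustration under toy assumptions, not part of the paper's results) simulates length-M random walks and counts how often they cross a given binary node labelling; the small barbell-like graph and its labels are placeholders.

```python
import numpy as np

def crossing_probability(neighbors, labels, M, trials=2000, seed=0):
    """Estimate Pr[a length-M random walk crosses the labelling at least once].

    neighbors: dict node -> list of neighbors; labels: dict node -> {0, 1}.
    The start node is uniform over vertices (the stationary distribution when
    the graph is d-regular).
    """
    rng = np.random.default_rng(seed)
    nodes = list(neighbors)
    crossed = 0
    for _ in range(trials):
        u = nodes[rng.integers(len(nodes))]
        for _ in range(M - 1):
            v = neighbors[u][rng.integers(len(neighbors[u]))]
            if labels[u] != labels[v]:
                crossed += 1
                break
            u = v
    return crossed / trials

# toy barbell-like graph: two triangles joined by a single edge, labelled by triangle
neighbors = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}
labels = {0: 0, 1: 0, 2: 0, 3: 1, 4: 1, 5: 1}
for M in (2, 5, 20):
    print(M, crossing_probability(neighbors, labels, M))
```

Longer walks cross the cut more often, which is exactly the effect that the bounds below quantify.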
Using Cheeger's inequality, we first prove the following probability bounds related to this sampling procedure, thereby identifying the length M for which a random walk is likely to stay within the sparsest cut for d-regular graphs. The theorems are as follows.
Theorem 4.1. Let $U_M = [u_1, u_2, \ldots, u_M]$ be a length-M random walk over a connected, d-regular graph G, with $u_1$ chosen from the stationary distribution of the nodes of G. If $M \le d/(2^{5/2}\sqrt{\lambda_2})$, then the probability that $U_M$ crosses the sparsest-cut partition at least once is under 1/2. Here, crossing the sparsest-cut partition S means that there exists an edge (u, v) of the random walk such that $u \in S$ and $v \in \bar S$. $\lambda_2$ is the second-smallest eigenvalue of the normalized Laplacian.
We can easily generalize the previous theorem to an arbitrary probability δ > 0 as below.
Corollary 4.1.1. If $M \le (\delta d)/(2^{3/2}\sqrt{\lambda_2})$, the probability of the above random walk $U_M$ crossing over the sparsest-cut partition at least once is at most δ.
General Node Labeling. Theorem 4.1 is restrictive in that it requires the partition S to be the sparsest cut. We now modify the proof to yield a quantity that can work for any node labelling. Specifically, let φ be any boolean (i.e., {0, 1}-valued) labelling on the vertices of the graph. Let the positive node labelling of φ be $S = \{v \in V(G) : \varphi(v) = 1\}$. We are interested in bounding the probability that a random walk of length M includes an edge that crosses the positive node labelling S, i.e., an edge (u, v) with $u \in S$ and $v \in \bar S$.
Theorem 4.2. Let φ be a boolean labelling on the nodes of a connected, d-regular graph G with positive node labelling S (a 0-1 valued vector with φ[i] = 1 if $v_i \in S$). Let $U_M = [u_1, u_2, \ldots, u_M]$ be a length-M random walk over G, with $u_1$ chosen from the stationary distribution of the nodes of G. Let $X_i$ be the indicator variable of the event that the i-th edge of $U_M$ crosses S, i.e., $X_i = \mathbf{1}[u_i \in S, u_{i+1} \in \bar S]$, and let $Y_k = \sum_{i=1}^{k} X_i$ be the number of times that $U_M$ crosses S in the first k steps. Let $\varphi' = \varphi - \mathbf{1}\,(|S|/|V|)$ and $\alpha = \varphi'^\top L \varphi' / \|\varphi'\|_2^2$. The conclusion is that: if
$$M \le \frac{d}{2^{5/2}\sqrt{\alpha}} \quad \text{then} \quad \Pr[Y_M \ge 1] \le \frac{1}{2}.$$
Corollary 4.2.1. If $M \le (\delta d)/(2^{3/2}\sqrt{\alpha})$, the probability that the above random walk $U_M$ crosses over the positive node labelling of φ at least once is at most δ, i.e., $\Pr[Y_M \ge 1] \le \delta$.
The formula for α arises from an alternative formulation of Cheeger's inequality which expresses $\lambda_2$ using a Rayleigh quotient (Spielman, 2015), in which y may be viewed as a real-valued labelling on the vertices:
$$\lambda_2 = \min_{y \perp d} \frac{y^\top L y}{y^\top D y}.$$
4.2 SIZE GENERALIZATION ERROR Recall that, in the size generalization setup, we first train a GNN model f on subgraphs induced by many length-N random walks on G. Then during testing, given a large testing subgraph $G_M$ induced by a length-M random walk on G, we sample a subgraph $G_N$ via a length-N random walk on $G_M$ and feed it to f to compute the empirical (classification) error for $G_M$. If all nodes of $G_M$ are within a single positive node labelling, then all of their labels are the same. Therefore, no matter which subgraph $G_N$ is sampled, the generalization error (i.e., the probability of making a wrong prediction) for $G_M$ should be the same as the one for $G_N$. Based on this reasoning, we have the following result.
Theorem 4.3 (Size Generalization Error).
For any δ ∈ [0, 1), if we restrict M, the size of the large random-walk-induced subgraph, such that $M \le (\delta d)/(2^{3/2}\sqrt{\alpha})$, then the generalization error $L_{\mathcal{D}_M,0}$, i.e., the probability of a wrong prediction on length-M-random-walk-induced subgraphs, satisfies
$$L_{\mathcal{D}_M,0} \le \delta + L_{\mathcal{D}_N,0}, \quad (10)$$
where $L_{\mathcal{D}_N,0}$ is the in-distribution generalization error of f on length-N-random-walk-induced subgraphs. Note that this theorem explicitly constrains M, whereas the only condition on N is that $L_{\mathcal{D}_N,0}$ is small.
Proof. Observe that, for any events F and E, we have $\Pr[F] \le \Pr[E] + \Pr[F \mid \bar E]$. Let E be the event that a length-M random walk crosses the positive node labelling of the ground-truth labels, and let F be the event that we make a wrong prediction on the induced subgraph $G_M$. Theorem 3.1 bounds the second term, $\Pr[F \mid \bar E]$, because the generalization error on $G_M$ is the same as the one on $G_N$ (subgraphs induced by length-N random walks) when $G_M$ does not cross the positive node labelling. Corollary 4.2.1 bounds the first term. Substituting the values from the previous two theorems yields the claimed inequality.
We already know the bound on the in-distribution generalization error $L_{\mathcal{D}_N,0}$ due to Theorem 3.1 — let us call this quantity δ̂. Using this we can obtain the final result for GCNs under our OOD setup. Theorem 4.3 simply states that, if the length $M \le (\delta d)/(2^{3/2}\sqrt{\alpha})$, then with probability at least 1 − δ̂, the OOD generalization error on large subgraphs (induced by length-M random walks) is at most the sum of the error δ and the in-distribution generalization bound on small subgraphs (induced by length-N random walks).
5 EXPERIMENTS
5.1 IN-DISTRIBUTION: NUMERICAL PAC-BAYES BOUND COMPUTATION We conduct multi-class graph classification experiments to compare our improved bound to the original PAC-Bayes bound in (Liao et al., 2020). We use the same GCN model, adopt the same datasets, i.e., 6 synthetic datasets obtained from random graph models and 3 real-world graph datasets used in (Yanardag & Vishwanathan, 2015), and follow the same experimental protocol. After training a GCN on each dataset, we compute the theoretical bounds using the final model. The numerical comparisons of the log bound values are shown in Figure 2. It is clear that our new bounds are significantly tighter and reduce the bound values by several orders of magnitude. The gap further increases as the depth increases. The tables of bound values and the specific equations used to compute them are provided in Appendix B.1.
5.2 OUT-OF-DISTRIBUTION: EFFICACY OF SIZE GENERALIZATION We performed OOD experiments on synthetic graphs to validate the upper bound on the size M of large subgraphs set in Theorem 4.1 and its related theorems. We also performed experiments on non-homophilic synthetic graphs with the same values of M and N, to examine size generalization in that case. We also examined the general feasibility of size generalization on real-world social network data. For synthetic graphs, we calculated this theoretical value for the upper bound, and selected the large subgraph size M and small subgraph size N ≪ M accordingly. For the real-world case, we chose constant values of N = 10 and M = 50. For each subgraph, we assign as its graph label the label observed most often among its nodes.
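For concreteness, a minimal sketch of that upper-bound calculation is given below, assuming a d-regular graph as in Corollary 4.2.1; the graph, labelling, and δ used here are toy placeholders, and on such small examples the certified M is deliberately conservative.

```python
import numpy as np

def max_walk_length(A, phi, delta):
    """Largest M allowed by Corollary 4.2.1: M <= delta * d / (2**1.5 * sqrt(alpha)).

    A: adjacency matrix of a d-regular graph; phi: 0/1 node labelling with
    S = {v : phi[v] = 1}; delta: target bound on the crossing probability.
    """
    deg = A.sum(axis=1)
    d = deg[0]                       # d-regularity is assumed, as in the theorem
    L = np.diag(deg) - A             # graph Laplacian
    phi = np.asarray(phi, dtype=float)
    phi_c = phi - phi.mean()         # phi' = phi - 1 * |S|/|V|
    alpha = (phi_c @ L @ phi_c) / (phi_c @ phi_c)
    return int(delta * d / (2 ** 1.5 * np.sqrt(alpha)))

# example: a 1000-node cycle (2-regular) labelled by its two halves; only two edges
# cross the labelling, so alpha is small, but the worst-case bound stays conservative
n = 1000
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0
phi = [1] * (n // 2) + [0] * (n // 2)
print(max_walk_length(A, phi, delta=0.5))
```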
After sampling datasets of subgraphs of sizes M and N, we train GCN models on the dataset of length-N random walks and measure their performance on the training set, the validation set (a smaller dataset generated the same way as the training set), and the testing set (a set of subgraphs induced by length-M random walks). On the test set we record both the performance when inputting the whole large subgraph (Test error) and the performance under the sampling procedure used for Theorem 4.3, in which we sample an induced subgraph from a length-N random walk for each data item (Sampling-test error).
Synthetic Graphs. We adopt CSBMs (Deshpande et al., 2018) to generate graphs that exhibit the homophily property. We use two blocks with a much higher probability of connections inside the same block than between blocks, which leads to barbell-like graphs. In the non-homophilic case, we set these probabilities to be equal. We generate binary node labellings via the sparsest cut. CSBMs generate node features via a Gaussian mixture where the individual choices of the component are determined by the node label.
Real-world Graphs. We used social network data for Twitch streamers from (Rozemberczki et al., 2019). Each node is a streamer (Twitch user), and nodes are connected by mutual friendships. Node features are 3,169 different binary indicators of a wide array of attributes, including games liked, location, etc. Each node is labelled with a boolean value indicating whether the livestreamer has indicated that they use explicit language.
In all cases, the GCN model achieves OOD test accuracy on large subgraphs that is comparable to, if not better than, the ID accuracy on small subgraphs. This is even the case when some of the constraints are violated: no d-regularity constraint was imposed for any of the datasets, and performance was still good for the test error, which did not involve further subgraph sampling. This indicates that the theory is promising in practice for more general forms of size generalization. The accuracy on the train set, the test set with subgraph sampling, and the unaltered test set are shown in Figure 2, and the numerical values are in Appendix B.2. For many cases, including all real-world cases, the test accuracy was actually higher than the training accuracy. This could potentially indicate that, in the cases where size generalization can be guaranteed to work well, the GCN model benefits significantly from extra node information. It is also possible that, because of the sampling procedure, there is overlap in nodes between the training and test sets, since they come from random-walk sampling procedures that naively select a uniformly random node as the initial node.
6 DISCUSSION In this work we have expanded the theoretical understanding of the generalization of GNNs in both in-distribution and out-of-distribution settings, deriving new theoretical guarantees in each setting. The results for in-distribution learning improve upon the state-of-the-art PAC-Bayes bounds in (Liao et al., 2020), and the results for out-of-distribution learning provide insight into a practical learning setting under which GNNs are guaranteed to perform effective size generalization. Future directions for the in-distribution understanding would involve lowering the dependencies on other variables, such as the spectral norms of the weights. Generalizing the results to other problems like node classification would also be interesting.
In the out-of-distribution case, a number of observations from our experiments indicate that the theory can still be expanded considerably. We have identified real-world cases where the subgraph sizes lie well beyond the bounds set forth in the theory, and in all experiments the d-regularity assumption is violated, yet GCN size generalization is still effective in these cases. Expansions of the theory, including generalizations to non-d-regular graphs, can be explored to explain cases like these.
A MATHEMATICAL PROOFS
A.1 PROOF OF THEOREM 3.1 The proof is as follows and makes up the remainder of this section.
A.1.1 IMPROVEMENT ON DEGREE DEPENDENCY In (Liao et al., 2020), a generalization bound is attained for graph convolutional networks; this bound depends on a bound on the maximum perturbation of the function value when a perturbation U is applied to the weights W, presented in that paper's Lemma 3.1. The bound is as follows:
$$|f_{w+u}(X, A) - f_w(X, A)|_2 \le e\, B\, d^{\frac{l-1}{2}} \left(\prod_{i=1}^{l} \|W_i\|_2\right) \sum_{k=1}^{l} \frac{\|U_k\|_2}{\|W_k\|_2}. \quad (11)$$
The primary goal of this set of improvements is to reduce the factor of $d^{\frac{l-1}{2}}$. For each layer, let $H_i \in \mathbb{R}^{|V| \times h}$ be the matrix containing the hidden embeddings of all of the nodes in its rows, with h being the hidden dimension. In the process of the proof of Theorem 3.1, we are able to show the following:
$$\Phi_j = \max_i |H_j[i, :]|_2 \le d^{\frac{j}{2}}\, B \prod_{i=1}^{j} \|W_i\|_2, \quad (12)$$
$$\Psi_j = \max_i |H'_j[i, :] - H_j[i, :]|_2 \le B\, d^{\frac{j}{2}} \left(\prod_{i=1}^{j} \|W_i\|_2\right) \sum_{k=1}^{j} \frac{\|U_k\|_2}{\|W_k\|_2} \left(1 + \frac{1}{l}\right)^{j-k}, \quad (13)$$
$$|\Delta_l|_2 = \left|\tfrac{1}{n}\mathbf{1}_n H'_{l-1}(W_l + U_l) - \tfrac{1}{n}\mathbf{1}_n H_{l-1} W_l\right|_2 \le e\, B\, d^{\frac{l-1}{2}} \left(\prod_{i=1}^{l} \|W_i\|_2\right) \left[\sum_{k=1}^{l} \frac{\|U_k\|_2}{\|W_k\|_2}\right].$$
We begin to simplify these bounds by removing the dependency on $d^{\frac{j}{2}}$, replacing it instead with a fixed power of $d^{1/2}$ that remains constant for every layer, and thus in the final result of Equation 11 as well.
Theorem A.1. For all 1 ≤ j ≤ l − 1, we have:
$$\Phi_j \le \sqrt{d}\, B \prod_{i=1}^{j} \|W_i\|_2, \quad (14)$$
$$\Psi_j \le \left(1 + \left(1 + \frac{1}{l}\right)^{j}\right) B \sqrt{d} \left(\prod_{i=1}^{j} \|W_i\|_2\right) \sum_{k=1}^{j} \frac{\|U_k\|_2}{\|W_k\|_2}. \quad (15)$$
Finally,
$$|f_{w+u}(X, A) - f_w(X, A)|_2 = |\Delta_l|_2 \le \left(e + 1 + \frac{2}{l}\right) B \sqrt{d} \left(\prod_{i=1}^{l} \|W_i\|_2\right) \sum_{k=1}^{l} \frac{\|U_k\|_2}{\|W_k\|_2}. \quad (16)$$
The proof follows from a lemma about the 2-norm of any node representation at any layer.
Lemma A.1.1. We have, for all u ∈ [n] and for j ∈ [l]:
$$|H_j[u, :]|_2 \le B \sqrt{\deg(u)} \left(\prod_{i=1}^{j} \|W_i\|_2\right). \quad (17)$$
Proof. We prove this by induction. By definition $|H_0[u, :]|_2 \le B$ and thus $|H_0[u, :]|_2 \le \sqrt{\deg(u)}\, B \prod_{k=1}^{0} \|W_k\|_2$. We assume that for all u we have $|H_{j-1}[u, :]|_2 \le \sqrt{\deg(u)}\, B \prod_{k=1}^{j-1} \|W_k\|_2$. From these statements we are able to deduce
$$|H_j[u, :]|_2 \le \sum_{v \in \mathcal{N}_u} \tilde L[u, v]\, |H_{j-1}[v, :]|_2\, \|W_j\|_2 \le \sum_{v \in \mathcal{N}_u} \frac{1}{\sqrt{\deg(u)\deg(v)}} \left[\sqrt{\deg(v)}\, B \prod_{k=1}^{j-1} \|W_k\|_2\right] \|W_j\|_2 = \sum_{v \in \mathcal{N}_u} \frac{B}{\sqrt{\deg(u)}} \left(\prod_{k=1}^{j-1} \|W_k\|_2\right) \|W_j\|_2 = \frac{\deg(u)}{\sqrt{\deg(u)}}\, B \prod_{k=1}^{j} \|W_k\|_2 = \sqrt{\deg(u)}\, B \prod_{k=1}^{j} \|W_k\|_2. \quad (18)$$
In these inequalities we use the fact that $\tilde L[i, j] = (A + I)_{ij} / \sqrt{\deg(i)\deg(j)}$, and we assume the simple case of unweighted edges, so that $(A + I)_{ij}$ is 1 if and only if nodes i and j are connected and 0 otherwise. By Lemma A.1.1, we have that $\Phi_j = \max_i |H_j[i, :]|_2 \le \sqrt{d}\, B \prod_{i=1}^{j} \|W_i\|_2$, which is exactly the result of Equation (14).
Claim A.1. For all v ∈ [n],
$$|\Delta_j[v, :]|_2 \le B \sqrt{\deg(v)} \left(1 + \frac{1}{l}\right)^{j} \left(\prod_{i=1}^{j} \|W_i\|_2\right) \left(\sum_{i=1}^{j} \frac{\|U_i\|_2}{\|W_i\|_2}\right).$$
Proof. We use induction, assuming the claim holds for $\Delta_{j-1}$.
We then have |∆j [v, :]|2 ≤ ∑ u∈N (v) L̃[v, u]|H ′j−1[u, :]−Hj−1[u, :]|2∥Wj + Uj∥2 + ∑ u∈N (v) L̃[v, u]|Hj−1[u, :]|2∥Uj∥2 ≤ [ B ( 1 + 1 l )j−1(j−1∏ i=1 ∥Wi∥ )( j−1∑ i=1 ∥Ui∥2 ∥Wi∥2 ) ∥Wj + Uj∥+B∥Uj∥ j−1∏ i=1 ∥Wi∥ ] (19) ∑ u∈N (v) L̃[v, u] √ deg(u) = B √ deg(v) j−1∏ i=1 ∥Wi∥ [ ∥Wj + Uj∥ ( 1 + 1 l )j−1(j−1∑ i=1 ∥Ui∥2 ∥Wi∥2 ) + ∥Uj∥ ] = B √ deg(v) j∏ i=1 ∥Wi∥ [ ∥Wj + Uj∥2 ∥Wj∥2 ( 1 + 1 l )j−1(j−1∑ i=1 ∥Ui∥2 ∥Wi∥2 ) + ∥Uj∥2 ∥Wj∥2 ] ≤ B √ deg(v) j∏ i=1 ∥Wi∥ [( 1 + 1 l )j (j−1∑ i=1 ∥Ui∥2 ∥Wi∥2 ) + ∥Uj∥2 ∥Wj∥2 ] ≤ B √ deg(v) j∏ i=1 ∥Wi∥ ( 1 + 1 l )j ( j∑ i=1 ∥Ui∥2 ∥Wi∥2 ) (20) ∆l has a slightly different formulation but it has a very similar bound: |∆l|2 = ∣∣∣∣ 1n1n ( L̃H ′l−1(Wl + Ul)− 1 n 1nL̃Hl−1(Wl) )∣∣∣∣ 2 = 1 n ∣∣∣1nL̃(H ′l−1 −Hl−1)(Wl + Ul) + 1nL̃Hl−1(Ul)∣∣∣ 2 ≤ 1 n n∑ i=1 |∆l−1[i, :]|2∥Wl + Ul∥2 + 1 n n∑ i=1 |Hl−1[i, :]|2∥Ul∥2 ≤ B √ d l−1∏ i=1 ∥Wi∥ ( 1 + 1 l )l−1( l−1∑ i=1 ∥Ui∥2 ∥Wi∥2 ) ∥Wl + Ul∥ +B √ d∥Ul∥2 l−1∏ i=1 ∥Wi∥2 ≤ B √ d l∏ i=1 ∥Wi∥ [( 1 + 1 l )l( l−1∑ i=1 ∥Ui∥ ∥Wi∥ ) + ∥Ul∥ ∥Wl∥ ] ≤ B √ d l∏ i=1 ∥Wi∥ ( 1 + 1 l )l( l∑ i=1 ∥Ui∥ ∥Wi∥ ) ≤ eB √ d l∏ i=1 ∥Wi∥ ( l∑ i=1 ∥Ui∥ ∥Wi∥ ) (21) From this we have proven a tighter bound on the final output of the GNN under perturbation, which we will use to calculate probabilistic and generalization bounds. A.1.2 IMPROVEMENT ON PROBABILISTIC BOUNDS USING RANDOM MATRIX THEORY In (Liao et al., 2020), for all i ∈ [l], with l being the number of layers, the prior and the distribution of the perturbations Ui ∈ Rdi+1×di ,, where all hidden dimensions di are upper-bounded by a value h, were generated by a normal distribution N (0, σ2I), and give probabilistic bounds on the operator norms ∥Ui∥ as P (∀i, ∥Ui∥ ≤ t) with probability greater than 1 − 2lh exp−t2/2hσ2. We improve these bounds using theorems on random matrices from work on high-dimensional probability, namely (Vershynin, 2018). Theorem A.2 (Theorem 4.4.5 in (Vershynin, 2018)). Let A be a matrix in Rm×n, where the entries Aij are independent, mean-zero, sub-Gaussian random variables. Then, for all t > 0 we have ∥A∥ ≤ CK( √ m+ √ n+ t) with probability ≥ 1− exp(−t2), where K = maxi,j ∥Aij∥ψ2 and C is some constant. In the above theorem the norm ∥X∥ψ2 is defined as inf{t : E[exp(X2/t2)] ≤ 2}. In Example 2.5.8 in (V ershynin, 2018), it is shown that if X ∼ N (0, σ2) then it has ∥X∥ψ2 ≤ Cσ. Corollary A.2.1. If U ∈ Rm×n is a random matrix generated with the distribution N (0, σ2I) (i.e. all entries are independent and identically distributed Gaussian random variables), then we have ∥U∥ ≤ σ( √ m+ √ n+ t) with probability at least 1− 2 exp(−t2). With a change of variable, we are able to calculate the following: P (∀i.∥Ui∥2 ≤ t) ≥ 1− P (∃i, ∥Ui∥ > t) ≥ 1− l∑ i=1 P (∥Ui∥ > t) ≥ 1− 2l exp (( t Cσ − 2 √ h )2) And by setting the right-hand side to 1/2, we obtain: t = Cσ(2 √ h+ √ ln(4l)) Using the above equation combined with our bound we are able to get |fw+u(X,A)− fw(X,A)|2 ≤ eB √ dl ( l∏ i=1 ∥Wi∥2 ) l∑ k=1 ∥Uk∥2 ∥Wk∥2 = eB √ dβll l∑ k=1 ∥Uk∥2 β ≤ eB √ dβl−1l(σ(2 √ h+ √ ln(4l))) ≤ e2B √ dβ̃l−1(σ(2 √ h+ √ ln(4l))) ≤ γ 4 (22) Here β̃ is an estimated of β such that |β − β̃| ≤ β/l that can be generated a priori; we discuss this in a later subsection. We can set σ = γ 4e2Bβ̃ √ dC ( 2 √ h+ √ ln(4l) ) to satisfy the final inequality. 
From this we can calculate the KL-divergence between the posterior and the prior: KL(Q∥P ) = |w| 2 2 2σ2 = 16e4B2dl2β2(l−1) ( 2 √ h+ √ ln(4l) )2 2γ2 l∑ i=1 ∥Wi∥F ≤ O ( B2dβ2ll2(h+ ln(l)) γ2 l∑ i=1 ∥Wi∥2F β2 ) ≤ O ( B2dl2 (h+ ln(l)) ∏l i=1 ∥Wi∥2 γ2 l∑ i=1 ∥Wi∥2F ∥Wi∥2 ) (23) From this we are able to calculate the generalization bound and thus prove the theorem. LD,0 ≤ LS,γ +O √√√√B2dl2(h+ ln(l))∏li=1 ∥Wi∥22∑li=1 ∥Wi∥2F∥Wi∥22 + ln mδ γ2m (24) A.1.3 SELECTING PARAMETER β̃ The prior normal distribution’s variance parameter σ2 is dependent on β, but β cannot be used in its calculation because that information is only known after model training. Instead, we can select a parameter β̂ such that |β − β̂| ≤ 1l β and thus 1 eβ l−1 ≤ β̂l−1 ≤ eβl−1 (as per equation 33 in (Liao et al., 2020)). As in (Liao et al., 2020) we only have to consider values of β in the range ( γ 2B √ d )1/l ≤ β ≤ ( γ √ m 2B √ d )1/l as otherwise the generalization bound holds trivially because LD,0 ≤ 1 by definition. If we consider values of β̂ that cover this interval then by union bound we are still able to get a high probability; the covering C needs to have |C| = l2 (m 1 2l − 1). A.2 PROOFS OF OUT-OF-DISTRIBUTION PROBABILITY BOUNDS A.2.1 PROOF OF THEOREM 4.1 Proof. Because u1 is chosen from the stationary distribution (uniform over vertices, because G is connected and d-regular), then for all i ≥ 1 the distribution for ui, ui+1 follows the distribution Unif[E], where E is the edge set of the graph. Let S be the sparsest-cut partition of G. Let Xi be the indicator of the event that the vertex pair is in the set of edges crossing the partition, namely 1{(ui, ui+1) ∈ E(S, S̄)}. By linearity of expectation, this means that E[Xi] = |E(S, S̄)|/|E|. Furthermore, let Yk be the cumulative number of edges crossing the partition along the first k steps of the random walk. This is expressed nicely as Yk = ∑k i=1 Xi. Thus E[Yk] = k |E(S,S̄)| |E| . Applying Markov’s inequality, we get Pr[Yk ≥ tk|E(S, S̄)|/|E|] ≤ 1/t. Suppose we wish to examine under what conditions we can ensure that we do not cross over the partition at all in M steps, i.e. Pr[YM ≥ 1] ≤ 1/2. From the inequality above, we are able to get that Pr [ YM ≥ 2M |E(S, S̄)| |E| ] ≤ 1 2 just by setting k = M and t = 2. We then use the following basic fact: if we have an inequality of the form Pr[Z ≥ z] ≤ 12 , then Pr[Z ≥ z ′] ≤ 12 for any z ′ ≥ z. Let E(S) denote the set of edges connected to any vertex in S. Because |E(S)| ≤ |E|, then we have |E(S, S̄)|/|E| ≤ |E(S, S̄)|/|E(S)|. Furthermore, since we assume a connected graph, |E(S)| ≥ (d/2)|S|, and thus |E(S, S̄)|/|E(S)| ≤ |E(S, S̄)|/[(d/2)|S|]. 2 Thus using the fact above we can deduce Pr [ YM ≥ 2M |E(S, S̄)| (d/2)|S| ] ≤ 1 2 Note that |E(S, S̄)|/|S| is the conductance of the graph ϕ(G), because S was defined to be the sparsest-cut partition of G. Thus we can apply the fact again with Cheeger’s inequality to get Pr [ YM ≥ 2M(2/d) √ 2λ2 ] ≤ 1 2 And since we are interested in Pr[YM ≥ 1], we can thus set 2M √ 2λ2 ≤ 1 to get a necessary condition for M , from which we achieve M ≤ d 25/2 √ λ2 This completes the proof. 2It is important to note that this specific dependency of |E(S)| on d requires G to be a d-regular graph. If the theorem is to be expanded to more general cases, one may use the simple inequality |E(S)| ≥ |S|. A.2.2 PROOF OF THEOREM 4.2 Proof. 
The quantity φ′ is a transformation of φ that retains all the information contained in φ while being orthogonal to the all-ones vector 1, so that we can apply Cheeger's inequality. This orthogonalization is rather standard and can be found in (Spielman, 2015). Let s = |S|/|V(G)|. Note that s ∈ [0, 1], and without loss of generality we can assume that s ≤ 1/2. We observe that the v-th coordinate of the vector φ′ corresponds to the mapping
$$\varphi'(v) = \begin{cases} 1 - s & v \in S \\ -s & v \notin S \end{cases} \quad (25)$$
This ensures that φ′ is orthogonal to 1, as
$$\varphi'^\top \mathbf{1} = \sum_{i=1}^{n} \varphi'(v_i) = |S|\left(1 - \frac{|S|}{|V|}\right) + (|V| - |S|)\left(-\frac{|S|}{|V|}\right) = |S| - |V|\left(\frac{|S|}{|V|}\right) = 0.$$
We then note that $\|\varphi'\|_2^2 = \sum_{i=1}^{n} \varphi'(v_i)^2$ is equal to $s(1 - s)|V|$, and we can infer $|S|/2 \le \|\varphi'\|_2^2 \le |S|$; the first inequality holds since s ≤ 1/2. The number of edges $|E(S, \bar S)|$ crossing the labelling-partition is equal to $\varphi'^\top L \varphi'$, as
$$\varphi'^\top L \varphi' = \sum_{(u,v) \in E} \big((\varphi(u) - s) - (\varphi(v) - s)\big)^2 = |E(S, \bar S)|,$$
where L is the Laplacian matrix of G. Thus the quantity
$$2M \frac{|E(S, \bar S)|}{|E(S)|} \le 2M \frac{\varphi'^\top L \varphi'}{|E(S)|} \le 2M \frac{\varphi'^\top L \varphi'}{(d/2)|S|}.$$
We obtain the second inequality because we know $|E(S)| \ge (d/2)|S|$. Because we know that $|S| \ge \|\varphi'\|_2^2$, we can then upper bound this further by $2M\, \varphi'^\top L \varphi' / \big((d/2)\|\varphi'\|_2^2\big)$. Substituting this quantity in the proof of Theorem 4.1, we achieve the desired bound for M.
B EXPERIMENTAL METHODOLOGY AND RESULTS
B.1 IN-DISTRIBUTION EXPERIMENTS The datasets used are a combination of synthetic graphs (Erdos-Renyi and Stochastic Block Model), real-world graphs (IMDBBINARY and IMDBMULTI, with data from the Internet Movie Database, and COLLAB, a dataset of academic collaborations), and a bioinformatics dataset, PROTEINS, from (Yanardag & Vishwanathan, 2015). Two different GCN network depths of l = 4 and l = 6 were used. We use the following formulae for the generalization bound from (Liao et al., 2020) and for our new bound, using an explicit constant factor of 42 from (Liao et al., 2020):
$$\mathrm{GenGap}(B, d, l, \{W_i\}_{i=1}^{l}) = \sqrt{\frac{42 \cdot B^2\, d^{\,l-1}\, l^2 \ln(4lh) \prod_{i=1}^{l} \|W_i\|_2^2 \sum_{i=1}^{l} \frac{\|W_i\|_F^2}{\|W_i\|_2^2}}{\gamma^2 m}}. \quad (26)$$
Similarly, the formula used for the new PAC-Bayes generalization bound is
$$\mathrm{GenGap}(B, d, l, \{W_i\}_{i=1}^{l}) = \sqrt{\frac{42 \cdot B^2\, d\, l^2 (h + \ln(l)) \prod_{i=1}^{l} \|W_i\|_2^2 \sum_{i=1}^{l} \frac{\|W_i\|_F^2}{\|W_i\|_2^2}}{\gamma^2 m}}. \quad (27)$$
We remove an additive O(log m) term in the numerator within the square root after validating that it was numerically negligible. The tables below report the calculated bounds in the case of 4 layers (Table 1) and 6 layers (Table 2).
B.2 OUT-OF-DISTRIBUTION EXPERIMENTS
B.2.1 METHODOLOGY Experiments were performed to measure the effectiveness of size generalization of GCN models when applied to the size generalization learning case described in Section 4, where the learning task is classifying the most common node label in sub-communities of a large underlying network. For each of the synthetic graphs, we calculate an upper bound for M as set in the out-of-distribution inequalities we have derived. Since none of the examined graphs is d-regular, we calculate a value of α as $\frac{\varphi^\top L \varphi}{\varphi^\top D \varphi}$, where L is the graph Laplacian matrix and D is the diagonal degree matrix, to apply in the formula set in Theorem 4.2. Furthermore, we use a more permissive value of δ = 0.75. Similar upper bounds for M were computed for the real-world cases, but the values were too small for experimental use. In this case, we just set N = 10 and M = 50 to attempt to gain insight into the general feasibility of the size generalization task in real-world cases. All experiments were performed with the Adam optimizer (Kingma & Ba, 2015), with a constant learning rate of 0.01.
Models were trained for 10 epochs with randomly selected batches of size 32. The models used are different parameterizations of the Graph Convolutional Network as implemented by the library pytorch-geometric (Fey & Lenssen, 2019). For the synthetic experiments, which used smaller graphs with generally smaller degree, the parameterization was 3 layers with a hidden dimension of 5, and for the real-world data case, the parameterization was 10 layers with a hidden dimension of 32. For each underlying graph, we generate three train/validation sets (induced by length-N random walks) and test sets (induced by length-M random walks), and we record the loss and accuracy as the average over the three runs.
B.2.2 SYNTHETIC GRAPH EXPERIMENTS A large underlying synthetic graph was generated using the stochastic block model, with some adjustment to ensure that the randomly generated graph had a single connected component. By controlling the intra- and inter-block connection probability values, we are able to control the homophily of the generated graph, which we validate by measuring the value of λ_2, as well as by calculating the sparsest cut via "Cheeger rounding" (Spielman, 2015) and subsequently the conductance of the graph with respect to this partition. In the experiments, we generated a graph with approximately 2000 nodes, with the in-block connection probability set to 8/1000 and the inter-block connection probability set to 6/10^5. Node features are generated from a mixture of multivariate Gaussian distributions with dimension 3, mean (−0.5, −0.5, −0.5) for one block, and mean (0.5, 0.5, 0.5) for the other; the covariance matrix is a diagonal matrix (each coordinate is independent) with variance either 2, 4, or 8.
Experiments were also performed on non-homophilic synthetic graphs. Like the homophilic synthetic graphs, they are generated with the stochastic block model with about 2000 nodes, about 1000 of each label, and with the same mixture-of-Gaussians node features. However, the parameters used to generate connections are crucially different: the probabilities of connection between nodes of the same block and nodes of different blocks are set to be equal, with both set to 8/1000. These settings ensure that a node's label is independent of the labels of its neighbors, so the homophily property is not exhibited. In contrast with the results for the homophilic synthetic graphs, the non-homophilic graph results show that the out-of-distribution test accuracy is less than the training accuracy. This further illustrates the association between homophily and size generalization.
B.2.3 REAL-WORLD GRAPH EXPERIMENTS Since the node features are binary indicators, we encoded the node feature information using the positional encoding mechanism introduced in the Transformer model (Vaswani et al., 2017). For each node, each of its integer indicators was encoded via a positional embedding, and the embeddings were aggregated via summation.
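A minimal sketch of this encoding step is given below, assuming the standard sinusoidal positional encoding; the embedding dimension and the example indicator indices are illustrative assumptions rather than the exact configuration used in the experiments.

```python
import numpy as np

def sinusoidal_encoding(position, dim):
    """Standard Transformer-style positional encoding of a single integer index."""
    enc = np.zeros(dim)
    for i in range(0, dim, 2):
        angle = position / (10000 ** (i / dim))
        enc[i] = np.sin(angle)
        if i + 1 < dim:
            enc[i + 1] = np.cos(angle)
    return enc

def encode_node(active_indicators, dim=32):
    """Sum the positional encodings of a node's active (nonzero) indicator indices."""
    if len(active_indicators) == 0:
        return np.zeros(dim)
    return np.sum([sinusoidal_encoding(p, dim) for p in active_indicators], axis=0)

# e.g. a node whose binary feature vector is nonzero at indicator indices 3, 17, 1500
print(encode_node([3, 17, 1500])[:6])
```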
1. What is the focus of the paper regarding graph convolutional networks? 2. What are the strengths and weaknesses of the proposed approach, particularly in terms of its ability to handle out-of-distribution generalization? 3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? 4. Are there any suggestions or recommendations for improving the paper's analysis or extending its results beyond the GCN architecture?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper The paper studies in-distribution and out-of-distribution generalization with graph convolutional networks. In the former setting, the paper improves on an existing bound in the literature by reducing dependency on maximum node degree to a linear factor. In the latter setting, the paper builds on in-distribution results, and considers the case of homophilic (but arbitrary) graph structures. More specifically, the out-of-distribution setup considers a single very large graph from which smaller training graphs and larger testing graphs are sampled using random walks. Then, the paper considers the majority voting problem, in which the binary label of a graph is the binary label of the majority of its constituent nodes. Using a probabilistic argument, the paper then shows that, for homophilic graphs, the probability of conducting a random walk that crosses the label divide, i.e., from label 0 to label 1 or vice versa, in the original large input graph can be thresholded to a quantity delta with reasonable bounds on the test graph size M and training graph size N. This probability can then be considered jointly with the probability error in the in-distribution theorem to yield an overall error probability for this out-of-distribution setting. Finally, the paper conducts an empirical analysis on a variety of synthetic and real-world datasets, showing performance levels consistent with the theory, and even demonstrating strong performance when certain assumptions are violated (degree regularity for example). Strengths And Weaknesses Strengths: The improvement of the in-distribution bound is significant and meaningful. The proof of this theorem also appears to be sound The setting proposed for out-of-distribution analysis is interesting and avoids making assumptions on the inherent graph structure, and thus offers a novel approach to studying how graph convolutional networks can generalize to larger graphs. Weaknesses: Though the out-of-distribution setup is interesting and avoids assumptions on the structure of the train and test graphs, the assumptions it introduces instead appear to be limiting. In particular, the graph labelling objective seems restrictive and specialized. Typically, graph classification objectives go far beyond node statistics (majority of labels) and explore structural graph properties, which lies outside the scope of this analysis. It is also not at all clear to me how one can move beyond this limitation: The very essence of the probabilistic argument in the paper relies on aggregating over node classes, and thresholding the probability of transitioning between classes in a random walk: Without a node-based objective, such an analysis is not relevant. Therefore, I am afraid this approach cannot be used to develop a more general (objective-agnostic) framework for out-of-distribution generalization. I understand that assumptions inevitably must be made about a connection between the train and test sets (shared structure, or in this case similar walk outcomes with respect to the objective), however I feel that simplifying assumptions on the objective function will not lead to general insights on out-of-distribution learning, as claimed in the paper. I therefore suggest that the authors clarify the limitations and scope of their analysis more extensively. On a more minor note, it would be interesting to mention how/if the results extend beyond the GCN architecture. 
Currently, the result and proof require a GCN, but it would be useful to provide an intuition as to how such a framework can be applied to other models.
Clarity, Quality, Novelty And Reproducibility
Clarity: The intuitions provided in the main paper are clear and the arguments and intuitions are well-explained. In the appendix, the proof of Theorem 3.1 is a bit hard to follow. I regularly had to refer back to different parts of the proof to understand the respective quantities and variables. Moreover, some steps in the proof (namely, the inequality sequences) could be better explained.
Quality: The paper's contributions are meaningful and its arguments all appear sound and well-motivated.
Novelty: The results and perspective on out-of-distribution generalization are novel.
Reproducibility: N/A to proofs. Experimental results appear to be easily reproducible.
ICLR
Title Certified Robustness for Free in Differentially Private Federated Learning Abstract Federated learning (FL) provides an efficient training paradigm to jointly train a global model leveraging data from distributed users. As the local training data comes from different users who may not be trustworthy, several studies have shown that FL is vulnerable to poisoning attacks where adversaries add malicious data during training. On the other hand, to protect the privacy of users, FL is usually trained in a differentially private manner (DPFL). Given these properties of FL, in this paper, we aim to ask: Can we leverage the innate privacy property of DPFL to provide robustness certification against poisoning attacks? Can we further improve the privacy of FL to improve such certification? To this end, we first investigate both user-level and instance-level privacy for FL, and propose novel randomization mechanisms and analysis to achieve improved differential privacy. We then provide two robustness certification criteria: certified prediction and certified attack cost for DPFL on both levels. Theoretically, given different privacy properties of DPFL, we prove their certified robustness under a bounded number of adversarial users or instances. Empirically, we conduct extensive experiments to verify our theories under different attacks on a range of datasets. We show that the global model with a tighter privacy guarantee always provides stronger robustness certification in terms of the certified attack cost, while it may exhibit tradeoffs regarding the certified prediction. We believe our work will inspire future research of developing certifiably robust DPFL based on its inherent properties. 1 INTRODUCTION Federated Learning (FL), which aims to jointly train a global model with distributed local data, has been widely applied in different applications, such as finance (Yang et al., 2019b), medical analysis (Brisimi et al., 2018), and user behavior prediction (Hard et al., 2018; Yang et al., 2018; 2019a). However, the fact that the local data and the training process are entirely controlled by the local users who may be adversarial raises great concerns from both security and privacy perspectives. In particular, recent studies show that FL is vulnerable to different types of training-time attacks, such as model poisoning (Bhagoji et al., 2019), backdoor attacks (Bagdasaryan et al., 2020; Xie et al., 2019; Wang et al., 2020), and label-flipping attacks (Fung et al., 2020). Further, privacy concerns have motivated the need to keep the raw data on local devices without sharing. However, sharing other indirect information such as gradients or model updates as part of the FL training process can also leak sensitive user information (Zhu et al., 2019; Geiping et al., 2020; Bhowmick et al., 2018; Melis et al., 2019). As a result, approaches based on differential privacy (DP) (Dwork & Roth, 2014), homomorphic encryption (Bost et al., 2015; Rouhani et al., 2018; Gilad-Bachrach et al., 2016), and secure multiparty computation (Ben-Or et al., 1988; Bonawitz et al., 2017) have been proposed to protect privacy of users in federated learning. In particular, differentially private federated learning (DPFL) provides strong information theoretic guarantees on user privacy, while causing relatively low performance overhead (Li et al., 2020b). Several defenses have been proposed to defend against poisoning attacks in FL. 
For instance, various robust aggregation methods (Fung et al., 2020; Pillutla et al., 2019; Blanchard et al., 2017; El Mhamdi et al., 2018; Chen et al., 2017b; Yin et al., 2018; Fu et al., 2019; Li et al., 2020a) identify and down-weight the malicious updates during aggregation or estimate a true "center" of the received updates rather than taking a weighted average. Other methods include robust federated training protocols (e.g., clipping (Sun et al., 2019), noisy perturbation (Sun et al., 2019), and additional evaluation during training (Andreina et al., 2020)) and post-training strategies (e.g., fine-tuning and pruning (Wu et al., 2020)) that repair the poisoned global model. However, as these works mainly focus on providing empirical robustness for FL, they have been shown to be vulnerable to newly proposed strong adaptive attacks (Wang et al., 2020; Xie et al., 2019; Baruch et al., 2019; Fang et al., 2020). Hence, in this paper, we aim to develop certified robustness guarantees for FL against different poisoning attacks. Further, as differentially private federated learning (DPFL) is often used to protect user privacy, we also aim to ask: Can we leverage the innate privacy property of DPFL to provide robustness certification against poisoning attacks for free? Can we further improve the privacy of FL so as to improve its certified robustness? Recent studies suggest that differential privacy (DP) is inherently related to the robustness of ML models. Intuitively, DP is designed to protect the privacy of individual data, such that the output of an algorithm remains essentially unchanged when one individual input point is modified. Hence, the prediction of a DP model will be less impacted by a small amount of poisoned training data. Consequently, DP has been used to provide both theoretical and empirical defenses against evasion attacks (Lecuyer et al., 2019a) and data poisoning attacks (Ma et al., 2019; Hong et al., 2020) on centralized ML models. It has also been used as an empirical defense against backdoor attacks (Gu et al., 2019) in federated learning (Bagdasaryan et al., 2020; Sun et al., 2019), although no theoretical guarantee is provided. To the best of our knowledge, despite the wide application of DPFL, there is no work providing certified robustness for DPFL by leveraging its privacy property. In this paper, we aim to leverage the inherent privacy property of DPFL to provide robustness certification for FL against poisoning attacks for free. Our challenges include: (1) performing privacy analysis over training rounds in DPFL algorithms and (2) theoretically guaranteeing certified robustness based on DP properties under a given privacy budget. We propose two robustness certification criteria for FL: certified prediction and certified attack cost under different attack constraints. We consider both user-level DP (Agarwal et al., 2018; Geyer et al., 2017; McMahan et al., 2018; Asoodeh & Calmon, 2020; Liang et al., 2020), which is widely guaranteed in FL, and instance-level DP (Malekzadeh et al., 2021; Zhu et al., 2021), which is less explored in FL. We prove that an FL model satisfying user-level DP is certifiably robust against a bounded number of adversarial users. In addition, we propose the InsDP-FedAvg algorithm to improve instance-level DP in FL, and prove that instance-level DPFL is certifiably robust against a bounded number of adversarial instances. We also study the correlation between the privacy guarantee and the certified robustness of FL.
While stronger privacy guarantees result in greater attack cost, overly strong privacy can hurt the certified prediction by introducing too much noise in the training process. Thus, the optimal certified prediction is often achieved under a proper balance between privacy protection and utility loss.
Key Contributions. Our work takes the first step to provide certified robustness in DPFL for free against poisoning attacks. We make contributions on both theoretical and empirical fronts.
• We propose two criteria for certified robustness of FL against poisoning attacks (Section 4.2).
• Given an FL model satisfying user-level DP, we prove that it is certifiably robust against arbitrary poisoning attacks with a bounded number of adversarial users (Section 4.2).
• We propose the InsDP-FedAvg algorithm to improve the FL instance-level privacy guarantee (Section 5.1). We prove that instance-level DPFL is certifiably robust against the manipulation of a bounded number of instances during training (Section 5.2).
• We conduct extensive experiments on image classification on MNIST and CIFAR-10 and sentiment analysis of tweets to verify our proposed certifications of the two robustness criteria, and compare the certified results of different DPFL algorithms (Section 6).
2 RELATED WORK Differentially Private Federated Learning. Different approaches have been proposed to guarantee user-level privacy for FL. (Geyer et al., 2017; McMahan et al., 2018) clip the norm of each local update, add Gaussian noise on the summed update, and characterize the privacy budget via the moment accountant (Abadi et al., 2016). (McMahan et al., 2018) extends (Geyer et al., 2017) to language models. In CpSGD (Agarwal et al., 2018), each user clips and quantizes the model update, and adds noise drawn from a Binomial distribution, achieving both communication efficiency and DP. (Bhowmick et al., 2018) derive DP for FL via Rényi divergence (Mironov, 2017) and study its protection against data reconstruction attacks. (Liang et al., 2020) utilizes Laplacian smoothing for each local update to enhance the model utility. Instead of using the moment accountant to track the privacy budget over FL rounds as in previous work, (Asoodeh & Calmon, 2020) derive the DP parameters by interpreting each round as a Markov kernel and quantifying its impact on the privacy parameters. All these works only focus on providing user-level privacy, leaving the robustness property unexplored. In terms of instance-level privacy for FL, there are only a few works (Malekzadeh et al., 2021; Zhu et al., 2021). Dopamine (Malekzadeh et al., 2021) provides an instance-level privacy guarantee when each user only performs one step of DP-SGD (Abadi et al., 2016) at each FL round. However, it cannot be applied to multi-step SGD for each user, and thus cannot be extended to the general FL setting FedAvg (McMahan et al., 2017). (Zhu et al., 2021) privately aggregate the labels from users in a voting scheme, and provide DP guarantees on both the user level and the instance level. However, it is also not applicable to standard FL, since it does not allow aggregating gradients or updates.
Differential Privacy and Robustness. In standard (centralized) learning, Pixel-DP (Lecuyer et al., 2019a) is proposed to certify model robustness against evasion attacks. However, it is unclear how to leverage it to certify against poisoning attacks.
To certify the robustness against poisoning attacks, (Ma et al., 2019) show that private learners are resistant to data poisoning and analyze the lower bound of the attack cost against poisoning attacks for regression models. Here we certify the robustness in the DPFL setting with such a lower bound as one of our certification criteria, and additionally derive its upper bounds. (Hong et al., 2020) show that the off-the-shelf mechanism DP-SGD (Abadi et al., 2016), which clips per-sample gradients and adds Gaussian noise during training, can serve as an empirical defense against poisoning attacks. In federated learning, empirical works (Bagdasaryan et al., 2020; Sun et al., 2019) show that DPFL can mitigate backdoor attacks; however, none of these works provides certified robustness guarantees for DPFL against poisoning attacks.
3 PRELIMINARIES We start by providing some background on differential privacy (DP) and federated learning (FL).
Differential Privacy (DP). DP is a formal, mathematically rigorous definition (and standard) of privacy that intuitively guarantees that a randomized algorithm behaves similarly on similar inputs and that the output of the algorithm is about the same whether or not an individual's data is included as part of the input (Dwork & Roth, 2014).
Definition 1 ((ε, δ)-DP (Dwork et al., 2006)). A randomized mechanism M : D → Θ with domain D and range Θ satisfies (ε, δ)-DP if for any pair of adjacent datasets d, d′ ∈ D and for any possible (measurable) output set E ⊆ Θ, it holds that Pr[M(d) ∈ E] ≤ e^ε Pr[M(d′) ∈ E] + δ.
In Definition 1, when M is a training algorithm for an ML model, the domain D and range Θ represent all possible training datasets and all possible trained models respectively. Group DP for (ε, δ)-DP mechanisms follows immediately from Definition 1, where the privacy guarantee drops with the size of the group. Formally, it says:
Lemma 1 (Group DP). For a mechanism M that satisfies (ε, δ)-DP, it satisfies (kε, (1 − e^{kε})/(1 − e^ε) · δ)-DP for groups of size k. That is, for any d, d′ ∈ D that differ by k individuals, and any E ⊆ Θ, it holds that Pr[M(d) ∈ E] ≤ e^{kε} Pr[M(d′) ∈ E] + (1 − e^{kε})/(1 − e^ε) · δ.
Federated Learning. FedAvg was introduced by (McMahan et al., 2017) for FL to train a shared global model without direct access to the training data of users. Specifically, given an FL system with N users, at round t, the server sends the current global model w_{t−1} to the users in the selected user set U_t, where |U_t| = m = qN and q is the user sampling probability. Each selected user i ∈ U_t locally updates the model for E local epochs with its dataset D_i and learning rate η to obtain a new local model. Then, the user sends the local model update Δw_t^i to the server. Finally, the server aggregates the updates from all selected users into the new global model $w_t = w_{t-1} + \frac{1}{m}\sum_{i \in U_t} \Delta w_t^i$.
4 USER-LEVEL PRIVACY AND CERTIFIED ROBUSTNESS FOR FL
4.1 USER-LEVEL PRIVACY AND BACKGROUND Definition 1 leaves the definition of adjacent datasets flexible, which depends on the application. To protect user-level privacy, adjacent datasets are defined as those differing by the data of one user (McMahan et al., 2018). The formal definition of user-level (ε, δ)-DP (Definition 2) is deferred to Appendix A.1. Following standard DPFL (Geyer et al., 2017; McMahan et al., 2018), we introduce one of the standard user-level DPFL algorithms, UserDP-FedAvg (Algorithm 1 in Appendix A.1). At each round, the server first clips the update from each user with a threshold S such that its ℓ2-sensitivity is upper bounded by S.
Next, the server sums up the updates, adds Gaussian noise sampled from $\mathcal{N}(0, \sigma^2 S^2)$, and takes the average, i.e., $w_t \leftarrow w_{t-1} + \frac{1}{m}\big(\sum_{i \in U_t} \mathrm{Clip}(\Delta w_t^i, S) + \mathcal{N}(0, \sigma^2 S^2)\big)$. Given the user sampling probability q, noise level σ, number of FL rounds T, and a δ > 0, the privacy analysis of UserDP-FedAvg satisfying (ε, δ)-DP is given by Proposition 1 in Appendix A.1, which is a generalization of (Abadi et al., 2016). The aim of Proposition 1 is to analyze the privacy budget in FL, which accumulates as T increases due to the continuous access to training data. Following (Geyer et al., 2017; McMahan et al., 2018), the moment accountant (Abadi et al., 2016) is used in the privacy analysis.
4.2 CERTIFIED ROBUSTNESS OF USER-LEVEL DPFL AGAINST POISONING ATTACKS
Threat Model. We consider poisoning attacks against FL, where k adversarial users have poisoned instances in their local datasets, aiming to fool the trained DPFL global model. Such attacks include backdoor attacks (Gu et al., 2019; Chen et al., 2017a) and label flipping attacks (Biggio et al., 2012; Huang et al., 2011). The detailed description of these attacks is deferred to Appendix A.2. Note that our robustness certification is attack-agnostic under certain attack constraints (e.g., k), and we will verify our certification bounds with different poisoning attacks in Section 6. Next, we propose two criteria for the robustness certification in FL: certified prediction and certified attack cost.
Certified Prediction. Consider the classification task with C classes. We define the classification scoring function $f : (\Theta, \mathbb{R}^d) \to \Upsilon^C$, which maps model parameters θ ∈ Θ and an input x ∈ R^d to a confidence vector f(θ, x), where $f_c(\theta, x) \in [0, 1]$ represents the confidence of class c. We mainly focus on the confidence after normalization, i.e., $f(\theta, x) \in \Upsilon^C = \{p \in \mathbb{R}^C_{\ge 0} : \|p\|_1 = 1\}$, the probability simplex. Since the DP mechanism M is randomized and produces a stochastic FL global model θ = M(D), it is natural to resort to a probabilistic expression as a bridge for quantitative robustness certification. Following the convention in (Lecuyer et al., 2019b; Ma et al., 2019), we use the expectation of the model's prediction to provide a quantitative guarantee on the robustness of M. Specifically, we define the expected scoring function $F : (\Theta, \mathbb{R}^d) \to \Upsilon^C$, where $F_c(M(D), x) = \mathbb{E}[f_c(M(D), x)]$ is the expected confidence for class c. The expectation is taken over the DP training randomness, e.g., random Gaussian noise and random user subsampling. The corresponding prediction $H : (\Theta, \mathbb{R}^d) \to [C]$ is defined by $H(M(D), x) := \arg\max_{c \in [C]} F_c(M(D), x)$, which is the top-1 class based on the expected prediction confidence. We will prove that such a prediction allows robustness certification against poisoning attacks. Following our threat model above and the DPFL training in Algorithm 1, we denote the trained global model exposed to poisoning attacks by M(D′). When k = 1, D and D′ are user-level adjacent datasets according to Definition 2. Given that mechanism M satisfies user-level (ε, δ)-DP, based on the innate DP property, the distribution of the stochastic model M(D′) is "close" to the distribution of M(D). Moreover, according to the post-processing property of DP, during testing, given a test sample x, we would expect the values of the expected confidence for each class c, i.e., F_c(M(D′), x) and F_c(M(D), x), to be close, and hence the returned most likely class to be the same, i.e., H(M(D), x) = H(M(D′), x), indicating robust prediction against poisoning attacks.
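In practice, the expected scoring function F and the prediction H can be estimated by Monte Carlo over the DP training randomness, e.g., by retraining the model several times and averaging the softmax confidences. The sketch below illustrates this; train_userdp_fedavg and model_confidence are hypothetical placeholders for the DPFL training routine (Algorithm 1) and the model's confidence output, not functions defined in the paper.

```python
import numpy as np

def expected_prediction(train_userdp_fedavg, model_confidence, dataset, x, runs=20):
    """Monte Carlo estimate of F_c(M(D), x) and H(M(D), x).

    train_userdp_fedavg(dataset, seed) -> model parameters theta   (hypothetical)
    model_confidence(theta, x)         -> length-C softmax vector  (hypothetical)
    The randomness across runs covers the Gaussian noise and user subsampling
    of DP training.
    """
    confidences = []
    for seed in range(runs):
        theta = train_userdp_fedavg(dataset, seed=seed)
        confidences.append(model_confidence(theta, x))
    F = np.mean(confidences, axis=0)     # estimated expected confidence per class
    return F, int(np.argmax(F))          # H is the top-1 class of F

# usage (with the hypothetical callables defined elsewhere):
# F, H = expected_prediction(train_userdp_fedavg, model_confidence, D, x_test)
```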
Theorem 1 (Condition for Certified Prediction under One Adversarial User). Suppose a randomized mechanism M satisfies user-level (ε, δ)-DP. For two user sets B and B′ that differ by one user, let D and D′ be the corresponding training datasets. For a test input x, suppose A, B ∈ [C] satisfy A = argmax_{c∈[C]} F_c(M(D), x) and B = argmax_{c∈[C]: c≠A} F_c(M(D), x). Then, if

F_A(M(D), x) > e^{2ε} F_B(M(D), x) + (1 + e^ε)δ, (1)

it is guaranteed that H(M(D′), x) = H(M(D), x) = A.

When k > 1, we resort to group DP. According to Lemma 1, given a mechanism M satisfying user-level (ε, δ)-DP, it also satisfies user-level (kε, (1 − e^{kε})/(1 − e^ε) δ)-DP for groups of size k. When k is smaller than a certain threshold, leveraging the group DP property, we would expect the distribution of the stochastic model M(D′) to be not too far away from the distribution of M(D), such that they would make the same prediction for a test sample with probabilistic guarantees. Therefore, the privacy and robustness guarantees are simultaneously met by M.

Theorem 2 (Upper Bound of k for Certified Prediction). Suppose a randomized mechanism M satisfies user-level (ε, δ)-DP. For two user sets B and B′ that differ by k users, let D and D′ be the corresponding training datasets. For a test input x, suppose A, B ∈ [C] satisfy A = argmax_{c∈[C]} F_c(M(D), x) and B = argmax_{c∈[C]: c≠A} F_c(M(D), x). Then H(M(D′), x) = H(M(D), x) = A for all k < K, where K is the certified number of adversarial users:

K = (1/(2ε)) log [ (F_A(M(D), x)(e^ε − 1) + δ) / (F_B(M(D), x)(e^ε − 1) + δ) ] (2)

The proofs of Theorems 1 and 2 are deferred to Appendix A.4. Theorems 1 and 2 reflect a tradeoff between privacy and certified prediction: (i) In Theorem 1, if ε is large such that the RHS of Eq (1) exceeds 1, the robustness condition cannot be met, since the expected confidence F_A(M(D), x) ∈ [0, 1]. However, to achieve a small ε, i.e., strong privacy, large noise is required during training, which would hurt model utility and thus result in a small confidence margin between the top two classes (e.g., F_A(M(D), x) and F_B(M(D), x)), making it hard to meet the robustness condition. (ii) In Theorem 2, if we fix F_A(M(D), x) and F_B(M(D), x), a smaller ε of FL can certify a larger K. However, a smaller ε also induces a smaller confidence margin, thus reducing K instead. As a result, properly choosing ε would help to certify a large K.

Certified Attack Cost. In addition to the certified prediction, we define the attack cost for the attacker, C : Θ → R, which quantifies the difference between the poisoned model and the attack goal. In general, the attacker aims to minimize the expected attack cost J(D) := E[C(M(D))], where the expectation is taken over the randomness of DP training. The cost function can be instantiated according to the concrete attack goal in different types of poisoning attacks, and we provide some examples below. Given a global FL model satisfying user-level (ε, δ)-DP, we will prove the lower bound of the attack cost J(D′) when manipulating the data of at most k users. A higher lower bound of the attack cost indicates a more certifiably robust global model.

Example 1. (Backdoor attack (Gu et al., 2019)) C(θ) = (1/n) ∑_{i=1}^n l(θ, z_i^*), where z_i^* = (x_i + δ_x, y^*), δ_x is the backdoor pattern, and y^* is the target adversarial label. Minimizing J(D′) drives the prediction on any test data with the backdoor pattern δ_x to the target label y^*.

Example 2. (Label Flipping attack (Biggio et al., 2012)) C(θ) = (1/n) ∑_{i=1}^n l(θ, z_i^*), where z_i^* = (x_i, y^*) and y^* is the target adversarial label.
Minimizing J(D′) drives the prediction on test data x_i to the target label y^*.

Example 3. (Parameter-Targeting attack (Ma et al., 2019)) C(θ) = (1/2)‖θ − θ^*‖², where θ^* is the target model. Minimizing J(D′) drives the poisoned model to be close to the target model.

Theorem 3 (Attack Cost with k Attackers). Suppose a randomized mechanism M satisfies user-level (ε, δ)-DP. For two user sets B and B′ that differ by k users, let D and D′ be the corresponding training datasets. Let J(D) be the expected attack cost, where |C(·)| ≤ C̄. Then,

min{e^{kε} J(D) + (e^{kε} − 1)/(e^ε − 1) δ C̄, C̄} ≥ J(D′) ≥ max{e^{−kε} J(D) − (1 − e^{−kε})/(e^ε − 1) δ C̄, 0}, if C(·) ≥ 0;
min{e^{−kε} J(D) + (1 − e^{−kε})/(e^ε − 1) δ C̄, 0} ≥ J(D′) ≥ max{e^{kε} J(D) − (e^{kε} − 1)/(e^ε − 1) δ C̄, −C̄}, if C(·) ≤ 0. (3)

The proof is deferred to Appendix A.4. Theorem 3 provides the upper and lower bounds for the attack cost J(D′). The lower bounds show to what extent the attack can reduce J(D′) by manipulating up to k users, i.e., how successful the attack can be. The lower bounds depend on the attack cost on the clean model J(D), k, and ε. When J(D) is higher, the DPFL model under poisoning attacks is more robust because the lower bounds are accordingly higher; a tighter privacy guarantee, i.e., a smaller ε, can also lead to higher robustness certification as it increases the lower bounds; with a larger k, the attacker's ability grows, leading to a lower possible J(D′). The upper bounds show the least adversarial effect brought by k attackers, i.e., how vulnerable the DPFL model is in the optimistic case (e.g., the backdoor pattern is less distinguishable). Leveraging the lower bounds in Theorem 3, we can lower-bound the minimum number of attackers required to reduce the attack cost to a certain level associated with the hyperparameter τ in Corollary 1.

Corollary 1 (Lower Bound of k Given τ). Suppose a randomized mechanism M satisfies user-level (ε, δ)-DP. Let the attack cost function be C and the expected attack cost be J(·). In order to achieve J(D′) ≤ (1/τ) J(D) for τ ≥ 1 when 0 ≤ C(·) ≤ C̄, or to achieve J(D′) ≤ τ J(D) for 1 ≤ τ ≤ −C̄/J(D) when −C̄ ≤ C(·) ≤ 0, the number of adversarial users should satisfy

k ≥ (1/ε) log [ ((e^ε − 1) J(D) τ + C̄ δ τ) / ((e^ε − 1) J(D) + C̄ δ τ) ] or k ≥ (1/ε) log [ ((e^ε − 1) J(D) τ − C̄ δ) / ((e^ε − 1) J(D) − C̄ δ) ], respectively. (4)

The proof is deferred to Appendix A.4. Corollary 1 shows that a stronger privacy guarantee (i.e., a smaller ε) requires more attackers to achieve the same attack effectiveness, indicating higher robustness.

5 INSTANCE-LEVEL PRIVACY AND CERTIFIED ROBUSTNESS FOR FL

5.1 INSTANCE-LEVEL PRIVACY

In this section, we introduce the instance-level DP definition, the corresponding algorithm, and the privacy analysis for FL. When DP is used to protect the privacy of individual instances, the trained stochastic FL model should not differ much if one instance is modified. Hence, adjacent datasets in instance-level DP are defined as those differing by one instance. The formal definition of instance-level (ε, δ)-DP (Definition 3) is deferred to Appendix A.1. Dopamine (Malekzadeh et al., 2021) provides the first instance-level privacy guarantee under FedSGD (McMahan et al., 2017). However, it has two limitations. First, its privacy bound is loose. Although FedSGD performs both user and batch sampling during training, Dopamine ignores the privacy gain provided by random user sampling. In this section, we improve the privacy guarantee under FedSGD with privacy amplification via user sampling (Bassily et al., 2014; Abadi et al., 2016).
This improvement leads to the algorithm InsDP-FedSGD, which achieves a tighter privacy analysis. We defer the algorithm (Algorithm 2) as well as its privacy guarantee to Appendix A.1. Besides the loose privacy bound, Dopamine (Malekzadeh et al., 2021) only allows users to perform one step of DP-SGD (Abadi et al., 2016) during each FL round. This restriction limits the efficiency of the algorithm and increases the communication overhead. In practice, users in FL are typically allowed to update their local models for many steps before submitting updates, to reduce the communication cost. To solve this problem, we further improve InsDP-FedSGD to support multiple local steps during each round. Specifically, we propose a novel instance-level DPFL algorithm, InsDP-FedAvg (Algorithm 3 in Appendix A.1), which allows users to train multiple local SGD steps before submitting the updates. In InsDP-FedAvg, each user i performs local DP-SGD so that the local training mechanism M^i satisfies instance-level DP. Then, the server aggregates the updates. We prove that the global mechanism M preserves instance-level DP using the DP parallel composition theorem (Dwork & Lei, 2009) and the moments accountant (Abadi et al., 2016). Algorithm 3 formally presents the InsDP-FedAvg algorithm and the calculation of its privacy budget ε. Specifically, the local privacy cost ε_0^i is initialized as 0 before FL training. At round t, if user i is not selected, its local privacy cost is kept unchanged, ε_t^i ← ε_{t−1}^i. Otherwise, user i updates its local model by running DP-SGD for V local steps with batch sampling probability p, noise level σ, and clipping threshold S, and ε_t^i is accumulated upon ε_{t−1}^i via its local moments accountant. Next, the server aggregates the updates from the selected users and leverages {ε_t^i}_{i∈[N]} and the parallel composition in Theorem 4 to calculate the global privacy cost ε_t. After T rounds, the mechanism M that outputs the FL global model in Algorithm 3 is instance-level (ε_T, δ)-DP.

Theorem 4 (InsDP-FedAvg Privacy Guarantee). In Algorithm 3, during round t, if the local mechanism M^i satisfies (ε_t^i, δ)-DP, then the global mechanism M satisfies (max_{i∈[N]} ε_t^i, δ)-DP.

The idea behind Theorem 4 is that when D′ and D differ by one instance, the modified instance only falls into one local dataset, and thus the parallel composition theorem (Dwork & Lei, 2009) can be applied. The privacy guarantee then corresponds to the worst case and is obtained by taking the maximum local privacy cost across all users. The detailed proof is given in Appendix A.1.

5.2 CERTIFIED ROBUSTNESS OF INSTANCE-LEVEL DPFL AGAINST POISONING ATTACKS

Threat Model. We consider poisoning attacks in the presence of k poisoned instances. These instances could be controlled by the same or multiple adversarial users. Our robustness certification is agnostic to the attack methods as long as the number of poisoned instances is constrained. According to the group DP property (Lemma 1) and the post-processing property for an FL model with instance-level (ε, δ)-DP, we prove that our robustness certification results proposed for user-level DP are also applicable to instance-level DP. Below is the formal theorem (the proof is given in Appendix A.4).

Theorem 5. Suppose D and D′ differ by k instances, and the randomized mechanism M satisfies instance-level (ε, δ)-DP. Then the results in Theorems 1, 2, and 3, and Corollary 1 hold for M, D, and D′.

Comparison with existing certified prediction methods in the centralized setting.
The form of Theorem 1 is similar to the robustness condition against test-time attacks in Proposition 1 of (Lecuyer et al., 2019a). This is because the derived robustness conditions are both rooted in DP properties, but ours focuses on robustness against training-time attacks in FL, which is more challenging considering the distributed nature and the model training dynamics, i.e., the analysis of the privacy budget over training rounds. Our Theorem 1 is also different from previous randomized smoothing-based certifiably robust centralized learning against backdoor (Weber et al., 2020) and label flipping (Rosenfeld et al., 2020) attacks. First, our randomness comes from the inherent training randomness of user/instance-level (ε, δ)-DP, e.g., user subsampling and Gaussian noise. Thus, the certified robustness for free in DPFL means that the DPFL learning algorithm M itself is randomized, and such randomness can lead to robustness certification with a non-trivial quantitative measurement of the randomness. On the contrary, robustness in randomized smoothing-based methods comes from explicitly making the classification process randomized by adding noise to training datasets (Weber et al., 2020; Rosenfeld et al., 2020) or to test samples (Lecuyer et al., 2019a; Cohen et al., 2019), which is easier to measure. Second, our Theorems 1 and 2 hold no matter how ε is achieved, which means that we can add different types of noise, leverage different subsampling strategies, or even use different FL training protocols to achieve user/instance-level ε. However, in (Weber et al., 2020; Rosenfeld et al., 2020), different certifications require different types of noise (Laplacian, Gaussian, etc.). Additionally, DP is well suited to characterizing the robustness against poisoning since DP composition theorems can be leveraged to track the privacy cost ε, which captures the training dynamics of the ML model parameters without additional assumptions. Otherwise, one may need to track the deviations of model parameters by analyzing SGD over training, which is theoretically knotty and often requires strong assumptions on Lipschitz continuity, smoothness, or convexity of the trained models.

6 EXPERIMENTS

We present evaluations of our robustness certifications, especially Theorems 2 and 3 and Corollary 1. We find that 1) there is a tradeoff between certified prediction and privacy on certain datasets; 2) a tighter privacy guarantee always provides stronger certified robustness in terms of the certified attack cost; 3) our lower bounds of the certified attack cost are generally tight when k is small. When k is large, they are tight under strong attacks (e.g., a large local poisoning ratio α). Stronger attacks or tighter certifications are required to further close the gap between the empirical robustness and the theoretical bounds.

Data and Model. We evaluate our robustness certification results on three datasets: image classification on MNIST and CIFAR-10, and a text sentiment analysis task on tweets from Sentiment140 (Go et al.) (Sent140), which involves classifying Twitter posts as positive or negative. For the image datasets, we use the corresponding standard CNN architectures in the differential privacy library (opa, 2021) of PyTorch; for Sent140, we use an LSTM classifier. Following previous work on DP ML (Jagielski et al., 2020; Ma et al., 2019) and backdoor attacks (Tran et al., 2018; Weber et al., 2020) which evaluate with two classes, we focus on binary classification for MNIST (digits 0 and 1) and CIFAR-10 (airplane and bird), and defer the 10-class results to Appendix A.3.
We train the FL model following Algorithm 1 for user-level privacy and Algorithm 3 for instance-level privacy. We refer the readers to Appendix A.3 for details about the datasets, networks, and parameter setups.

Poisoning Attacks. We evaluate several state-of-the-art poisoning attacks against the proposed UserDP-FedAvg and InsDP-FedAvg. We first consider backdoor attacks (BKD) (Bagdasaryan et al., 2020) and label flipping attacks (LF) (Fung et al., 2020). For InsDP-FedAvg, we consider the worst case where the k backdoored or label-flipped instances all fall into the dataset of one user. For UserDP-FedAvg, we additionally evaluate the distributed backdoor attack (DBA) (Xie et al., 2019), which is claimed to be a more stealthy backdoor attack against FL. Moreover, we consider BKD, LF, and DBA via the model replacement approach (Bagdasaryan et al., 2020), where k attackers train the local models using local datasets with an α fraction of poisoned instances and scale the malicious updates with hyperparameter γ, i.e., ∆w_t^i ← γ∆w_t^i, before sending them to the server. This way, the malicious updates have a stronger impact on the FL model. Note that even when attackers perform scaling, after server clipping the sensitivity of the updates is still upper-bounded by the clipping threshold S, so the privacy guarantee in Proposition 1 still holds under poisoning attacks via model replacement. Detailed attack setups are presented in Appendix A.3.

Evaluation Metrics and Setup. We consider two evaluation metrics based on our robustness certification criteria. The first metric is certified accuracy, which is the fraction of the test set for which the poisoned FL model makes correct and consistent predictions compared with the clean FL model. Given a test set of size n, for the i-th test sample, the ground truth label is y_i, the output prediction is c_i, and the certified number of adversarial users/instances is K_i. We calculate the certified accuracy at k as (1/n) ∑_{i=1}^n 1{c_i = y_i and K_i ≥ k}. The second metric is the lower bound of the attack cost in Theorem 3, i.e., max{e^{−kε} J(D) − (1 − e^{−kε})/(e^ε − 1) δ C̄, 0}. We evaluate the tightness of this lower bound by comparing it with the empirical attack cost J(D′). To quantify the robustness, we evaluate the expected class confidence F_c(M(D), x) for class c via Monte-Carlo sampling. We run the private FL algorithms M = 1000 times, obtaining a class confidence f_c^s = f_c(M(D), x) in each run s. We compute the empirical mean to estimate F_c(M(D), x) ≈ (1/M) ∑_{s=1}^M f_c^s and use it to evaluate Theorem 2. In addition, we use Hoeffding’s inequality (Hoeffding, 1994) to calibrate the empirical estimation with confidence level parameter ψ; the results are deferred to Appendix A.3. In terms of the attack cost, we use Examples 1 and 2 as the definitions of the cost function C for backdoor attacks and label flipping attacks respectively. We follow a similar protocol to estimate J(D′) for Theorem 3 and Corollary 1.

6.1 ROBUSTNESS EVALUATION OF USER-LEVEL DPFL

Certified Prediction. Figure 1(a)(b) present the user-level certified accuracy under different ε, obtained by training DPFL models with different noise scales σ. The results on the Sent140 dataset are presented in Figure 13 of Appendix A.3.8. We observe that the largest k can be certified when ε is around 0.6298 on MNIST, 0.1451 on CIFAR-10, and 0.2247 on Sent140, which verifies the tradeoff between ε and certified accuracy discussed in Section 4.2.
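Before turning to further observations, a minimal sketch of the Monte-Carlo evaluation protocol described above: estimating the expected class confidences from M training runs and computing the certified accuracy at k via the K bound of Theorem 2. The array layout of `confidences` and the function names are assumptions for illustration.

```python
import numpy as np

def expected_confidence(confidences: np.ndarray) -> np.ndarray:
    """confidences: shape (M, n_test, C), per-run class confidences f(M(D), x).
    Returns the Monte-Carlo estimate of F_c(M(D), x), shape (n_test, C)."""
    return confidences.mean(axis=0)

def certified_accuracy_at_k(confidences: np.ndarray, labels: np.ndarray,
                            k: int, eps: float, delta: float) -> float:
    """Fraction of test points whose prediction is correct and whose certified
    number of adversarial users/instances K_i (Theorem 2) is at least k."""
    F = expected_confidence(confidences)            # (n_test, C)
    top2 = np.argsort(-F, axis=1)[:, :2]            # indices of top-2 classes
    rows = np.arange(len(F))
    F_A, F_B = F[rows, top2[:, 0]], F[rows, top2[:, 1]]
    K = (1.0 / (2.0 * eps)) * np.log(
        (F_A * (np.exp(eps) - 1.0) + delta) /
        (F_B * (np.exp(eps) - 1.0) + delta))
    correct = top2[:, 0] == labels
    return float(np.mean(correct & (K >= k)))
```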
Advanced DP protocols that requires less noise while achieving similar level of privacy are favored to improve the privacy, utility, and certified accuracy simultaneously. Furthermore, we compare the certified accuracy of four different user-level DPFL methods (McMahan et al., 2018; Geyer et al., 2017) given the same privacy budget . As shown in Figure 14 and Figure 15 of Appendix. A.3.9, the models trained by different DPFL algorithms satisfying same have different certified robustness results. This is because even under the same , different DPFL algorithmsM produce trained modelsM(D) with different model performance, thus leading to different certified robustness. More discussion could be found in Appendix. A.3.9. Certified Attack Cost. In order to evaluate Theorem 3 and characterize the tightness of our theoretical lower bound J(D′), we compare it with the empirical attack cost J(D′) under different local poison fraction α , attack methods and scale factor γ in Figure 2. Note that when k = 0, the model is benign so the empirical cost equals to the certified one. We find that 1) when k increases, the attack ability grows, and both the empirical attack cost and theoretical lower bound decreases. 2) In Figure 2 row 1, given the same k, higher α, i.e., poisoning more local instances for each attacker, achieves a stronger attack, under which lower empirical J(D) can be achieved and is more close to the certified lower bound. This indicates that the lower bound appears tighter when the poisoning attack is stronger. 3) In Figure 2 row 2, we fix α = 100% and evaluate UserDP-FedAvg under different γ and attack methods. It turns out that DP serves as a strong defense empirically for FL, given that J(D) did not vary much under different γ(1, 50, 100) and different attack methods (BKD, DBA, LF). This is because the clipping operation restricts the magnitude of malicious updates, rendering the model replacement ineffective; the Gaussian noise perturbs the malicious updates and makes the DPFL model stable, and thus the FL model is less likely to memorize the poisoning instances. 4) In both rows, the lower bounds are tight when k is small. When k is large, there remains a gap between our theoretical lower bounds and empirical attack costs under different attacks, which will inspire more effective poisoning attacks or tighter robustness certification. Certified Attack Cost under Different . Here we further explore the impacts of different factors on the certified attack cost. Figure 3 presents the empirical attack cost and the certified attack cost lower bound given different on user-level DP. It is shown that as the privacy guarantee becomes stronger, i.e. smaller , the model is more robust achieving higher J(D′) and J(D′). In Figure 5 (a)(b), we train user-level ( , δ) DPFL models, calculate corresponding J(D), and plot the lower bound of k given different attack effectiveness hyperparameter τ according to Corollary 1. It shows that 1) when the required attack effectiveness is higher, i.e., τ is larger, more number of attackers is required. 2) To achieve the same effectiveness of attack, fewer number of attackers is needed for larger , which means that DPFL model with weaker privacy is more vulnerable to poisoning attacks. 6.2 ROBUSTNESS EVALUATION OF INSTANCE-LEVEL DPFL Certified Prediction. Figure 1(c)(d) show the instance-level certified accuracy under different . 
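The certified attack cost curves discussed in this subsection follow directly from the closed-form bounds of Theorem 3 and Corollary 1 once J(D), ε, δ, and C̄ are known. A small sketch for the non-negative cost case; the example inputs in the loop (J_clean, τ, C̄) are illustrative values, not the paper's measured numbers.

```python
import math

def attack_cost_lower_bound(J_clean: float, k: int, eps: float,
                            delta: float, C_bar: float) -> float:
    """Theorem 3 (case 0 <= C(.) <= C_bar): certified lower bound of the
    expected attack cost J(D') under at most k adversarial users/instances."""
    return max(math.exp(-k * eps) * J_clean
               - (1.0 - math.exp(-k * eps)) / (math.exp(eps) - 1.0) * delta * C_bar,
               0.0)

def min_attackers_for_tau(J_clean: float, tau: float, eps: float,
                          delta: float, C_bar: float) -> float:
    """Corollary 1 (case 0 <= C(.) <= C_bar): lower bound on the number of
    attackers needed to push J(D') below J(D)/tau."""
    num = (math.exp(eps) - 1.0) * J_clean * tau + C_bar * delta * tau
    den = (math.exp(eps) - 1.0) * J_clean + C_bar * delta * tau
    return (1.0 / eps) * math.log(num / den)

# Illustrative sweep: a smaller epsilon raises the certified cost lower bound
# and the number of attackers required for the same attack effectiveness.
for eps in (0.4, 0.6, 1.2):
    print(eps,
          attack_cost_lower_bound(J_clean=0.5, k=5, eps=eps, delta=0.0029, C_bar=1.0),
          min_attackers_for_tau(J_clean=0.5, tau=2.0, eps=eps, delta=0.0029, C_bar=1.0))
```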
The optimal for K is around 0.3593 for MNIST and 0.6546 for CIFAR-10, which is aligned with our observation of the tradeoff between certified accuracy and privacy on user-level DPFL (Section 6.1). Certified Attack Cost. Figure 4 show the certified attack cost on CIFAR-10. From Figure 4 (a)(b), poisoning more instances (i.e., larger k) induces lower theoretical and empirical attack cost. From Figure 4 (c)(d), it is clear that instance-level DPFL with stronger privacy guarantee provides higher attack cost both empirically and theoretically, meaning that it is more robust against poisoning attacks. Results on MNIST are deferred to Appendix A.3. Figure 5 (c)(d) show the lower bound of k under different instance-level given different τ . Fewer poisoned instances are required to reduce the J(D′) to the similar level for a less private DPFL model, indicating that the model is easier to be attacked. 7 CONCLUSION In this paper, we present the first work on deriving certified robustness in DPFL for free against poisoning attacks. We propose two robustness certification criteria, based on which we prove that a FL model satisfying user-level (instance-level) DP is certifiably robust against a bounded number of adversarial users (instances). Our theoretical analysis characterizes the inherent relation between certified robustness and differential privacy of FL on both user and instance levels, which are empirically verified with extensive experiments. Our results can be used to improve the trustworthiness of DPFL. Ethics Statement. Our work study the robustness guarantee of differentially private federated learning models from theoretical and empirical perspectives. All the datasets and packages we use are open-sourced. We do not have ethical concerns in our paper. Reproducibility Statement. Our source code is available as the supplemental material for reproducibility purpose. Our experiments can be reproduced following our detailed training and evaluation setups in Appendix A.3. The complete proofs of privacy analysis and certified robustness analysis can be found in the Appendix A.1 and Appendix A.4, respectively. A APPENDIX The Appendix is organized as follows: • Appendix A.1 provides the DP definitions and the DPFL algorithms on both user and instance levels, and the proofs for corresponding privacy guarantees. • Appendix A.2 specifies our threat models. • Appendix A.3 provides more details on experimental setups for training and evaluation, the addition experimental results on certified accuracy with confidence level, robustness evaluation of InsDP-FedAvg on MNIST, robustness evaluation on 10-class classification, DP bound comparison between InsDP-FedSGD and Dopamine, certified accuracy of UserDP-FedAvg on Sent140 and certified accuracy comparison of different user-level DPFL algorithms. • Appendix A.4 provides the proofs for the certified robustness related analysis, including Lemma 1, Theorem 1, 2, 3, 5 and Corollary 1. • Appendix A.5 provides the comparison to related work (Lecuyer et al., 2019a; Ma et al., 2019). A.1 DIFFERENTIALLY PRIVATE FEDERATED LEARNING A.1.1 USERDP-FEDAVG Definition 2 (User-level ( , δ)-DP). Let B,B′ be two user sets with size N . Let D and D′ be the datasets that are the union of local training examples from all users inB andB′ respectively. Then,D and D′ are adjacent if B and B′ differ by one user. The mechanismM satisfies user-level ( , δ)-DP if it meets Definition 1 with D and D′ as adjacent datasets. Algorithm 1: UserDP-FedAvg. 
Input: Initial model w0, user sampling probability q, privacy parameter δ, clipping threshold S, noise level σ, local datasets D1, ..., DN , local epochs E, learning rate η. Output: FL model wT and privacy cost Server executes: for each round t = 1 to T do m← max(q ·N, 1); Ut ← (random subset of m users); for each user i ∈ Ut in parallel do ∆wit ← UserUpdate(i, wt−1) ; wt ← wt−1+ 1m (∑ i∈Ut Clip(∆w i t, S) +N ( 0, σ2S2 )) ; M.accum priv spending(σ, q, δ) ; =M.get privacy spent() ; return wT , Procedure UserUpdate(i, wt−1) w ← wt−1 ; for local epoch e = 1 to E do for batch b ∈ local dataset Di do w ← w − η∇l(w; b) ∆wit ← w − wt−1 ; return ∆wit Procedure Clip(∆, S) return ∆/max ( 1, ‖∆‖2 S ) In Algorithm 1,M.accum priv spending() andM.get privacy spent() are the calls on the moments accountantM refer to the API of (Abadi et al., 2016). Given the user sampling probability q, noise level σ, FL rounds T , and a δ > 0, UserDP-FedAvg satisfies ( , δ)-DP as below, which is a generalization of (Abadi et al., 2016). The aim is to analyze privacy budget , which is accumulated as T increases due to the continuous access to training data. Proposition 1 (UserDP-FedAvg Privacy Guarantee). There exist constants c1 and c2 so that given user sampling probability q, and FL rounds T , for any ε < c1q2T , if σ ≥ c2 q √ T log(1/δ) , the randomized mechanismM in Algorithm 1 is ( , δ)-DP for any δ > 0. Proof. The proof follows the proof of Theorem 1 in (Abadi et al., 2016), while the notations have slightly different meanings under FL settings. In Proposition 1, we use q to represent user-level sampling probability and T to represent FL training rounds. Note that the above privacy analysis can be further improved by Rényi Differential Privacy (Mironov et al., 2019). Discussion (Li et al., 2020b) divide the user-level privacy into global privacy (Geyer et al., 2017; McMahan et al., 2018) and local privacy (Agarwal et al., 2018). In both local and global privacy, the norm of each update is clipped. The difference lies in that the noise is added on the aggregated model updates in global privacy because a trusted server is assumed, while the noise is added on each local update in local privacy because it assumes that the central server might be malicious. Algorithm 1 belongs to global privacy. A.1.2 INSDP-FEDSGD Definition 3 (Instance-level ( , δ)-DP). Let D be the dataset that is the union of local training examples from all users. Then, D and D′ are adjacent if they differ by one instance. The mechanism M is instance-level ( , δ)-DP if it meets Definition 1 with D and D′ as adjacent datasets. Algorithm 2: InsDP-FedSGD. Input: Initial model w0, user sampling probability q, privacy parameter δ, local clipping threshold S, local noise level σ, local datasets D1, ..., DN , learning rate η, batch sampling probability p. 
Output: FL model wT and privacy cost Server executes: for each round t = 1 to T do m← max(q ·N, 1); Ut ← (random subset of m clients); for each user i ∈ Ut in parallel do ∆wit ← UserUpdate(i, wt−1) ; wt ← wt−1 + 1m ∑ i∈Ut ∆w i t ; M.accum priv spending( √ mσ, pq, δ) =M.get privacy spent() ; return wT , Procedure UserUpdate(i, wt−1) w ← wt−1 ; bit ←(uniformly sample a batch fromDi with probability p = L/|Di|); for each xj ∈ bit do g(xj)← ∇l(w;xj); ḡ(xj)← Clip(g(xj), S) ; g̃ ← 1L (∑ j ḡ(xj) +N ( 0, σ2S2 )) ; w ← w − ηg̃ ; ∆wit ← w − wt−1 ; return ∆wit Procedure Clip(∆, S) return ∆/max ( 1, ‖∆‖2 S ) Under FedSGD, when each local model performs one step of DP-SGD (Abadi et al., 2016), the randomized mechanismM that outputs the global model preserves the instance-level DP. We can regard the one-step update for the global model in Algorithm 2 as: wt ← wt−1 − 1 m ∑ i∈Ut η L ∑ xj∈bit ḡ(xj) +N ( 0, σ2S2 ) (5) Proposition 2 (InsDP-FedSGD Privacy Guarantee). There exist constants c1 and c2 so that given batch sampling probability p, and user sampling probability q, the number of selected users each round m, and FL rounds T , for any ε < c1(pq)2T , if σ ≥ c2 pq √ T log(1/δ) √ m , the randomized mechanismM in Algorithm 2 is ( , δ)-DP for any δ > 0. Proof. i) In instance-level DP, we consider the sampling probability of each instance under the combination of user-level sampling and batch-level sampling. Since the user-level sampling probability is q and the batch-level sampling probablity is p, each instance is sampled with probability pq. ii) Additionally, since the sensitivity of instance-wise gradient w.r.t one instance is S, after local gradient descent and server FL aggregation, the equivalent sensitivity of global model w.r.t one instance is S′ = ηSLm according to Eq (5). iii) Moreover, since the local noise is ni ∼ N (0, σ 2S2) , then the “virtual” global noise is n = ηmL ∑ i∈Ut ni according to Eq (5), so n ∼ N (0, η2σ2S2 mL2 ). Let η2σ2S2 mL2 = σ ′2S′ 2 such that n ∼ N (0, σ′2S′2). Because S′ = ηSLm , the equivalent global noise level is σ′2 = σ2m, i.e., σ′ = σ √ m. In Proposition 2, we use pq to represent instance-level sampling probability, T to represent FL training rounds, σ √ m to represent the equivalent global noise level. The rest of the proof follows the proof of Theorem 1 in (Abadi et al., 2016). We defer the DP bound evaluation comparison between InsDP-FedSGD and Dopamine to Appendix A.3.7. A.1.3 INSDP-FEDAVG Algorithm 3: InsDP-FedAvg. Input: Initial model w0, user sampling probability q, privacy parameter δ, local clipping threshold S, local noise level σ, local datasets D1, ..., DN , local steps V , learning rate η, batch sampling probability p. Output: FL model wT and privacy cost Server executes: for each round t = 1 to T do m← max(q ·N, 1); Ut ← (random subset of m users); for each user i ∈ Ut in parallel do ∆wit, i t ← UserUpdate(i, wt−1) ; for each user i /∈ Ut do it ← it−1 ; wt ← wt−1 + 1m ∑ i∈Ut ∆w i t ; t =M.parallel composition({ it}i∈[N ]) = T ; return wT , Procedure UserUpdate(i, wt−1) w ← wt−1 ; for each local step v = 1 to V do b ←(uniformly sample a batch from Di with probability p = L/|Di|); for each xj ∈ b do g(xj)← ∇l(w;xj); ḡ(xj)← Clip(g(xj), S) ; g̃ ← 1L ( ∑ j ḡ(xj) +N ( 0, σ2S2 ) ); w ← w − ηg̃ ; Mi.accum priv spending(σ, p, δ) ; it =Mi.get privacy spent() ; ∆wit ← w − wt−1 ; return ∆wit, it Procedure Clip(∆, S) return ∆/max ( 1, ‖∆‖2 S ) Lemma 2 (InsDP-FedAvg Privacy Guarantee when T = 1). 
In Algorithm 3, when T = 1, suppose local mechanismMi satisfies ( i, δ)-DP, then global mechanismM satisfies (maxi∈[N ] i, δ)-DP. Proof. We can regard federated learning as partitioning a dataset D into N disjoint subsets {D1, D2, . . . , DN}. N mechanisms {M1, . . . ,MN} are operated on these N parts separately and eachMi satisfies its own i-DP for i ∈ [1, N ]. Note that if i-th user is not selected , i = 0 because local dataset Di is not accessed and there is no privacy cost. Without loss of generality, we assume the modified data sample x′ (x → x′ causes D → D′) is in the local dataset of k-th client Dk. Let D,D′ be two neighboring datasets (Dk, D′k are also two neighboring datasets). M is randomized mechanism that outputs the global model, andMi is the randomized mechanism that outputs the local model update ∆wi. Suppose w0 is the initialized and deterministic global model, and {z1, . . . , zN} are randomized local updates. We have a sequence of computations {z1 = M1(D1), z2 = M2(D2; z1), z3 = M3(D3; z1, z2) . . .} and z = M(D) = w0 + ∑N i=1 zi. Note that if i-th user is not selected , zi = 0. According to the parallel composition (Tu), we have Pr[M(D) = z] = Pr[M1(D1) = z1] Pr[M2(D2; z1) = z2] . . .Pr[MN (DN ; z1, . . . , zN−1) = zN ] ≤ exp( k) Pr[Mk(D′k; z1, . . . , zk−1) = zk] ∏ i6=k Pr[Mi(Di; z1, . . . , zi−1) = zi] = exp( k) Pr[M(D′) = z] SoM satisfies k-DP when the modified data sample lies in the subset Dk. Consider the worst case of where the modified data sample could fall in, we know thatM satisfies (maxi∈[N ] i)-DP. We recall Theorem 4. Theorem 4 (InsDP-FedAvg Privacy Guarantee). In Algorithm 3, during round t, if the local mechanismMi satisfies ( it, δ)-DP, then the global mechanismM satisfies ( maxi∈[N ] i t, δ ) -DP. Proof. Again, without loss of generality, we assume the modified data sample x′ (x → x′ causes D → D′) is in the local dataset of k-th user Dk. We first consider the case when all users are selected. At each round t, N mechanisms are operated on N disjoint parts and eachMit satisfies own i-DP where i is the privacy cost for accessing the local dataset Di for one round (not accumulating over previous rounds). Let D,D′ be two neighboring datasets (Dk, D′k are also two neighboring datasets). Suppose z0 = Mt−1(D) is the aggregated randomized global model at round t − 1, and {z1, . . . , zN} are the randomized local updates at round t, we have a sequence of computations {z1 = M1t (D1; z0), z2 = M2t (D2; z0, z1), z3 = M3t (D3; z0, z1, z2) . . .} and z =Mt(D) = z0 + ∑N i zi. We first consider the sequential composition (Dwork & Roth, 2014) to accumulate the privacy cost over FL rounds. According to parallel composition, we have Pr[Mt(D) = z] = Pr[Mt−1(D) = z0] N∏ i=1 Pr[Mit(Di; z0, z1, . . . , zi−1) = zi] = Pr[Mt−1(D) = z0] Pr[Mkt (Dk; z0, z1, . . . , zk−1) = zk] ∏ i 6=k Pr[Mit(Di; z0, z1, . . . , zi−1) = zi] ≤ exp( t−1) Pr[Mt−1(D′) = z0] exp( k) Pr[Mkt (D′k; z0, z1, . . . , zk−1) = zk] ∏ i 6=k Pr[Mit(Di; z0, z1, . . . , zi−1) = zi] = exp( t−1 + k) Pr[Mt(D′) = z] Therefore,Mt satisfies t-DP, where t = t−1 + k. Because the modified data sample always lies in Dk over t rounds and 0 = 0, we can have t = t k, which means that the privacy guarantee of global mechanismMt is only determined by the local mechanism of k-th user over t rounds. Moreover, moment accountant (Abadi et al., 2016) is known to reduce the privacy cost from O(t) to O( √ t). 
We can use the more advanced composition, i.e., moment accountant, instead of the sequential composition, to accumulate the privacy cost for local mechanismMk over t FL rounds. In addition, we consider user subsampling. As described in Algorithm 3, if the user i is not selected at round t, then its local privacy cost is kept unchanged at this round. Take the worst case of where x′ could lie in, at round t,M satisfies t-DP, where t = maxi∈[N ] it, local mechanism M i satisfies it-DP, and the local privacy cost i t is accumulated via local moment accountant in i-th user over t rounds. A.2 THREAT MODELS We consider targeted poisoning attacks of two types. In backdoor attacks (Gu et al., 2019; Chen et al., 2017a), the goal is to embed a backdoor pattern (i.e., a trigger) during training such that any test input with such pattern will be mis-classified as the target. In label flipping attacks (Biggio et al., 2012; Huang et al., 2011), the labels of clean training examples from one source class are flipped to the target class while the features of the data are kept unchanged. In FL, the purpose of backdoor attacks is to manipulate local models with backdoored local data, so that the global model would behave normally on untampered data samples while achieving high attack success rate on clean data (Bagdasaryan et al., 2020). Given the same purpose, distributed backdoor attack (DBA) (Xie et al., 2019) decomposes the same backdoor pattern to several smaller ones and embeds them to different local training sets for different adversarial users. The goal of label flipping attack against FL is to manipulate local datasets with flipped labels such that the global model will mis-classify the test data in the source class as the target class. The model replacement (Bagdasaryan et al., 2020) is a more powerful approach to perform the above attacks, where the attackers first train the local models using the poisoned datasets and then scale the malicious updates before sending them to the server. This way, the attacker’s updates would have a stronger impact on the FL model. We use the model replacement method to perform poisoning attacks and study the effectiveness of DPFL. For UserDP-FedAvg, we consider backdoor, distributed backdoor, and label flipping attacks via the model replacement approach. Next, we formalize the attack process and introduce the notations. Suppose the attacker controls k adversarial users, i.e., there are k attackers out of N users. Let B be the original user set of N benign users, and B′ be the user set that contains k attackers. Let D := {D1, D2, . . . , DN} be the union of original benign local datasets across all users. For a data sample zij := {xij , yij} in Di, we denote its backdoored version as z′ i j := {xij + δx, y∗}, where δx is the backdoor pattern, y∗ is the targeted label; the distributed backdoor attack (DBA) version as z′ i j := {xij + δix, y∗}, where δix is the distributed backdoor pattern for attacker i; the label-flipped version as z′ij := {xij , y∗}. Note that the composition of all DBA patterns is equivalent to the backdoor pattern, i.e., ∑k i=1 δ i x = δx. We assume attacker i has αi fraction of poisoned samples in its local dataset D′i. Let D ′ := {D′1, . . . , D′k−1, D′k, Dk+1, . . . , DN} be the union of local datasets when k attackers are present. The adversarial user i performs model replacement by scaling the model update with hyperparameter γ before submitting it to the server, i.e., ∆wit ← γ∆wit. 
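A schematic sketch of the poisoning pipeline formalized above for a single adversarial user: embedding the backdoor pattern δ_x into an α fraction of the local samples, flipping those labels to the adversarial target y*, and scaling the resulting update by γ (model replacement). All function and variable names here are illustrative assumptions rather than the exact attack implementation used in the experiments.

```python
import numpy as np

def poison_local_data(X, y, trigger_mask, trigger_value, target_label, alpha, rng):
    """Embed the backdoor pattern into an alpha fraction of the local samples
    and flip their labels to the adversarial target y*."""
    X_p, y_p = X.copy(), y.copy()
    n_poison = int(alpha * len(X))
    idx = rng.choice(len(X), size=n_poison, replace=False)
    X_p[idx] = np.where(trigger_mask, trigger_value, X_p[idx])
    y_p[idx] = target_label
    return X_p, y_p

def model_replacement(local_update: np.ndarray, gamma: float) -> np.ndarray:
    """Scale the malicious update before submission; server-side clipping still
    bounds its sensitivity by S, so the DP guarantee is unaffected."""
    return gamma * local_update
```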
In our threat model, we consider the attacker that follows our training protocol and has no control over which users are sampled. For InsDP-FedAvg, we consider both backdoor and label flipping attacks. Since distributed backdoor and model replacement attacks are proposed for adversarial users rather than adversarial instances, we do not consider them for instance-level DPFL. There are k backdoored or label-flipped instances {z′1, z′2, . . . , z′k}, which could be controlled by same or multiple users. In our threat model, we consider the attacker that follows our training protocol and has no control over which data partition (or batch) is sampled. Note that we do not assume that the adversaries’ poisoning data always be sampled. In our algorithms, each batch is randomly subsampled, so the adversaries cannot control if poisoned data are sampled in each step. A.3 EXPERIMENTAL DETAILS AND ADDITIONAL RESULTS A.3.1 DATASETS AND MODELS We evaluate our robustness certification results with two datasets: MNIST (LeCun & Cortes, 2010) and CIFAR-10 (Krizhevsky, 2009). For each dataset, we use corresponding standard CNN architectures in the differential privacy library (opa, 2021) of PyTorch (Paszke et al., 2019). MNIST: We study an image classification problem of handwritten digits in MNIST. It is a dataset of 70000 28x28 pixel images of digits in 10 classes, split into a train set of 60000 images and a test set of 10000 images. Except Section A.3.6, we consider binary classification on classes 0 and 1, making our train set contain 12665 samples, and the test set 2115 samples. The model consists of two Conv-ReLu-MaxPooling layers and two linear layers. CIFAR-10: We study image classification of vehicles and animals in CIFAR-10. This is a harder dataset than MNIST, consisting of 60000 32x32x3 images, split into a train set of 50000 and a test set of 10000. Except Section A.3.6, we consider binary classification on class airplane and bird, making our train set contain 10000 samples, and the test set 2000 samples. The model consists of four Conv-ReLu-AveragePooling layers and one linear layer. When training on CIFAR10, we follow the standard practice for differential privacy (Abadi et al., 2016; Jagielski et al., 2020) and fine-tune a whole model pre-trained non-privately on the more complex CIFAR100, a similarly sized but more complex benchmark dataset. We can achieve reasonable performance on CIFAR-10 datasets by only training (fine-tuning) few rounds. Sent140: We consider a text sentiment analysis task on tweets from Sentiment140 (Go et al.) (Sent140) which involves classifying Twitter posts as positive or negative. We use a two layer LSTM binary classifier containing 256 hidden units with pretrained 300D GloVe embedding (Pennington et al., 2014). Each twitter account corresponds to a device. We use the same network architecture, non-iid dataset partition method, number of selected user per round, learning rate, batch size, etc. as in (Li et al., 2018), which are summarized in Table 1. A.3.2 TRAINING DETAILS We simulate the federated learning setup by splitting the training datasets for N FL users in an i.i.d manner. FL users run SGD with learning rate η, momentum 0.9, weight decay 0.0005 to update the local models. The training parameter setups are summarized in Table 1. 
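A minimal PyTorch sketch of the MNIST architecture described above (two Conv-ReLU-MaxPool blocks followed by two linear layers). The exact channel and hidden sizes are assumptions chosen to mirror common Opacus examples, not necessarily the precise configuration used in the experiments.

```python
import torch.nn as nn

class MnistCNN(nn.Module):
    """Two Conv-ReLU-MaxPool blocks followed by two linear layers,
    for 28x28 grayscale inputs and binary (or 10-class) output."""
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool2d(2),                      # 28x28 -> 14x14
            nn.Conv2d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool2d(2),                      # 14x14 -> 7x7
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 7 * 7, 64), nn.ReLU(),
            nn.Linear(64, num_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))
```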
Following (McMahan et al., 2018) that use δ ≈ 1N1.1 as privacy parameter, for UserDP-FedAvg we set δ = 0.0029 according to the total number of users, and for InsDP-FedAvg we set δ = 0.00001 according the total number of training samples. Next we summarize the privacy guarantees and clean accuracy offered when we study the certified prediction and certified attack cost, which are also the training parameters setups when k = 0 in Figure 1, 2, 3, 4, 5, 8. User-level DPFL In order to study the user-level certified prediction under different privacy guarantee, for MNIST, we set to be 0.2808, 0.4187, 0.6298, 0.8694, 1.8504, 2.8305, 4.8913, 6.9269, which are obtained by training UserDP-FedAvg FL model for 3 rounds with noise level σ = 3.0, 2.3, 1.8, 1.5, 1.0, 0.8, 0.6, 0.5, respectively (Figure 1(a)). For CIFAR-10, we set to be 0.1083, 0.1179, 0.1451, 0.2444, 0.3663, 0.4527, 0.5460, 0.8781, which are obtained by training UserDP-FedAvg FL model for one round with noise level σ = 10.0, 8.0, 6.0, 4.0, 3.0, 2.6, 2.3, 1.7, respectively (Figure 1(b)). The clean accuracy (average over 1000 runs) of UserDP-FedAvg under non-DP training ( = ∞) and DP training (varying ) on MNIST and CIFAR-10 are reported in Table. 2 and Table. 3 respectively. To certify the attack cost under different number of adversarial users k (Figure 2), for MNIST, we set the noise level σ to be 2.5. When k = 0, after training UserDP-FedAvg for T = 3, 4, 5 rounds, we obtain FL models with privacy guarantee = 0.3672, 0.4025, 0.4344 and clean accuracy (average over M runs) 86.69%, 88.76%, 88.99%. For CIFAR-10, we set the noise level σ to be 3.0. After training UserDP-FedAvg for T = 3, 4 rounds under k = 0, we obtain FL models with privacy guarantee = 0.5346, 0.5978 and clean accuracy 78.63%, 78.46%. With the interest of certifying attack cost under different user-level DP guarantee (Figure 3, Figure 5), we explore the empirical attack cost and the certified attack cost lower bound given different . For MNIST, we set the privacy guarantee to be 1.2716, 0.8794, 0.6608, 0.5249, 0.4344, which are obtained by training UserDP-FedAvg FL models for 5 rounds under noise level σ = 1.3, 1.6, 1.9, 2.2, 2.5, respectively, and the clean accuracy for the corresponding models are 99.50%, 99.06%, 96.52%, 93.39%, 88.99%. For CIFAR-10, we set the privacy guarantee to be 1.600, 1.2127, 1.0395.0.8530, 0.7616, 0.6543, 0.5978, which are obtained by training UserDP-FedAvg FL models for 4 rounds under noise level σ = 1.5, 1.8, 2.0, 2.3, 2.5, 2.8, 3.0, respectively, and the clean accuracy for the corresponding models are 85.59%, 84.52%, 83.23%, 81.90%, 81.27%, 79.23%, 78.46%. Instance-level DPFL To certify the prediction for instance-level DPFL under different privacy guarantee, for MNIST, we set privacy cost to be 0.2029, 0.2251, 0.2484, 0.3593, 0.4589, 0.6373, 1.0587, 3.5691, which are obtained by training InsDP-FedAvg FL models for 3 rounds with noise level σ = 15, 10, 8, 5, 4, 3, 2, 1, respectively (Figure 1(c)). For CIFAR-10, we set privacy cost to be 0.3158, 0.3587, 0.4221, 0.5130, 0.6546, 0.9067, 1.4949, 4.6978, which are obtained by training InsDP-FedAvg FL models for one round with noise level σ = 8, 7, 6, 5, 4, 3, 2, 1, respectively (Figure 1(d)). The clean accuracy (average over 1000 runs) of InsDP-FedAvg under non-DP training ( =∞) and DP training (varying ) on MNIST and CIFAR-10 are reported in Table. 4 and Table. 5 respectively. 
With the aim to study certified attack cost under different number of adversarial instances k, for MNIST, we set the noise level σ to be 10. When k = 0, after training InsDP-FedAvg for T = 4, 9 rounds, we obtain FL models with privacy guarantee = 0.2383, 0.304 and clean accuracy (average over M runs) 96.40%, 96.93% (Figure 8(a)(b)). For CIFAR-10, we set the noise level σ to be 8.0. After training InsDP-FedAvg for one round under k = 0, we obtain FL models with privacy guarantee = 0.3158 and clean accuracy 61.78% (Figure 4(a)(b)). In order to study the empirical attack cost and certified attack cost lower bound under different instance-level DP guarantee, we set the privacy guarantee to be 0.5016, 0.311, 0.2646, 0.2318, 0.2202, 0.2096, 0.205 for MNIST, which are obtained by training InsDP-FedAvg FL models for 6 rounds under noise level σ = 5, 8, 10, 13, 15, 18, 20, respectively, and the clean accuracy for the corresponding models are 99.60%, 98.81%, 97.34%, 92.29%, 88.01%, 80.94%, 79.60% (Figure 8 (c)(d)). For CIFAR-10, we set the privacy guarantee to be 1.261, 0.9146, 0.7187, 0.5923, 0.5038, 0.4385, which are obtained by training InsDP-FedAvg FL models for 2 rounds under noise level σ = 3, 4, 5, 6, 7, 8, respectively, and the clean accuracy for the corresponding models are 84.47%, 80.99%, 76.01%, 68.65%, 63.07%, 60.65% (Figure 4 (c)(d)). With the intention of exploring the upper bound for k given τ under different instance-level DP guarantee, for MNIST, we set noise level σ to be 5, 8, 10, 13, 20, respectively, to obtain instance-DP FL models after 10 rounds with privacy guarantee = 0.6439, 0.3937, 0.3172, 0.2626, 0.2179 and clean accuracy 99.58%, 98.83%, 97.58%, 95.23%, 85.72% (Figure 5(c)). For CIFAR-10, we set noise level σ to be 3, 4, 5, 6, 7, 8 and train InsDP-FedAvg for T = 3 rounds, to obtain FL models with privacy guarantee = 1.5365, 1.1162, 0.8777, 0.7238, 0.6159, 0.5361 and clean accuracy 84.34%, 80.27%, 74.62%, 66.94%, 62.14%, 59.75% (Figure 5(d)). A.3.3 ADDITIONAL IMPLEMENTATION DETAILS (Threat Models) For the attacks against UserDP-FedAvg, by default, the local poison fraction α = 100%, and the scale factor γ = 50. We use same parameters setups for all k attackers. In terms of label flipping attacks, the attackers swap the label of images in source class (digit 1 for MNIST; bird for CIFAR-10) into the target label (digit 0 for MNIST; airplane for CIFAR-10). In terms of backdoor attacks in MNIST and CIFAR-10, the attackers add a backdoor pattern, as shown in Figure 6 (left), in images and swap the label of any sample with such pattern into the target label (digit 0 for MNIST; airplane for CIFAR-10). In terms of distributed backdoor attacks, Figure 6 (right) shows an example when the triangle pattern is evenly decomposed into k = 4 parts, and they are used as the distributed patterns for k = 4 attackers respectively. For the cases where there are more or fewer distributed attackers, the similar decomposition strategy is adopted. For the attacks against InsDP-FedAvg, the same target classes and backdoor patterns are used as UserDP-FedAvg. The parameters setups are the same for all k poisoned instances. (Robustness Certification) We certified 2115/2000/1122 test samples from the MNIST/CIFAR10/Sent140 test sets. In Theorem 3 and Corollary 1 that are related to certified attack cost, C̄ specifies the range of C(·). In the implementation, C̄ is set to be larger than the maximum empirical attack cost evaluated on the test sets (see Table 1 for details). 
For each dataset, we use the same C̄ for cost function C defined in Example 1 and Example 2. When using Monte-Carlo sampling, we run M = 1000 times for certified accuracy, and M = 100 times for certified attack cost in all experiments. (Machines) We simulate the federated learning setup (1 server and N users) on a Linux machine with Intel® Xe
1. What is the focus of the paper regarding federated learning? 2. What are the strengths and weaknesses of the proposed approach, particularly in terms of its robustness criteria and evaluation using MNIST and CIFAR datasets? 3. Do you have any concerns about the certification approach used in the paper? 4. How does the reviewer assess the impact of the proposed method on performance reduction for benign data? 5. Are there any suggestions for additional evaluations or modifications to strengthen the submission?
Summary Of The Paper Review
Summary Of The Paper
The paper states that the model produced by differentially private federated learning is already certified against poisoning attacks.
Review
Strength: the paper proposes criteria for robustness of FL and evaluates the theoretical results using the MNIST and CIFAR datasets. Weakness: I can't entirely agree with the paper's message that the certification comes for free and instead suggest that this certification comes at a relatively high price requiring differential privacy. For example, [1] uses gradient shaping without full DP. However, I would agree that in cases where differential privacy is inevitable, the conclusions seem helpful and could further promote the use of DP. Furthermore, the paper does not evaluate the effects of certification on performance reduction for the benign data. Specifically, I am surprised by the extremely low budget in CIFAR for the presented experiments, i.e., epsilon less than 1, and the extreme amount of noise, i.e., std around 10. Further clarifying A.3.2, which uses only a few rounds, would be helpful. The provided certification might not provide much utility, especially for diverse users already disproportionately affected by a less strict DP budget. Evaluating the proposed method on a language modeling task under a realistic privacy training regime might further strengthen the submission. [1] Sanghyun Hong, Varun Chandrasekaran, Yiğitcan Kaya, Tudor Dumitraş, and Nicolas Papernot. On the effectiveness of mitigating data poisoning attacks with gradient shaping. arXiv preprint arXiv:2002.11497, 2020.
ICLR
Title Certified Robustness for Free in Differentially Private Federated Learning Abstract Federated learning (FL) provides an efficient training paradigm to jointly train a global model leveraging data from distributed users. As the local training data comes from different users who may not be trustworthy, several studies have shown that FL is vulnerable to poisoning attacks where adversaries add malicious data during training. On the other hand, to protect the privacy of users, FL is usually trained in a differentially private manner (DPFL). Given these properties of FL, in this paper, we aim to ask: Can we leverage the innate privacy property of DPFL to provide robustness certification against poisoning attacks? Can we further improve the privacy of FL to improve such certification? To this end, we first investigate both user-level and instance-level privacy for FL, and propose novel randomization mechanisms and analysis to achieve improved differential privacy. We then provide two robustness certification criteria: certified prediction and certified attack cost for DPFL on both levels. Theoretically, given different privacy properties of DPFL, we prove their certified robustness under a bounded number of adversarial users or instances. Empirically, we conduct extensive experiments to verify our theories under different attacks on a range of datasets. We show that the global model with a tighter privacy guarantee always provides stronger robustness certification in terms of the certified attack cost, while it may exhibit tradeoffs regarding the certified prediction. We believe our work will inspire future research of developing certifiably robust DPFL based on its inherent properties. 1 INTRODUCTION Federated Learning (FL), which aims to jointly train a global model with distributed local data, has been widely applied in different applications, such as finance (Yang et al., 2019b), medical analysis (Brisimi et al., 2018), and user behavior prediction (Hard et al., 2018; Yang et al., 2018; 2019a). However, the fact that the local data and the training process are entirely controlled by the local users who may be adversarial raises great concerns from both security and privacy perspectives. In particular, recent studies show that FL is vulnerable to different types of training-time attacks, such as model poisoning (Bhagoji et al., 2019), backdoor attacks (Bagdasaryan et al., 2020; Xie et al., 2019; Wang et al., 2020), and label-flipping attacks (Fung et al., 2020). Further, privacy concerns have motivated the need to keep the raw data on local devices without sharing. However, sharing other indirect information such as gradients or model updates as part of the FL training process can also leak sensitive user information (Zhu et al., 2019; Geiping et al., 2020; Bhowmick et al., 2018; Melis et al., 2019). As a result, approaches based on differential privacy (DP) (Dwork & Roth, 2014), homomorphic encryption (Bost et al., 2015; Rouhani et al., 2018; Gilad-Bachrach et al., 2016), and secure multiparty computation (Ben-Or et al., 1988; Bonawitz et al., 2017) have been proposed to protect privacy of users in federated learning. In particular, differentially private federated learning (DPFL) provides strong information theoretic guarantees on user privacy, while causing relatively low performance overhead (Li et al., 2020b). Several defenses have been proposed to defend against poisoning attacks in FL. 
For instance, various robust aggregation methods (Fung et al., 2020; Pillutla et al., 2019; Blanchard et al., 2017; El Mhamdi et al., 2018; Chen et al., 2017b; Yin et al., 2018; Fu et al., 2019; Li et al., 2020a) identify and down-weight the malicious updates during aggregation or estimate a true “center” of the received updates rather than taking a weighted average. Other methods include robust federated training protocols (e.g., clipping (Sun et al., 2019), noisy perturbation (Sun et al., 2019), and additional evaluation during training (Andreina et al., 2020)) and post-training strategies (e.g., fine-tuning and pruning (Wu et al., 2020)) that repair the poisoned global model. However, as these works mainly focus on providing empirical robustness for FL, they have been shown to be vulnerable to newly proposed strong adaptive attacks (Wang et al., 2020; Xie et al., 2019; Baruch et al., 2019; Fang et al., 2020). Hence, in this paper, we aim to develop certified robustness guarantees for FL against different poisoning attacks. Further, as differentially private federated learning (DPFL) is often used to protect user privacy, we also aim to ask: Can we leverage the innate privacy property of DPFL to provide robustness certification against poisoning attacks for free? Can we further improve the privacy of FL so as to improve its certified robustness? Recent studies suggest that differential privacy (DP) is inherently related with robustness of ML models. Intuitively, DP is designed to protect the privacy of individual data, such that the output of an algorithm remains essentially unchanged when one individual input point is modified. Hence, the prediction of a DP model will be less impacted by a small amount of poisoned training data. Consequently, DP has been used to provide both theoretical and empirical defenses against evasion attacks (Lecuyer et al., 2019a) and data poisoning attacks (Ma et al., 2019; Hong et al., 2020) on centralized ML models. It has also been used as an empirical defense against backdoor attacks (Gu et al., 2019) in federated learning (Bagdasaryan et al., 2020; Sun et al., 2019), although no theoretical guarantee is provided. To the best of our knowledge, despite of the wide application of DPFL,there is no work providing certified robustness for DPFL leveraging its privacy property. In this paper, we aim to leverage the inherent privacy property of DPFL to provide robustness certification for FL against poisoning attacks for free. Our challenges include: (1) performing privacy analysis over training rounds in DPFL algorithms and (2) theoretically guaranteeing certified robustness based on DP properties under a given privacy budget. We propose two robustness certification criteria for FL: certified prediction and certified attack cost under different attack constraints. We consider both user-level DP (Agarwal et al., 2018; Geyer et al., 2017; McMahan et al., 2018; Asoodeh & Calmon, 2020; Liang et al., 2020) which is widely guaranteed in FL, and instance-level DP (Malekzadeh et al., 2021; Zhu et al., 2021) which is less explored in FL. We prove that a FL model satisfying user-level DP is certifiably robust against a bounded number of adversarial users. In addition, we propose InsDP-FedAvg algorithm to improve instance-level DP in FL, and prove that instance-level DPFL is certifiably robust against a bounded number of adversarial instances. We also study the correlation between privacy guarantee and certified robustness of FL. 
While stronger privacy guarantees result in a greater attack cost, overly strong privacy can hurt the certified prediction by introducing too much noise in the training process. Thus, the optimal certified prediction is often achieved under a proper balance between privacy protection and utility loss. Key Contributions. Our work takes the first step to provide certified robustness in DPFL for free against poisoning attacks. We make contributions on both theoretical and empirical fronts. • We propose two criteria for certified robustness of FL against poisoning attacks (Section 4.2). • Given an FL model satisfying user-level DP, we prove that it is certifiably robust against arbitrary poisoning attacks with a bounded number of adversarial users (Section 4.2). • We propose the InsDP-FedAvg algorithm to improve the FL instance-level privacy guarantee (Section 5.1). We prove that instance-level DPFL is certifiably robust against the manipulation of a bounded number of instances during training (Section 5.2). • We conduct extensive experiments on image classification on MNIST and CIFAR-10 and sentiment analysis of tweets to verify our proposed certifications of the two robustness criteria, and compare the certified results of different DPFL algorithms (Section 6). 2 RELATED WORK Differentially Private Federated Learning. Different approaches have been proposed to guarantee user-level privacy for FL. (Geyer et al., 2017; McMahan et al., 2018) clip the norm of each local update, add Gaussian noise to the summed update, and characterize the privacy budget via the moment accountant (Abadi et al., 2016). (McMahan et al., 2018) extends (Geyer et al., 2017) to language models. In CpSGD (Agarwal et al., 2018), each user clips and quantizes the model update and adds noise drawn from a Binomial distribution, achieving both communication efficiency and DP. (Bhowmick et al., 2018) derive DP for FL via Rényi divergence (Mironov, 2017) and study its protection against data reconstruction attacks. (Liang et al., 2020) utilizes Laplacian smoothing for each local update to enhance the model utility. Instead of using the moment accountant to track the privacy budget over FL rounds as in previous work, (Asoodeh & Calmon, 2020) derives the DP parameters by interpreting each round as a Markov kernel and quantifying its impact on the privacy parameters. All these works only focus on providing user-level privacy, leaving its robustness property unexplored. In terms of instance-level privacy for FL, there are only a few works (Malekzadeh et al., 2021; Zhu et al., 2021). Dopamine (Malekzadeh et al., 2021) provides an instance-level privacy guarantee when each user only performs one step of DP-SGD (Abadi et al., 2016) at each FL round. However, it cannot be applied to multi-step SGD for each user, thus it cannot be extended to the general FL setting FedAvg (McMahan et al., 2017). (Zhu et al., 2021) privately aggregate the labels from users in a voting scheme and provide DP guarantees on both the user level and the instance level. However, it is also not applicable to standard FL, since it does not allow aggregating the gradients or updates. Differential Privacy and Robustness. In standard (centralized) learning, Pixel-DP (Lecuyer et al., 2019a) is proposed to certify the model robustness against evasion attacks. However, it is unclear how to leverage it to certify against poisoning attacks.
To certify the robustness against poisoning attacks, (Ma et al., 2019) show that private learners are resistant to data poisoning and analyze the lower bound of the attack cost against poisoning attacks for regression models. Here we certify the robustness in the DPFL setting with such a lower bound as one of our certification criteria and additionally derive its upper bounds. (Hong et al., 2020) show that the off-the-shelf mechanism DP-SGD (Abadi et al., 2016), which clips per-sample gradients and adds Gaussian noise during training, can serve as an empirical defense against poisoning attacks. In federated learning, empirical works (Bagdasaryan et al., 2020; Sun et al., 2019) show that DPFL can mitigate backdoor attacks; however, none of these works provides certified robustness guarantees for DPFL against poisoning attacks. 3 PRELIMINARIES We start by providing some background on differential privacy (DP) and federated learning (FL). Differential Privacy (DP). DP is a formal, mathematically rigorous definition (and standard) of privacy that intuitively guarantees that a randomized algorithm behaves similarly on similar inputs and that the output of the algorithm is about the same whether or not an individual’s data is included as part of the input (Dwork & Roth, 2014). Definition 1 ((ε, δ)-DP (Dwork et al., 2006)). A randomized mechanism M : D → Θ with domain D and range Θ satisfies (ε, δ)-DP if for any pair of adjacent datasets d, d′ ∈ D, and for any possible (measurable) output set E ⊆ Θ, it holds that Pr[M(d) ∈ E] ≤ e^ε Pr[M(d′) ∈ E] + δ. In Definition 1, when M is a training algorithm for an ML model, the domain D and range Θ represent all possible training datasets and all possible trained models, respectively. Group DP for (ε, δ)-DP mechanisms follows immediately from Definition 1, where the privacy guarantee drops with the size of the group. Formally, it says: Lemma 1 (Group DP). For a mechanism M that satisfies (ε, δ)-DP, it satisfies (kε, ((1 − e^{kε})/(1 − e^ε))δ)-DP for groups of size k. That is, for any d, d′ ∈ D that differ by k individuals, and any E ⊆ Θ, it holds that Pr[M(d) ∈ E] ≤ e^{kε} Pr[M(d′) ∈ E] + ((1 − e^{kε})/(1 − e^ε))δ. Federated Learning. FedAvg was introduced by (McMahan et al., 2017) for FL to train a shared global model without direct access to the training data of users. Specifically, given an FL system with N users, at round t, the server sends the current global model w_{t−1} to users in the selected user set U_t, where |U_t| = m = qN and q is the user sampling probability. Each selected user i ∈ U_t locally updates the model for E local epochs with its dataset D_i and learning rate η to obtain a new local model. Then, the user sends the local model update ∆w^i_t to the server. Finally, the server aggregates the updates from all selected users into the new global model w_t = w_{t−1} + (1/m) ∑_{i∈U_t} ∆w^i_t. 4 USER-LEVEL PRIVACY AND CERTIFIED ROBUSTNESS FOR FL 4.1 USER-LEVEL PRIVACY AND BACKGROUND Definition 1 leaves the definition of adjacent datasets flexible, which depends on the application. To protect user-level privacy, adjacent datasets are defined as those differing by the data of one user (McMahan et al., 2018). The formal definition of user-level (ε, δ)-DP (Definition 2) is deferred to Appendix A.1. Following standard DPFL (Geyer et al., 2017; McMahan et al., 2018), we introduce a standard user-level DPFL algorithm, UserDP-FedAvg (Algorithm 1 in Appendix A.1). At each round, the server first clips the update from each user with a threshold S such that its ℓ2-sensitivity is upper bounded by S.
Next, the server sums up the updates, adds Gaussian noise sampled from N(0, σ²S²), and takes the average, i.e., w_t ← w_{t−1} + (1/m)(∑_{i∈U_t} Clip(∆w^i_t, S) + N(0, σ²S²)). Given the user sampling probability q, noise level σ, FL rounds T, and a δ > 0, the privacy analysis of UserDP-FedAvg satisfying (ε, δ)-DP is given by Proposition 1 in Appendix A.1, which is a generalization of (Abadi et al., 2016). The aim of Proposition 1 is to analyze the privacy budget ε in FL, which is accumulated as T increases due to the continuous access to training data. Following (Geyer et al., 2017; McMahan et al., 2018), the moment accountant (Abadi et al., 2016) is used in the privacy analysis. 4.2 CERTIFIED ROBUSTNESS OF USER-LEVEL DPFL AGAINST POISONING ATTACKS Threat Model. We consider poisoning attacks against FL, where k adversarial users have poisoned instances in their local datasets, aiming to fool the trained DPFL global model. Such attacks include backdoor attacks (Gu et al., 2019; Chen et al., 2017a) and label flipping attacks (Biggio et al., 2012; Huang et al., 2011). The detailed description of these attacks is deferred to Appendix A.2. Note that our robustness certification is attack-agnostic under certain attack constraints (e.g., k), and we will verify our certification bounds with different poisoning attacks in Section 6. Next, we propose two criteria for the robustness certification in FL: certified prediction and certified attack cost. Certified Prediction. Consider a classification task with C classes. We define the classification scoring function f : (Θ, R^d) → Υ^C which maps model parameters θ ∈ Θ and an input x ∈ R^d to a confidence vector f(θ, x), where f_c(θ, x) ∈ [0, 1] represents the confidence of class c. We mainly focus on the confidence after normalization, i.e., f(θ, x) ∈ Υ^C = {p ∈ R^C_{≥0} : ‖p‖₁ = 1}, the probability simplex. Since the DP mechanism M is randomized and produces a stochastic FL global model θ = M(D), it is natural to resort to a probabilistic expression as a bridge for quantitative robustness certifications. Following the convention in (Lecuyer et al., 2019b; Ma et al., 2019), we use the expectation of the model’s prediction to provide a quantitative guarantee on the robustness of M. Specifically, we define the expected scoring function F : (Θ, R^d) → Υ^C where F_c(M(D), x) = E[f_c(M(D), x)] is the expected confidence for class c. The expectation is taken over the DP training randomness, e.g., random Gaussian noise and random user subsampling. The corresponding prediction H : (Θ, R^d) → [C] is defined by H(M(D), x) := arg max_{c∈[C]} F_c(M(D), x), which is the top-1 class based on the expected prediction confidence. We will prove that such a prediction allows robustness certification against poisoning attacks. Following our threat model above and the DPFL training in Algorithm 1, we denote the trained global model exposed to poisoning attacks by M(D′). When k = 1, D and D′ are user-level adjacent datasets according to Definition 2. Given that mechanism M satisfies user-level (ε, δ)-DP, based on the innate DP property, the distribution of the stochastic model M(D′) is “close” to the distribution of M(D). Moreover, according to the post-processing property of DP, during testing, given a test sample x, we would expect the values of the expected confidence for each class c, i.e., F_c(M(D′), x) and F_c(M(D), x), to be close, and hence the returned most likely class to be the same, i.e., H(M(D), x) = H(M(D′), x), indicating robust prediction against poisoning attacks.
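To make the UserDP-FedAvg server step concrete, the sketch below implements the clip-sum-noise-average update described above: each user's update is clipped to ℓ2 norm S, the clipped updates are summed, Gaussian noise N(0, σ²S²) is added, and the result is averaged over the m sampled users. This is a minimal illustration under our own assumptions (function names and the dict-of-tensors model representation), not the authors' released code.

```python
import torch

def clip_update(update, clip_S):
    """Clip(Delta, S): scale a user's update so its global L2 norm is at most S."""
    total_norm = torch.sqrt(sum((v ** 2).sum() for v in update.values()))
    scale = 1.0 / max(1.0, (total_norm / clip_S).item())
    return {name: v * scale for name, v in update.items()}

def userdp_fedavg_server_step(w_prev, local_updates, clip_S, noise_sigma):
    """One server round of UserDP-FedAvg (sketch): clip each update, sum them,
    add N(0, sigma^2 S^2) noise, and average over the m selected users."""
    m = len(local_updates)
    clipped = [clip_update(u, clip_S) for u in local_updates]
    w_next = {}
    for name, w in w_prev.items():
        summed = torch.stack([c[name] for c in clipped], dim=0).sum(dim=0)
        noise = torch.randn_like(w) * (noise_sigma * clip_S)
        w_next[name] = w + (summed + noise) / m
    return w_next

# Toy usage with two users and a single two-parameter "model".
w = {"theta": torch.zeros(2)}
updates = [{"theta": torch.tensor([0.5, 1.0])}, {"theta": torch.tensor([-2.0, 0.0])}]
w_next = userdp_fedavg_server_step(w, updates, clip_S=1.0, noise_sigma=1.0)
```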
Theorem 1 (Condition for Certified Prediction under One Adversarial User). Suppose a randomized mechanism M satisfies user-level (ε, δ)-DP. For two user sets B and B′ that differ by one user, let D and D′ be the corresponding training datasets. For a test input x, suppose A, B ∈ [C] satisfy A = arg max_{c∈[C]} F_c(M(D), x) and B = arg max_{c∈[C]: c≠A} F_c(M(D), x). Then, if F_A(M(D), x) > e^{2ε} F_B(M(D), x) + (1 + e^ε)δ, (1) it is guaranteed that H(M(D′), x) = H(M(D), x) = A. When k > 1, we resort to group DP. According to Lemma 1, given a mechanism M satisfying user-level (ε, δ)-DP, it also satisfies user-level (kε, ((1 − e^{kε})/(1 − e^ε))δ)-DP for groups of size k. When k is smaller than a certain threshold, leveraging the group DP property, we would expect that the distribution of the stochastic model M(D′) is not too far away from the distribution of M(D), such that they would make the same prediction for a test sample with probabilistic guarantees. Therefore, the privacy and robustness guarantees are simultaneously met by M. Theorem 2 (Upper Bound of k for Certified Prediction). Suppose a randomized mechanism M satisfies user-level (ε, δ)-DP. For two user sets B and B′ that differ by k users, let D and D′ be the corresponding training datasets. For a test input x, suppose A, B ∈ [C] satisfy A = arg max_{c∈[C]} F_c(M(D), x) and B = arg max_{c∈[C]: c≠A} F_c(M(D), x). Then H(M(D′), x) = H(M(D), x) = A for all k < K, where K is the certified number of adversarial users: K = (1/(2ε)) log [(F_A(M(D), x)(e^ε − 1) + δ) / (F_B(M(D), x)(e^ε − 1) + δ)]. (2) The proofs of Theorems 1 and 2 are deferred to Appendix A.4. Theorems 1 and 2 reflect a tradeoff between privacy and certified prediction: (i) In Theorem 1, if ε is large such that the RHS of Eq (1) > 1, the robustness condition cannot be met since the expected confidence F_A(M(D), x) ∈ [0, 1]. However, to achieve a small ε, i.e., strong privacy, large noise is required during training, which would hurt model utility and thus result in a small confidence margin between the top two classes (e.g., F_A(M(D), x) and F_B(M(D), x)), making it hard to meet the robustness condition. (ii) In Theorem 2, if we fix F_A(M(D), x) and F_B(M(D), x), a smaller ε of FL can certify a larger K. However, a smaller ε also induces a smaller confidence margin, thus reducing K instead. As a result, properly choosing ε helps to certify a large K. Certified Attack Cost. In addition to the certified prediction, we define the attack cost for the attacker, C : Θ → R, which quantifies the difference between the poisoned model and the attack goal. In general, the attacker aims to minimize the expected attack cost J(D) := E[C(M(D))], where the expectation is taken over the randomness of DP training. The cost function can be instantiated according to the concrete attack goal in different types of poisoning attacks, and we provide some examples below. Given a global FL model satisfying user-level (ε, δ)-DP, we will prove the lower bound of the attack cost J(D′) when manipulating the data of at most k users. A higher lower bound of the attack cost indicates a more certifiably robust global model. Example 1. (Backdoor attack (Gu et al., 2019)) C(θ) = (1/n) ∑_{i=1}^n l(θ, z*_i), where z*_i = (x_i + δ_x, y*), δ_x is the backdoor pattern, and y* is the target adversarial label. Minimizing J(D′) drives the prediction on any test data with the backdoor pattern δ_x to the target label y*. Example 2. (Label Flipping attack (Biggio et al., 2012)) C(θ) = (1/n) ∑_{i=1}^n l(θ, z*_i), where z*_i = (x_i, y*) and y* is the target adversarial label.
Minimizing J(D′) drives the prediction on test data x_i to the target label y*. Example 3. (Parameter-Targeting attack (Ma et al., 2019)) C(θ) = (1/2)‖θ − θ*‖², where θ* is the target model. Minimizing J(D′) drives the poisoned model to be close to the target model. Theorem 3 (Attack Cost with k Attackers). Suppose a randomized mechanism M satisfies user-level (ε, δ)-DP. For two user sets B and B′ that differ by k users, let D and D′ be the corresponding training datasets. Let J(D) be the expected attack cost where |C(·)| ≤ C̄. Then, min{e^{kε} J(D) + ((e^{kε} − 1)/(e^ε − 1))δC̄, C̄} ≥ J(D′) ≥ max{e^{−kε} J(D) − ((1 − e^{−kε})/(e^ε − 1))δC̄, 0} if C(·) ≥ 0, and min{e^{−kε} J(D) + ((1 − e^{−kε})/(e^ε − 1))δC̄, 0} ≥ J(D′) ≥ max{e^{kε} J(D) − ((e^{kε} − 1)/(e^ε − 1))δC̄, −C̄} if C(·) ≤ 0. (3) The proof is deferred to Appendix A.4. Theorem 3 provides the upper and lower bounds for the attack cost J(D′). The lower bounds show to what extent the attack can reduce J(D′) by manipulating up to k users, i.e., how successful the attack can be. The lower bounds depend on the attack cost on the clean model J(D), k, and ε. When J(D) is higher, the DPFL model under poisoning attacks is more robust because the lower bounds are accordingly higher; a tighter privacy guarantee, i.e., a smaller ε, can also lead to a higher robustness certification as it increases the lower bounds; with a larger k, the attacker's ability grows and thus leads to a lower possible J(D′). The upper bounds show the least adversarial effect brought by k attackers, i.e., how vulnerable the DPFL model is in the optimistic case (e.g., when the backdoor pattern is less distinguishable). Leveraging the lower bounds in Theorem 3, we can lower-bound the minimum number of attackers required to reduce the attack cost to a certain level associated with the hyperparameter τ in Corollary 1. Corollary 1 (Lower Bound of k Given τ). Suppose a randomized mechanism M satisfies user-level (ε, δ)-DP. Let the attack cost function be C and the expected attack cost be J(·). In order to achieve J(D′) ≤ (1/τ)J(D) for τ ≥ 1 when 0 ≤ C(·) ≤ C̄, or to achieve J(D′) ≤ τJ(D) for 1 ≤ τ ≤ −C̄/J(D) when −C̄ ≤ C(·) ≤ 0, the number of adversarial users should satisfy k ≥ (1/ε) log [((e^ε − 1)J(D)τ + C̄δτ) / ((e^ε − 1)J(D) + C̄δτ)] or k ≥ (1/ε) log [((e^ε − 1)J(D)τ − C̄δ) / ((e^ε − 1)J(D) − C̄δ)], respectively. (4) The proof is deferred to Appendix A.4. Corollary 1 shows that a stronger privacy guarantee (i.e., a smaller ε) requires more attackers to achieve the same attack effectiveness, indicating higher robustness.
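The certification quantities above are closed-form expressions once the expected confidences, the clean attack cost J(D), ε, δ, and C̄ are known. The following sketch evaluates Eq (2), the bounds of Eq (3) for a non-negative cost, and the first case of Eq (4); the formulas are transcribed from the statements as reconstructed above, and the numbers in the example are hypothetical placeholders, not values from the paper.

```python
import math

def certified_num_adv_users(FA, FB, eps, delta):
    """Certified number of adversarial users K from Eq (2) in Theorem 2:
    the prediction is unchanged for any k < K."""
    return (1.0 / (2 * eps)) * math.log((FA * (math.exp(eps) - 1) + delta) /
                                        (FB * (math.exp(eps) - 1) + delta))

def attack_cost_bounds(J_clean, k, eps, delta, C_bar):
    """Lower/upper bounds on the poisoned attack cost J(D') from Eq (3)
    in Theorem 3, for a non-negative cost function (0 <= C <= C_bar)."""
    lo = max(math.exp(-k * eps) * J_clean
             - (1 - math.exp(-k * eps)) / (math.exp(eps) - 1) * delta * C_bar, 0.0)
    hi = min(math.exp(k * eps) * J_clean
             + (math.exp(k * eps) - 1) / (math.exp(eps) - 1) * delta * C_bar, C_bar)
    return lo, hi

def min_attackers_for_tau(J_clean, tau, eps, delta, C_bar):
    """Minimum number of adversarial users needed to reduce the cost to J(D)/tau
    (first case of Eq (4) in Corollary 1, non-negative cost)."""
    num = (math.exp(eps) - 1) * J_clean * tau + C_bar * delta * tau
    den = (math.exp(eps) - 1) * J_clean + C_bar * delta * tau
    return (1.0 / eps) * math.log(num / den)

# Example with hypothetical values for illustration only.
print(certified_num_adv_users(FA=0.9, FB=0.1, eps=0.5, delta=0.0029))
print(attack_cost_bounds(J_clean=2.0, k=3, eps=0.5, delta=0.0029, C_bar=10.0))
print(min_attackers_for_tau(J_clean=2.0, tau=2.0, eps=0.5, delta=0.0029, C_bar=10.0))
```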
This improvement leads to the algorithm InsDP-FedSGD, which achieves a tighter privacy analysis. We defer the algorithm (Algorithm 2) as well as its privacy guarantee to Appendix A.1. Besides the loose privacy bound, Dopamine (Malekzadeh et al., 2021) only allows users to perform one step of DP-SGD (Abadi et al., 2016) during each FL round. This restriction limits the efficiency of the algorithm and increases the communication overhead. In practice, users in FL are typically allowed to update their local models for many steps before submitting updates to reduce the communication cost. To solve this problem, we further improve InsDP-FedSGD to support multiple local steps during each round. Specifically, we propose a novel instance-level DPFL algorithm, InsDP-FedAvg (Algorithm 3 in Appendix A.1), allowing users to train multiple local SGD steps before submitting the updates. In InsDP-FedAvg, each user i performs local DP-SGD so that the local training mechanism M_i satisfies instance-level DP. Then, the server aggregates the updates. We prove that the global mechanism M preserves instance-level DP using the DP parallel composition theorem (Dwork & Lei, 2009) and the moment accountant (Abadi et al., 2016). Algorithm 3 formally presents the InsDP-FedAvg algorithm and the calculation of its privacy budget ε. Specifically, the local privacy cost ε^i_0 is initialized to 0 before FL training. At round t, if user i is not selected, its local privacy cost is kept unchanged, ε^i_t ← ε^i_{t−1}. Otherwise, user i updates its local model by running DP-SGD for V local steps with batch sampling probability p, noise level σ, and clipping threshold S, and ε^i_t is accumulated upon ε^i_{t−1} via its local moment accountant. Next, the server aggregates the updates from the selected users and leverages {ε^i_t}_{i∈[N]} and the parallel composition in Theorem 4 to calculate the global privacy cost ε_t. After T rounds, the mechanism M that outputs the FL global model in Algorithm 3 is instance-level (ε_T, δ)-DP. Theorem 4 (InsDP-FedAvg Privacy Guarantee). In Algorithm 3, during round t, if the local mechanism M_i satisfies (ε^i_t, δ)-DP, then the global mechanism M satisfies (max_{i∈[N]} ε^i_t, δ)-DP. The idea behind Theorem 4 is that when D′ and D differ in one instance, the modified instance only falls into one local dataset, and thus the parallel composition theorem (Dwork & Lei, 2009) can be applied. The privacy guarantee then corresponds to the worst case and is obtained by taking the maximum local privacy cost across all users. The detailed proof is given in Appendix A.1. 5.2 CERTIFIED ROBUSTNESS OF INSTANCE-LEVEL DPFL AGAINST POISONING ATTACKS Threat Model. We consider poisoning attacks in the presence of k poisoned instances. These instances could be controlled by the same or multiple adversarial users. Our robustness certification is agnostic to the attack methods as long as the number of poisoned instances is constrained. According to the group DP property (Lemma 1) and the post-processing property for an FL model with instance-level (ε, δ)-DP, we prove that our robustness certification results proposed for user-level DP are also applicable to instance-level DP. Below is the formal theorem (the proof is given in Appendix A.4). Theorem 5. Suppose D and D′ differ by k instances, and the randomized mechanism M satisfies instance-level (ε, δ)-DP. Then the results in Theorems 1, 2, and 3 and Corollary 1 hold for M, D, and D′. Comparison with existing certified prediction methods in the centralized setting.
The form of Theorem 1 is similar to the robustness condition against test-time attacks in Proposition 1 of (Lecuyer et al., 2019a). This is because the derived robustness conditions are both rooted in the DP properties, but ours focuses on the robustness against training-time attacks in FL, which is more challenging considering the distributed nature and the model training dynamics, i.e., the analysis of the privacy budget over training rounds. Our Theorem 1 is also different from previous randomized smoothing-based certifiably robust centralized learning against backdoor attacks (Weber et al., 2020) and label flipping attacks (Rosenfeld et al., 2020). First, our randomness comes from the inherent training randomness of user/instance-level (ε, δ)-DP, e.g., user subsampling and Gaussian noise. Thus, the certified robustness for free in DPFL means that the DPFL learning algorithm M itself is randomized, and such randomness can lead to robustness certification with a non-trivial quantitative measurement of the randomness. On the contrary, robustness in randomized smoothing-based methods comes from explicitly making the classification process randomized by adding noise to training datasets (Weber et al., 2020; Rosenfeld et al., 2020) or test samples (Lecuyer et al., 2019a; Cohen et al., 2019), which is easier to measure. Second, our Theorems 1 and 2 hold no matter how ε is achieved, which means that we can add different types of noise, leverage different subsampling strategies, or even use different FL training protocols to achieve user/instance-level ε. However, in (Weber et al., 2020; Rosenfeld et al., 2020) different certifications require different types of noise (Laplacian, Gaussian, etc.). Additionally, DP is suitable for characterizing the robustness against poisoning since DP composition theorems can be leveraged to track the privacy cost ε, which captures the training dynamics of the ML model parameters without additional assumptions. Otherwise, one may need to track the deviations of model parameters by analyzing SGD over training, which is theoretically knotty and often requires strong assumptions on Lipschitz continuity, smoothness, or convexity for the trained models. 6 EXPERIMENTS We present evaluations for robustness certifications, especially Theorems 2 and 3 and Corollary 1. We find that 1) there is a tradeoff between certified prediction and privacy on certain datasets; 2) a tighter privacy guarantee always provides stronger certified robustness in terms of the certified attack cost; 3) our lower bounds of certified attack cost are generally tight when k is small. When k is large, they are tight under strong attacks (e.g., a large local poisoning ratio α). Stronger attacks or tighter certification are required to further close the gap between the empirical robustness and the theoretical bounds. Data and Model. We evaluate our robustness certification results with three datasets: image classification on MNIST and CIFAR-10, and a text sentiment analysis task on tweets from Sentiment140 (Go et al.) (Sent140), which involves classifying Twitter posts as positive or negative. For the image datasets, we use the corresponding standard CNN architectures in the differential privacy library (opa, 2021) of PyTorch; for Sent140, we use an LSTM classifier. Following previous work on DP ML (Jagielski et al., 2020; Ma et al., 2019) and backdoor attacks (Tran et al., 2018; Weber et al., 2020) which evaluate on two classes, we focus on binary classification for MNIST (digits 0 and 1) and CIFAR-10 (airplane and bird), and defer the 10-class results to Appendix A.3.
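For concreteness, the binary-classification subsets above (e.g., digits 0 and 1 of MNIST) can be obtained by filtering the torchvision dataset as sketched below. This is an assumed data-preparation step rather than the authors' exact pipeline; the resulting sizes (12665 train / 2115 test) match those reported in Appendix A.3.1.

```python
import torch
from torchvision import datasets, transforms

def binary_mnist(root="./data", classes=(0, 1), train=True):
    """Restrict MNIST to two classes for the binary classification setting (sketch)."""
    ds = datasets.MNIST(root, train=train, download=True,
                        transform=transforms.ToTensor())
    mask = (ds.targets == classes[0]) | (ds.targets == classes[1])
    ds.data, ds.targets = ds.data[mask], ds.targets[mask]
    # Re-index labels to {0, 1} so a two-way classifier can be trained directly.
    ds.targets = (ds.targets == classes[1]).long()
    return ds

train_set = binary_mnist(train=True)   # ~12665 samples for digits 0/1
test_set = binary_mnist(train=False)   # ~2115 samples
```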
We train FL models following Algorithm 1 for user-level privacy and Algorithm 3 for instance-level privacy. We refer the readers to Appendix A.3 for details about the datasets, networks, and parameter setups. Poisoning Attacks. We evaluate several state-of-the-art poisoning attacks against the proposed UserDP-FedAvg and InsDP-FedAvg. We first consider backdoor attacks (BKD) (Bagdasaryan et al., 2020) and label flipping attacks (LF) (Fung et al., 2020). For InsDP-FedAvg, we consider the worst case where the k backdoored or label-flipped instances fall into the dataset of one user. For UserDP-FedAvg, we additionally evaluate the distributed backdoor attack (DBA) (Xie et al., 2019), which is claimed to be a more stealthy backdoor attack against FL. Moreover, we consider BKD, LF, and DBA via the model replacement approach (Bagdasaryan et al., 2020), where the k attackers train the local models using local datasets with an α fraction of poisoned instances and scale the malicious updates with a hyperparameter γ, i.e., ∆w^i_t ← γ∆w^i_t, before sending them to the server. This way, the malicious updates have a stronger impact on the FL model. Note that even when attackers perform scaling, after server clipping, the sensitivity of the updates is still upper-bounded by the clipping threshold S, so the privacy guarantee in Proposition 1 still holds under poisoning attacks via model replacement. Detailed attack setups are presented in Appendix A.3. Evaluation Metrics and Setup. We consider two evaluation metrics based on our robustness certification criteria. The first metric is certified accuracy, which is the fraction of the test set for which the poisoned FL model makes correct and consistent predictions compared with the clean FL model. Given a test set of size n, for the i-th test sample, the ground truth label is y_i, the output prediction is c_i, and the certified number of adversarial users/instances is K_i. We calculate the certified accuracy at k as (1/n) ∑_{i=1}^n 1{c_i = y_i and K_i ≥ k}. The second metric is the lower bound of the attack cost in Theorem 3: J(D′) ≥ max{e^{−kε} J(D) − ((1 − e^{−kε})/(e^ε − 1))δC̄, 0}. We evaluate the tightness of this lower bound by comparing it with the empirical attack cost J(D′). To quantify the robustness, we evaluate the expected class confidence F_c(M(D), x) for class c via Monte-Carlo sampling. We run the private FL algorithms M = 1000 times, obtaining the class confidence f^s_c = f_c(M(D), x) for each run s. We compute the empirical mean to estimate F_c(M(D), x) ≈ (1/M) ∑_{s=1}^M f^s_c and use it to evaluate Theorem 2. In addition, we use Hoeffding’s inequality (Hoeffding, 1994) to calibrate the empirical estimation with a confidence level parameter ψ; the results are deferred to Appendix A.3. In terms of the attack cost, we use Examples 1 and 2 as the definitions of the cost function C for backdoor attacks and label flipping attacks, respectively. We follow a similar protocol to estimate J(D′) for Theorem 3 and Corollary 1. 6.1 ROBUSTNESS EVALUATION OF USER-LEVEL DPFL Certified Prediction. Figure 1(a)(b) presents the user-level certified accuracy under different ε, obtained by training DPFL models with different noise scales σ. The results on the Sent140 dataset are presented in Figure 13 of Appendix A.3.8. We observe that the largest k can be certified when ε is around 0.6298 on MNIST, 0.1451 on CIFAR-10, and 0.2247 on Sent140, which verifies the tradeoff between ε and certified accuracy discussed in Section 4.2.
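The Monte-Carlo estimation and the certified accuracy metric from the Evaluation Metrics paragraph above can be computed as in the sketch below. The Hoeffding slack is the standard one-sided bound for means of [0, 1]-valued variables; exactly how the paper applies the calibration with ψ is detailed only in Appendix A.3, so this is an assumed instantiation, and the sample arrays are placeholders.

```python
import math
import numpy as np

def estimate_expected_confidence(confidences):
    """Monte-Carlo estimate of F_c(M(D), x): `confidences` has shape (M, C),
    one softmax confidence vector per independently trained DPFL model."""
    return confidences.mean(axis=0)

def hoeffding_slack(M, psi):
    """One-sided Hoeffding deviation: the empirical mean of M values in [0, 1]
    exceeds (or falls below) its expectation by more than this slack with
    probability at most psi."""
    return math.sqrt(math.log(1.0 / psi) / (2 * M))

def certified_accuracy_at_k(y_true, y_pred, K_certified, k):
    """Fraction of test points that are correctly predicted and whose certified
    number of adversarial users/instances K_i is at least k."""
    return float(np.mean((y_pred == y_true) & (K_certified >= k)))

# Placeholder example: M = 1000 Monte-Carlo runs, C = 2 classes.
conf = np.random.dirichlet([2.0, 1.0], size=1000)
F_hat = estimate_expected_confidence(conf)
slack = hoeffding_slack(M=1000, psi=0.01)
FA_lower, FB_upper = F_hat[0] - slack, F_hat[1] + slack  # conservative calibration
print(F_hat, FA_lower, FB_upper)
```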
Advanced DP protocols that require less noise while achieving a similar level of privacy are therefore favored, as they improve the privacy, utility, and certified accuracy simultaneously. Furthermore, we compare the certified accuracy of four different user-level DPFL methods (McMahan et al., 2018; Geyer et al., 2017) given the same privacy budget ε. As shown in Figure 14 and Figure 15 of Appendix A.3.9, the models trained by different DPFL algorithms satisfying the same ε have different certified robustness results. This is because, even under the same ε, different DPFL algorithms M produce trained models M(D) with different model performance, thus leading to different certified robustness. More discussion can be found in Appendix A.3.9. Certified Attack Cost. In order to evaluate Theorem 3 and characterize the tightness of our theoretical lower bound of J(D′), we compare it with the empirical attack cost J(D′) under different local poisoning fractions α, attack methods, and scale factors γ in Figure 2. Note that when k = 0, the model is benign, so the empirical cost equals the certified one. We find that 1) when k increases, the attack ability grows, and both the empirical attack cost and the theoretical lower bound decrease. 2) In Figure 2 row 1, given the same k, a higher α, i.e., poisoning more local instances for each attacker, achieves a stronger attack, under which a lower empirical J(D′) can be achieved that is closer to the certified lower bound. This indicates that the lower bound appears tighter when the poisoning attack is stronger. 3) In Figure 2 row 2, we fix α = 100% and evaluate UserDP-FedAvg under different γ and attack methods. It turns out that DP serves as a strong empirical defense for FL, given that J(D′) did not vary much under different γ (1, 50, 100) and different attack methods (BKD, DBA, LF). This is because the clipping operation restricts the magnitude of malicious updates, rendering the model replacement ineffective, while the Gaussian noise perturbs the malicious updates and makes the DPFL model stable, so the FL model is less likely to memorize the poisoned instances. 4) In both rows, the lower bounds are tight when k is small. When k is large, there remains a gap between our theoretical lower bounds and the empirical attack costs under different attacks, which we hope will inspire more effective poisoning attacks or tighter robustness certification. Certified Attack Cost under Different ε. Here we further explore the impacts of different factors on the certified attack cost. Figure 3 presents the empirical attack cost and the certified attack cost lower bound given different ε for user-level DP. It shows that as the privacy guarantee becomes stronger, i.e., smaller ε, the model is more robust, achieving both a higher empirical J(D′) and a higher certified lower bound. In Figure 5 (a)(b), we train user-level (ε, δ)-DPFL models, calculate the corresponding J(D), and plot the lower bound of k given different values of the attack effectiveness hyperparameter τ according to Corollary 1. It shows that 1) when the required attack effectiveness is higher, i.e., τ is larger, more attackers are required. 2) To achieve the same attack effectiveness, fewer attackers are needed for a larger ε, which means that a DPFL model with weaker privacy is more vulnerable to poisoning attacks. 6.2 ROBUSTNESS EVALUATION OF INSTANCE-LEVEL DPFL Certified Prediction. Figure 1(c)(d) shows the instance-level certified accuracy under different ε.
The optimal ε for K is around 0.3593 for MNIST and 0.6546 for CIFAR-10, which is aligned with our observation of the tradeoff between certified accuracy and privacy for user-level DPFL (Section 6.1). Certified Attack Cost. Figure 4 shows the certified attack cost on CIFAR-10. From Figure 4 (a)(b), poisoning more instances (i.e., a larger k) induces lower theoretical and empirical attack costs. From Figure 4 (c)(d), it is clear that instance-level DPFL with a stronger privacy guarantee provides a higher attack cost both empirically and theoretically, meaning that it is more robust against poisoning attacks. Results on MNIST are deferred to Appendix A.3. Figure 5 (c)(d) shows the lower bound of k under different instance-level ε given different τ. Fewer poisoned instances are required to reduce J(D′) to a similar level for a less private DPFL model, indicating that the model is easier to attack. 7 CONCLUSION In this paper, we present the first work on deriving certified robustness in DPFL for free against poisoning attacks. We propose two robustness certification criteria, based on which we prove that an FL model satisfying user-level (instance-level) DP is certifiably robust against a bounded number of adversarial users (instances). Our theoretical analysis characterizes the inherent relation between certified robustness and differential privacy of FL on both the user and instance levels, which is empirically verified with extensive experiments. Our results can be used to improve the trustworthiness of DPFL. Ethics Statement. Our work studies the robustness guarantees of differentially private federated learning models from theoretical and empirical perspectives. All the datasets and packages we use are open-sourced. We do not have ethical concerns in our paper. Reproducibility Statement. Our source code is available as the supplemental material for reproducibility purposes. Our experiments can be reproduced following our detailed training and evaluation setups in Appendix A.3. The complete proofs of the privacy analysis and the certified robustness analysis can be found in Appendix A.1 and Appendix A.4, respectively. A APPENDIX The Appendix is organized as follows: • Appendix A.1 provides the DP definitions and the DPFL algorithms on both the user and instance levels, and the proofs for the corresponding privacy guarantees. • Appendix A.2 specifies our threat models. • Appendix A.3 provides more details on the experimental setups for training and evaluation, additional experimental results on certified accuracy with confidence levels, the robustness evaluation of InsDP-FedAvg on MNIST, the robustness evaluation on 10-class classification, the DP bound comparison between InsDP-FedSGD and Dopamine, the certified accuracy of UserDP-FedAvg on Sent140, and the certified accuracy comparison of different user-level DPFL algorithms. • Appendix A.4 provides the proofs for the certified robustness analysis, including Lemma 1, Theorems 1, 2, 3, and 5, and Corollary 1. • Appendix A.5 provides the comparison to related work (Lecuyer et al., 2019a; Ma et al., 2019). A.1 DIFFERENTIALLY PRIVATE FEDERATED LEARNING A.1.1 USERDP-FEDAVG Definition 2 (User-level (ε, δ)-DP). Let B, B′ be two user sets with size N. Let D and D′ be the datasets that are the union of local training examples from all users in B and B′, respectively. Then, D and D′ are adjacent if B and B′ differ by one user. The mechanism M satisfies user-level (ε, δ)-DP if it meets Definition 1 with D and D′ as adjacent datasets. Algorithm 1: UserDP-FedAvg.
Input: Initial model w0, user sampling probability q, privacy parameter δ, clipping threshold S, noise level σ, local datasets D1, ..., DN , local epochs E, learning rate η. Output: FL model wT and privacy cost Server executes: for each round t = 1 to T do m← max(q ·N, 1); Ut ← (random subset of m users); for each user i ∈ Ut in parallel do ∆wit ← UserUpdate(i, wt−1) ; wt ← wt−1+ 1m (∑ i∈Ut Clip(∆w i t, S) +N ( 0, σ2S2 )) ; M.accum priv spending(σ, q, δ) ; =M.get privacy spent() ; return wT , Procedure UserUpdate(i, wt−1) w ← wt−1 ; for local epoch e = 1 to E do for batch b ∈ local dataset Di do w ← w − η∇l(w; b) ∆wit ← w − wt−1 ; return ∆wit Procedure Clip(∆, S) return ∆/max ( 1, ‖∆‖2 S ) In Algorithm 1,M.accum priv spending() andM.get privacy spent() are the calls on the moments accountantM refer to the API of (Abadi et al., 2016). Given the user sampling probability q, noise level σ, FL rounds T , and a δ > 0, UserDP-FedAvg satisfies ( , δ)-DP as below, which is a generalization of (Abadi et al., 2016). The aim is to analyze privacy budget , which is accumulated as T increases due to the continuous access to training data. Proposition 1 (UserDP-FedAvg Privacy Guarantee). There exist constants c1 and c2 so that given user sampling probability q, and FL rounds T , for any ε < c1q2T , if σ ≥ c2 q √ T log(1/δ) , the randomized mechanismM in Algorithm 1 is ( , δ)-DP for any δ > 0. Proof. The proof follows the proof of Theorem 1 in (Abadi et al., 2016), while the notations have slightly different meanings under FL settings. In Proposition 1, we use q to represent user-level sampling probability and T to represent FL training rounds. Note that the above privacy analysis can be further improved by Rényi Differential Privacy (Mironov et al., 2019). Discussion (Li et al., 2020b) divide the user-level privacy into global privacy (Geyer et al., 2017; McMahan et al., 2018) and local privacy (Agarwal et al., 2018). In both local and global privacy, the norm of each update is clipped. The difference lies in that the noise is added on the aggregated model updates in global privacy because a trusted server is assumed, while the noise is added on each local update in local privacy because it assumes that the central server might be malicious. Algorithm 1 belongs to global privacy. A.1.2 INSDP-FEDSGD Definition 3 (Instance-level ( , δ)-DP). Let D be the dataset that is the union of local training examples from all users. Then, D and D′ are adjacent if they differ by one instance. The mechanism M is instance-level ( , δ)-DP if it meets Definition 1 with D and D′ as adjacent datasets. Algorithm 2: InsDP-FedSGD. Input: Initial model w0, user sampling probability q, privacy parameter δ, local clipping threshold S, local noise level σ, local datasets D1, ..., DN , learning rate η, batch sampling probability p. 
Output: FL model wT and privacy cost Server executes: for each round t = 1 to T do m← max(q ·N, 1); Ut ← (random subset of m clients); for each user i ∈ Ut in parallel do ∆wit ← UserUpdate(i, wt−1) ; wt ← wt−1 + 1m ∑ i∈Ut ∆w i t ; M.accum priv spending( √ mσ, pq, δ) =M.get privacy spent() ; return wT , Procedure UserUpdate(i, wt−1) w ← wt−1 ; bit ←(uniformly sample a batch fromDi with probability p = L/|Di|); for each xj ∈ bit do g(xj)← ∇l(w;xj); ḡ(xj)← Clip(g(xj), S) ; g̃ ← 1L (∑ j ḡ(xj) +N ( 0, σ2S2 )) ; w ← w − ηg̃ ; ∆wit ← w − wt−1 ; return ∆wit Procedure Clip(∆, S) return ∆/max ( 1, ‖∆‖2 S ) Under FedSGD, when each local model performs one step of DP-SGD (Abadi et al., 2016), the randomized mechanismM that outputs the global model preserves the instance-level DP. We can regard the one-step update for the global model in Algorithm 2 as: wt ← wt−1 − 1 m ∑ i∈Ut η L ∑ xj∈bit ḡ(xj) +N ( 0, σ2S2 ) (5) Proposition 2 (InsDP-FedSGD Privacy Guarantee). There exist constants c1 and c2 so that given batch sampling probability p, and user sampling probability q, the number of selected users each round m, and FL rounds T , for any ε < c1(pq)2T , if σ ≥ c2 pq √ T log(1/δ) √ m , the randomized mechanismM in Algorithm 2 is ( , δ)-DP for any δ > 0. Proof. i) In instance-level DP, we consider the sampling probability of each instance under the combination of user-level sampling and batch-level sampling. Since the user-level sampling probability is q and the batch-level sampling probablity is p, each instance is sampled with probability pq. ii) Additionally, since the sensitivity of instance-wise gradient w.r.t one instance is S, after local gradient descent and server FL aggregation, the equivalent sensitivity of global model w.r.t one instance is S′ = ηSLm according to Eq (5). iii) Moreover, since the local noise is ni ∼ N (0, σ 2S2) , then the “virtual” global noise is n = ηmL ∑ i∈Ut ni according to Eq (5), so n ∼ N (0, η2σ2S2 mL2 ). Let η2σ2S2 mL2 = σ ′2S′ 2 such that n ∼ N (0, σ′2S′2). Because S′ = ηSLm , the equivalent global noise level is σ′2 = σ2m, i.e., σ′ = σ √ m. In Proposition 2, we use pq to represent instance-level sampling probability, T to represent FL training rounds, σ √ m to represent the equivalent global noise level. The rest of the proof follows the proof of Theorem 1 in (Abadi et al., 2016). We defer the DP bound evaluation comparison between InsDP-FedSGD and Dopamine to Appendix A.3.7. A.1.3 INSDP-FEDAVG Algorithm 3: InsDP-FedAvg. Input: Initial model w0, user sampling probability q, privacy parameter δ, local clipping threshold S, local noise level σ, local datasets D1, ..., DN , local steps V , learning rate η, batch sampling probability p. Output: FL model wT and privacy cost Server executes: for each round t = 1 to T do m← max(q ·N, 1); Ut ← (random subset of m users); for each user i ∈ Ut in parallel do ∆wit, i t ← UserUpdate(i, wt−1) ; for each user i /∈ Ut do it ← it−1 ; wt ← wt−1 + 1m ∑ i∈Ut ∆w i t ; t =M.parallel composition({ it}i∈[N ]) = T ; return wT , Procedure UserUpdate(i, wt−1) w ← wt−1 ; for each local step v = 1 to V do b ←(uniformly sample a batch from Di with probability p = L/|Di|); for each xj ∈ b do g(xj)← ∇l(w;xj); ḡ(xj)← Clip(g(xj), S) ; g̃ ← 1L ( ∑ j ḡ(xj) +N ( 0, σ2S2 ) ); w ← w − ηg̃ ; Mi.accum priv spending(σ, p, δ) ; it =Mi.get privacy spent() ; ∆wit ← w − wt−1 ; return ∆wit, it Procedure Clip(∆, S) return ∆/max ( 1, ‖∆‖2 S ) Lemma 2 (InsDP-FedAvg Privacy Guarantee when T = 1). 
In Algorithm 3, when T = 1, suppose local mechanismMi satisfies ( i, δ)-DP, then global mechanismM satisfies (maxi∈[N ] i, δ)-DP. Proof. We can regard federated learning as partitioning a dataset D into N disjoint subsets {D1, D2, . . . , DN}. N mechanisms {M1, . . . ,MN} are operated on these N parts separately and eachMi satisfies its own i-DP for i ∈ [1, N ]. Note that if i-th user is not selected , i = 0 because local dataset Di is not accessed and there is no privacy cost. Without loss of generality, we assume the modified data sample x′ (x → x′ causes D → D′) is in the local dataset of k-th client Dk. Let D,D′ be two neighboring datasets (Dk, D′k are also two neighboring datasets). M is randomized mechanism that outputs the global model, andMi is the randomized mechanism that outputs the local model update ∆wi. Suppose w0 is the initialized and deterministic global model, and {z1, . . . , zN} are randomized local updates. We have a sequence of computations {z1 = M1(D1), z2 = M2(D2; z1), z3 = M3(D3; z1, z2) . . .} and z = M(D) = w0 + ∑N i=1 zi. Note that if i-th user is not selected , zi = 0. According to the parallel composition (Tu), we have Pr[M(D) = z] = Pr[M1(D1) = z1] Pr[M2(D2; z1) = z2] . . .Pr[MN (DN ; z1, . . . , zN−1) = zN ] ≤ exp( k) Pr[Mk(D′k; z1, . . . , zk−1) = zk] ∏ i6=k Pr[Mi(Di; z1, . . . , zi−1) = zi] = exp( k) Pr[M(D′) = z] SoM satisfies k-DP when the modified data sample lies in the subset Dk. Consider the worst case of where the modified data sample could fall in, we know thatM satisfies (maxi∈[N ] i)-DP. We recall Theorem 4. Theorem 4 (InsDP-FedAvg Privacy Guarantee). In Algorithm 3, during round t, if the local mechanismMi satisfies ( it, δ)-DP, then the global mechanismM satisfies ( maxi∈[N ] i t, δ ) -DP. Proof. Again, without loss of generality, we assume the modified data sample x′ (x → x′ causes D → D′) is in the local dataset of k-th user Dk. We first consider the case when all users are selected. At each round t, N mechanisms are operated on N disjoint parts and eachMit satisfies own i-DP where i is the privacy cost for accessing the local dataset Di for one round (not accumulating over previous rounds). Let D,D′ be two neighboring datasets (Dk, D′k are also two neighboring datasets). Suppose z0 = Mt−1(D) is the aggregated randomized global model at round t − 1, and {z1, . . . , zN} are the randomized local updates at round t, we have a sequence of computations {z1 = M1t (D1; z0), z2 = M2t (D2; z0, z1), z3 = M3t (D3; z0, z1, z2) . . .} and z =Mt(D) = z0 + ∑N i zi. We first consider the sequential composition (Dwork & Roth, 2014) to accumulate the privacy cost over FL rounds. According to parallel composition, we have Pr[Mt(D) = z] = Pr[Mt−1(D) = z0] N∏ i=1 Pr[Mit(Di; z0, z1, . . . , zi−1) = zi] = Pr[Mt−1(D) = z0] Pr[Mkt (Dk; z0, z1, . . . , zk−1) = zk] ∏ i 6=k Pr[Mit(Di; z0, z1, . . . , zi−1) = zi] ≤ exp( t−1) Pr[Mt−1(D′) = z0] exp( k) Pr[Mkt (D′k; z0, z1, . . . , zk−1) = zk] ∏ i 6=k Pr[Mit(Di; z0, z1, . . . , zi−1) = zi] = exp( t−1 + k) Pr[Mt(D′) = z] Therefore,Mt satisfies t-DP, where t = t−1 + k. Because the modified data sample always lies in Dk over t rounds and 0 = 0, we can have t = t k, which means that the privacy guarantee of global mechanismMt is only determined by the local mechanism of k-th user over t rounds. Moreover, moment accountant (Abadi et al., 2016) is known to reduce the privacy cost from O(t) to O( √ t). 
We can use the more advanced composition, i.e., moment accountant, instead of the sequential composition, to accumulate the privacy cost for local mechanismMk over t FL rounds. In addition, we consider user subsampling. As described in Algorithm 3, if the user i is not selected at round t, then its local privacy cost is kept unchanged at this round. Take the worst case of where x′ could lie in, at round t,M satisfies t-DP, where t = maxi∈[N ] it, local mechanism M i satisfies it-DP, and the local privacy cost i t is accumulated via local moment accountant in i-th user over t rounds. A.2 THREAT MODELS We consider targeted poisoning attacks of two types. In backdoor attacks (Gu et al., 2019; Chen et al., 2017a), the goal is to embed a backdoor pattern (i.e., a trigger) during training such that any test input with such pattern will be mis-classified as the target. In label flipping attacks (Biggio et al., 2012; Huang et al., 2011), the labels of clean training examples from one source class are flipped to the target class while the features of the data are kept unchanged. In FL, the purpose of backdoor attacks is to manipulate local models with backdoored local data, so that the global model would behave normally on untampered data samples while achieving high attack success rate on clean data (Bagdasaryan et al., 2020). Given the same purpose, distributed backdoor attack (DBA) (Xie et al., 2019) decomposes the same backdoor pattern to several smaller ones and embeds them to different local training sets for different adversarial users. The goal of label flipping attack against FL is to manipulate local datasets with flipped labels such that the global model will mis-classify the test data in the source class as the target class. The model replacement (Bagdasaryan et al., 2020) is a more powerful approach to perform the above attacks, where the attackers first train the local models using the poisoned datasets and then scale the malicious updates before sending them to the server. This way, the attacker’s updates would have a stronger impact on the FL model. We use the model replacement method to perform poisoning attacks and study the effectiveness of DPFL. For UserDP-FedAvg, we consider backdoor, distributed backdoor, and label flipping attacks via the model replacement approach. Next, we formalize the attack process and introduce the notations. Suppose the attacker controls k adversarial users, i.e., there are k attackers out of N users. Let B be the original user set of N benign users, and B′ be the user set that contains k attackers. Let D := {D1, D2, . . . , DN} be the union of original benign local datasets across all users. For a data sample zij := {xij , yij} in Di, we denote its backdoored version as z′ i j := {xij + δx, y∗}, where δx is the backdoor pattern, y∗ is the targeted label; the distributed backdoor attack (DBA) version as z′ i j := {xij + δix, y∗}, where δix is the distributed backdoor pattern for attacker i; the label-flipped version as z′ij := {xij , y∗}. Note that the composition of all DBA patterns is equivalent to the backdoor pattern, i.e., ∑k i=1 δ i x = δx. We assume attacker i has αi fraction of poisoned samples in its local dataset D′i. Let D ′ := {D′1, . . . , D′k−1, D′k, Dk+1, . . . , DN} be the union of local datasets when k attackers are present. The adversarial user i performs model replacement by scaling the model update with hyperparameter γ before submitting it to the server, i.e., ∆wit ← γ∆wit. 
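The model replacement step just described (the adversarial user scales its update by γ before submission) can be sketched as below; the helper name and the dict-of-tensors representation are our own illustration, and the closing comment simply restates the observation from Section 6 that server-side clipping keeps the sensitivity bound, and hence Proposition 1, intact.

```python
import torch

def model_replacement_update(local_weights, global_weights, gamma):
    """Model replacement (sketch): the adversarial user computes its update
    Delta w = w_local - w_global on the poisoned local dataset and scales it
    by gamma before submitting it to the server."""
    return {name: gamma * (local_weights[name] - global_weights[name])
            for name in global_weights}

# Under UserDP-FedAvg the server still clips the received (scaled) update to
# L2 norm S, so the sensitivity bound and the privacy guarantee are unaffected.
```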
In our threat model, we consider the attacker that follows our training protocol and has no control over which users are sampled. For InsDP-FedAvg, we consider both backdoor and label flipping attacks. Since distributed backdoor and model replacement attacks are proposed for adversarial users rather than adversarial instances, we do not consider them for instance-level DPFL. There are k backdoored or label-flipped instances {z′1, z′2, . . . , z′k}, which could be controlled by same or multiple users. In our threat model, we consider the attacker that follows our training protocol and has no control over which data partition (or batch) is sampled. Note that we do not assume that the adversaries’ poisoning data always be sampled. In our algorithms, each batch is randomly subsampled, so the adversaries cannot control if poisoned data are sampled in each step. A.3 EXPERIMENTAL DETAILS AND ADDITIONAL RESULTS A.3.1 DATASETS AND MODELS We evaluate our robustness certification results with two datasets: MNIST (LeCun & Cortes, 2010) and CIFAR-10 (Krizhevsky, 2009). For each dataset, we use corresponding standard CNN architectures in the differential privacy library (opa, 2021) of PyTorch (Paszke et al., 2019). MNIST: We study an image classification problem of handwritten digits in MNIST. It is a dataset of 70000 28x28 pixel images of digits in 10 classes, split into a train set of 60000 images and a test set of 10000 images. Except Section A.3.6, we consider binary classification on classes 0 and 1, making our train set contain 12665 samples, and the test set 2115 samples. The model consists of two Conv-ReLu-MaxPooling layers and two linear layers. CIFAR-10: We study image classification of vehicles and animals in CIFAR-10. This is a harder dataset than MNIST, consisting of 60000 32x32x3 images, split into a train set of 50000 and a test set of 10000. Except Section A.3.6, we consider binary classification on class airplane and bird, making our train set contain 10000 samples, and the test set 2000 samples. The model consists of four Conv-ReLu-AveragePooling layers and one linear layer. When training on CIFAR10, we follow the standard practice for differential privacy (Abadi et al., 2016; Jagielski et al., 2020) and fine-tune a whole model pre-trained non-privately on the more complex CIFAR100, a similarly sized but more complex benchmark dataset. We can achieve reasonable performance on CIFAR-10 datasets by only training (fine-tuning) few rounds. Sent140: We consider a text sentiment analysis task on tweets from Sentiment140 (Go et al.) (Sent140) which involves classifying Twitter posts as positive or negative. We use a two layer LSTM binary classifier containing 256 hidden units with pretrained 300D GloVe embedding (Pennington et al., 2014). Each twitter account corresponds to a device. We use the same network architecture, non-iid dataset partition method, number of selected user per round, learning rate, batch size, etc. as in (Li et al., 2018), which are summarized in Table 1. A.3.2 TRAINING DETAILS We simulate the federated learning setup by splitting the training datasets for N FL users in an i.i.d manner. FL users run SGD with learning rate η, momentum 0.9, weight decay 0.0005 to update the local models. The training parameter setups are summarized in Table 1. 
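The i.i.d. split across the N FL users and the local SGD configuration (momentum 0.9, weight decay 0.0005) described above could be set up as in the sketch below; the function names, the use of torch.utils.data.random_split, and the example user count are our own assumptions rather than the authors' exact code.

```python
import torch
from torch.utils.data import random_split

def iid_partition(dataset, num_users, seed=0):
    """Split a dataset into `num_users` roughly equal i.i.d. shards, one per FL user."""
    g = torch.Generator().manual_seed(seed)
    base, rem = len(dataset) // num_users, len(dataset) % num_users
    sizes = [base + (1 if i < rem else 0) for i in range(num_users)]
    return random_split(dataset, sizes, generator=g)

def make_local_optimizer(model, lr):
    """Local SGD with the momentum and weight-decay settings stated above."""
    return torch.optim.SGD(model.parameters(), lr=lr,
                           momentum=0.9, weight_decay=0.0005)

# Example with a hypothetical number of users (the actual N is given in Table 1):
# local_shards = iid_partition(train_set, num_users=100)
```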
Following (McMahan et al., 2018) that use δ ≈ 1N1.1 as privacy parameter, for UserDP-FedAvg we set δ = 0.0029 according to the total number of users, and for InsDP-FedAvg we set δ = 0.00001 according the total number of training samples. Next we summarize the privacy guarantees and clean accuracy offered when we study the certified prediction and certified attack cost, which are also the training parameters setups when k = 0 in Figure 1, 2, 3, 4, 5, 8. User-level DPFL In order to study the user-level certified prediction under different privacy guarantee, for MNIST, we set to be 0.2808, 0.4187, 0.6298, 0.8694, 1.8504, 2.8305, 4.8913, 6.9269, which are obtained by training UserDP-FedAvg FL model for 3 rounds with noise level σ = 3.0, 2.3, 1.8, 1.5, 1.0, 0.8, 0.6, 0.5, respectively (Figure 1(a)). For CIFAR-10, we set to be 0.1083, 0.1179, 0.1451, 0.2444, 0.3663, 0.4527, 0.5460, 0.8781, which are obtained by training UserDP-FedAvg FL model for one round with noise level σ = 10.0, 8.0, 6.0, 4.0, 3.0, 2.6, 2.3, 1.7, respectively (Figure 1(b)). The clean accuracy (average over 1000 runs) of UserDP-FedAvg under non-DP training ( = ∞) and DP training (varying ) on MNIST and CIFAR-10 are reported in Table. 2 and Table. 3 respectively. To certify the attack cost under different number of adversarial users k (Figure 2), for MNIST, we set the noise level σ to be 2.5. When k = 0, after training UserDP-FedAvg for T = 3, 4, 5 rounds, we obtain FL models with privacy guarantee = 0.3672, 0.4025, 0.4344 and clean accuracy (average over M runs) 86.69%, 88.76%, 88.99%. For CIFAR-10, we set the noise level σ to be 3.0. After training UserDP-FedAvg for T = 3, 4 rounds under k = 0, we obtain FL models with privacy guarantee = 0.5346, 0.5978 and clean accuracy 78.63%, 78.46%. With the interest of certifying attack cost under different user-level DP guarantee (Figure 3, Figure 5), we explore the empirical attack cost and the certified attack cost lower bound given different . For MNIST, we set the privacy guarantee to be 1.2716, 0.8794, 0.6608, 0.5249, 0.4344, which are obtained by training UserDP-FedAvg FL models for 5 rounds under noise level σ = 1.3, 1.6, 1.9, 2.2, 2.5, respectively, and the clean accuracy for the corresponding models are 99.50%, 99.06%, 96.52%, 93.39%, 88.99%. For CIFAR-10, we set the privacy guarantee to be 1.600, 1.2127, 1.0395.0.8530, 0.7616, 0.6543, 0.5978, which are obtained by training UserDP-FedAvg FL models for 4 rounds under noise level σ = 1.5, 1.8, 2.0, 2.3, 2.5, 2.8, 3.0, respectively, and the clean accuracy for the corresponding models are 85.59%, 84.52%, 83.23%, 81.90%, 81.27%, 79.23%, 78.46%. Instance-level DPFL To certify the prediction for instance-level DPFL under different privacy guarantee, for MNIST, we set privacy cost to be 0.2029, 0.2251, 0.2484, 0.3593, 0.4589, 0.6373, 1.0587, 3.5691, which are obtained by training InsDP-FedAvg FL models for 3 rounds with noise level σ = 15, 10, 8, 5, 4, 3, 2, 1, respectively (Figure 1(c)). For CIFAR-10, we set privacy cost to be 0.3158, 0.3587, 0.4221, 0.5130, 0.6546, 0.9067, 1.4949, 4.6978, which are obtained by training InsDP-FedAvg FL models for one round with noise level σ = 8, 7, 6, 5, 4, 3, 2, 1, respectively (Figure 1(d)). The clean accuracy (average over 1000 runs) of InsDP-FedAvg under non-DP training ( =∞) and DP training (varying ) on MNIST and CIFAR-10 are reported in Table. 4 and Table. 5 respectively. 
With the aim to study certified attack cost under different number of adversarial instances k, for MNIST, we set the noise level σ to be 10. When k = 0, after training InsDP-FedAvg for T = 4, 9 rounds, we obtain FL models with privacy guarantee = 0.2383, 0.304 and clean accuracy (average over M runs) 96.40%, 96.93% (Figure 8(a)(b)). For CIFAR-10, we set the noise level σ to be 8.0. After training InsDP-FedAvg for one round under k = 0, we obtain FL models with privacy guarantee = 0.3158 and clean accuracy 61.78% (Figure 4(a)(b)). In order to study the empirical attack cost and certified attack cost lower bound under different instance-level DP guarantee, we set the privacy guarantee to be 0.5016, 0.311, 0.2646, 0.2318, 0.2202, 0.2096, 0.205 for MNIST, which are obtained by training InsDP-FedAvg FL models for 6 rounds under noise level σ = 5, 8, 10, 13, 15, 18, 20, respectively, and the clean accuracy for the corresponding models are 99.60%, 98.81%, 97.34%, 92.29%, 88.01%, 80.94%, 79.60% (Figure 8 (c)(d)). For CIFAR-10, we set the privacy guarantee to be 1.261, 0.9146, 0.7187, 0.5923, 0.5038, 0.4385, which are obtained by training InsDP-FedAvg FL models for 2 rounds under noise level σ = 3, 4, 5, 6, 7, 8, respectively, and the clean accuracy for the corresponding models are 84.47%, 80.99%, 76.01%, 68.65%, 63.07%, 60.65% (Figure 4 (c)(d)). With the intention of exploring the upper bound for k given τ under different instance-level DP guarantee, for MNIST, we set noise level σ to be 5, 8, 10, 13, 20, respectively, to obtain instance-DP FL models after 10 rounds with privacy guarantee = 0.6439, 0.3937, 0.3172, 0.2626, 0.2179 and clean accuracy 99.58%, 98.83%, 97.58%, 95.23%, 85.72% (Figure 5(c)). For CIFAR-10, we set noise level σ to be 3, 4, 5, 6, 7, 8 and train InsDP-FedAvg for T = 3 rounds, to obtain FL models with privacy guarantee = 1.5365, 1.1162, 0.8777, 0.7238, 0.6159, 0.5361 and clean accuracy 84.34%, 80.27%, 74.62%, 66.94%, 62.14%, 59.75% (Figure 5(d)). A.3.3 ADDITIONAL IMPLEMENTATION DETAILS (Threat Models) For the attacks against UserDP-FedAvg, by default, the local poison fraction α = 100%, and the scale factor γ = 50. We use same parameters setups for all k attackers. In terms of label flipping attacks, the attackers swap the label of images in source class (digit 1 for MNIST; bird for CIFAR-10) into the target label (digit 0 for MNIST; airplane for CIFAR-10). In terms of backdoor attacks in MNIST and CIFAR-10, the attackers add a backdoor pattern, as shown in Figure 6 (left), in images and swap the label of any sample with such pattern into the target label (digit 0 for MNIST; airplane for CIFAR-10). In terms of distributed backdoor attacks, Figure 6 (right) shows an example when the triangle pattern is evenly decomposed into k = 4 parts, and they are used as the distributed patterns for k = 4 attackers respectively. For the cases where there are more or fewer distributed attackers, the similar decomposition strategy is adopted. For the attacks against InsDP-FedAvg, the same target classes and backdoor patterns are used as UserDP-FedAvg. The parameters setups are the same for all k poisoned instances. (Robustness Certification) We certified 2115/2000/1122 test samples from the MNIST/CIFAR10/Sent140 test sets. In Theorem 3 and Corollary 1 that are related to certified attack cost, C̄ specifies the range of C(·). In the implementation, C̄ is set to be larger than the maximum empirical attack cost evaluated on the test sets (see Table 1 for details). 
For each dataset, we use the same C̄ for cost function C defined in Example 1 and Example 2. When using Monte-Carlo sampling, we run M = 1000 times for certified accuracy, and M = 100 times for certified attack cost in all experiments. (Machines) We simulate the federated learning setup (1 server and N users) on a Linux machine with Intel® Xe
1. What is the focus and contribution of the paper on differentially private federated learning? 2. What are the strengths of the proposed approach, particularly in terms of its empirical evaluations and practical usefulness? 3. What are the weaknesses of the paper, especially regarding the surprising nature of the results and the lack of theoretical/empirical intuition for utility? 4. Do you have any concerns or suggestions for improving the paper's explanation of its contributions and comparisons with prior work?
Summary Of The Paper Review
Summary Of The Paper This paper studies differentially private federated learning and its intrinsic robustness against data poisoning attacks. Theoretically, the authors build two definitions for certified robustness against data poisoning attacks and draw the connection with user-level and instance-level differential privacy. The key proof is based on the definitions of individual privacy and group privacy. Empirically, the authors verify the correctness of the bounds by performing real attacks. I think the main contribution is to establish the robustness bound.
Review Update: Thank you for your response. I read the authors' response and the other reviewers' comments. My second question has been well addressed. However, as other reviewers suggested, the contributions of this paper and the comparisons with prior work should be explained more clearly. I would not consider the theory part the main contribution, because given the group privacy property of DP, this result is not very surprising. I would suggest the authors highlight the empirical contributions and the federated learning setting.
Strengths: I think this paper provides an interesting perspective for robust machine learning. The empirical evaluations are solid. The proposed algorithms are practically useful.
Weaknesses: I think the results are correct but not surprising. There have been many papers (theoretical and empirical) showing either that robust algorithms can be made private easily or that private algorithms provide intrinsic robustness. It would be better if the authors could provide some theoretical/empirical intuition for the utility. It is known that both robust learning algorithms and private algorithms cause a performance drop. It would be nice if the authors could provide the non-private (epsilon = infinity) clean accuracy as a comparison.
ICLR
Title Certified Robustness for Free in Differentially Private Federated Learning Abstract Federated learning (FL) provides an efficient training paradigm to jointly train a global model leveraging data from distributed users. As the local training data comes from different users who may not be trustworthy, several studies have shown that FL is vulnerable to poisoning attacks where adversaries add malicious data during training. On the other hand, to protect the privacy of users, FL is usually trained in a differentially private manner (DPFL). Given these properties of FL, in this paper, we aim to ask: Can we leverage the innate privacy property of DPFL to provide robustness certification against poisoning attacks? Can we further improve the privacy of FL to improve such certification? To this end, we first investigate both user-level and instance-level privacy for FL, and propose novel randomization mechanisms and analysis to achieve improved differential privacy. We then provide two robustness certification criteria: certified prediction and certified attack cost for DPFL on both levels. Theoretically, given different privacy properties of DPFL, we prove their certified robustness under a bounded number of adversarial users or instances. Empirically, we conduct extensive experiments to verify our theories under different attacks on a range of datasets. We show that the global model with a tighter privacy guarantee always provides stronger robustness certification in terms of the certified attack cost, while it may exhibit tradeoffs regarding the certified prediction. We believe our work will inspire future research of developing certifiably robust DPFL based on its inherent properties. 1 INTRODUCTION Federated Learning (FL), which aims to jointly train a global model with distributed local data, has been widely applied in different applications, such as finance (Yang et al., 2019b), medical analysis (Brisimi et al., 2018), and user behavior prediction (Hard et al., 2018; Yang et al., 2018; 2019a). However, the fact that the local data and the training process are entirely controlled by the local users who may be adversarial raises great concerns from both security and privacy perspectives. In particular, recent studies show that FL is vulnerable to different types of training-time attacks, such as model poisoning (Bhagoji et al., 2019), backdoor attacks (Bagdasaryan et al., 2020; Xie et al., 2019; Wang et al., 2020), and label-flipping attacks (Fung et al., 2020). Further, privacy concerns have motivated the need to keep the raw data on local devices without sharing. However, sharing other indirect information such as gradients or model updates as part of the FL training process can also leak sensitive user information (Zhu et al., 2019; Geiping et al., 2020; Bhowmick et al., 2018; Melis et al., 2019). As a result, approaches based on differential privacy (DP) (Dwork & Roth, 2014), homomorphic encryption (Bost et al., 2015; Rouhani et al., 2018; Gilad-Bachrach et al., 2016), and secure multiparty computation (Ben-Or et al., 1988; Bonawitz et al., 2017) have been proposed to protect privacy of users in federated learning. In particular, differentially private federated learning (DPFL) provides strong information theoretic guarantees on user privacy, while causing relatively low performance overhead (Li et al., 2020b). Several defenses have been proposed to defend against poisoning attacks in FL. 
For instance, various robust aggregation methods (Fung et al., 2020; Pillutla et al., 2019; Blanchard et al., 2017; El Mhamdi et al., 2018; Chen et al., 2017b; Yin et al., 2018; Fu et al., 2019; Li et al., 2020a) identify and down-weight the malicious updates during aggregation or estimate a true “center” of the received updates rather than taking a weighted average. Other methods include robust federated training protocols (e.g., clipping (Sun et al., 2019), noisy perturbation (Sun et al., 2019), and additional evaluation during training (Andreina et al., 2020)) and post-training strategies (e.g., fine-tuning and pruning (Wu et al., 2020)) that repair the poisoned global model. However, as these works mainly focus on providing empirical robustness for FL, they have been shown to be vulnerable to newly proposed strong adaptive attacks (Wang et al., 2020; Xie et al., 2019; Baruch et al., 2019; Fang et al., 2020). Hence, in this paper, we aim to develop certified robustness guarantees for FL against different poisoning attacks. Further, as differentially private federated learning (DPFL) is often used to protect user privacy, we also aim to ask: Can we leverage the innate privacy property of DPFL to provide robustness certification against poisoning attacks for free? Can we further improve the privacy of FL so as to improve its certified robustness? Recent studies suggest that differential privacy (DP) is inherently related with robustness of ML models. Intuitively, DP is designed to protect the privacy of individual data, such that the output of an algorithm remains essentially unchanged when one individual input point is modified. Hence, the prediction of a DP model will be less impacted by a small amount of poisoned training data. Consequently, DP has been used to provide both theoretical and empirical defenses against evasion attacks (Lecuyer et al., 2019a) and data poisoning attacks (Ma et al., 2019; Hong et al., 2020) on centralized ML models. It has also been used as an empirical defense against backdoor attacks (Gu et al., 2019) in federated learning (Bagdasaryan et al., 2020; Sun et al., 2019), although no theoretical guarantee is provided. To the best of our knowledge, despite of the wide application of DPFL,there is no work providing certified robustness for DPFL leveraging its privacy property. In this paper, we aim to leverage the inherent privacy property of DPFL to provide robustness certification for FL against poisoning attacks for free. Our challenges include: (1) performing privacy analysis over training rounds in DPFL algorithms and (2) theoretically guaranteeing certified robustness based on DP properties under a given privacy budget. We propose two robustness certification criteria for FL: certified prediction and certified attack cost under different attack constraints. We consider both user-level DP (Agarwal et al., 2018; Geyer et al., 2017; McMahan et al., 2018; Asoodeh & Calmon, 2020; Liang et al., 2020) which is widely guaranteed in FL, and instance-level DP (Malekzadeh et al., 2021; Zhu et al., 2021) which is less explored in FL. We prove that a FL model satisfying user-level DP is certifiably robust against a bounded number of adversarial users. In addition, we propose InsDP-FedAvg algorithm to improve instance-level DP in FL, and prove that instance-level DPFL is certifiably robust against a bounded number of adversarial instances. We also study the correlation between privacy guarantee and certified robustness of FL. 
While stronger privacy guarantees result in greater attack cost, overly strong privacy can hurt the certified prediction by introducing too much noise in the training process. Thus, the optimal certified prediction is often achieved under a proper balance between privacy protection and utility loss. Key Contributions. Our work takes the first step to provide certified robustness in DPFL for free against poisoning attacks. We make contributions on both theoretical and empirical fronts. • We propose two criteria for certified robustness of FL against poisoning attacks (Section 4.2). • Given a FL model satisfying user-level DP, we prove that it is certifiably robust against arbitrary poisoning attacks with a bounded number of adversarial users (Section 4.2). • We propose InsDP-FedAvg algorithm to improve FL instance-level privacy guarantee (Sec- tion 5.1). We prove that instance-level DPFL is certifiably robust against the manipulation of a bounded number of instances during training (Section 5.2). • We conduct extensive experiments on image classification of MNIST, CIFAR-10 and sentiment analysis of tweets to verify our proposed certifications of two robustness criteria, and compare the certified results of different DPFL algorithms (Section 6). 2 RELATED WORK Differentially Private Federated Learning. Different approaches are proposed to guarantee the user-level privacy for FL. (Geyer et al., 2017; McMahan et al., 2018) clip the norm of each local update, add Gaussian noise on the summed update, and characterize its privacy budget via moment accountant (Abadi et al., 2016). (McMahan et al., 2018) extends (Geyer et al., 2017) to language models. In CpSGD (Agarwal et al., 2018), each user clips and quantizes the model update, and adds noise drawn from Binomial distribution, achieving both communication efficiency and DP. (Bhowmick et al., 2018) derive DP for FL via Rényi divergence (Mironov, 2017) and study its protection against data reconstruction attacks. (Liang et al., 2020) utilizes Laplacian smoothing for each local update to enhance the model utility. Instead of using moment accountant to track privacy budget over FL rounds as previous work, (Asoodeh & Calmon, 2020) derives the DP parameters by interpreting each round as a Markov kernel and quantify its impact on privacy parameters. All these works only focus on providing user-level privacy, leaving its robustness property unexplored. In terms of instance-level privacy for FL, there are only a few work (Malekzadeh et al., 2021; Zhu et al., 2021). Dopamine (Malekzadeh et al., 2021) provides instance-level privacy guarantee when each user only performs one step of DP-SGD (Abadi et al., 2016) at each FL round. However, it cannot be applied to multi-step SGD for each user, thus it cannot be extended to the general FL setting FedAvg (McMahan et al., 2017). (Zhu et al., 2021) privately aggregate the labels from users in a voting scheme, and provide DP guarantees on both user level and instance level. However, it is also not applicable to standard FL, since it does not allow aggregating the gradients or updates. Differential Privacy and Robustness. In standard (centralized) learning, Pixel-DP (Lecuyer et al., 2019a) is proposed to certify the model robsutness against evasion attacks. However, it is unclear how to leverage it to certify against poisoning attacks. 
To certify the robustness against poisoning attacks, (Ma et al., 2019) show that private learners are resistant to data poisoning and analyze the lower bound of the attack cost against poisoning attacks for regression models. Here we certify the robustness in the DPFL setting with such a lower bound as one of our certification criteria and additionally derive its upper bounds. (Hong et al., 2020) show that the off-the-shelf mechanism DP-SGD (Abadi et al., 2016), which clips per-sample gradients and adds Gaussian noise during training, can serve as a defense against poisoning attacks empirically. In federated learning, empirical works (Bagdasaryan et al., 2020; Sun et al., 2019) show that DPFL can mitigate backdoor attacks; however, none of these works provides certified robustness guarantees for DPFL against poisoning attacks. 3 PRELIMINARIES We start by providing some background on differential privacy (DP) and federated learning (FL). Differential Privacy (DP). DP is a formal, mathematically rigorous definition (and standard) of privacy that intuitively guarantees that a randomized algorithm behaves similarly on similar inputs and that the output of the algorithm is about the same whether or not an individual's data is included as part of the input (Dwork & Roth, 2014). Definition 1 ((ε, δ)-DP (Dwork et al., 2006)). A randomized mechanism M : D → Θ with domain D and range Θ satisfies (ε, δ)-DP if for any pair of adjacent datasets d, d′ ∈ D, and for any possible (measurable) output set E ⊆ Θ, it holds that Pr[M(d) ∈ E] ≤ e^ε Pr[M(d′) ∈ E] + δ. In Definition 1, when M is a training algorithm for an ML model, the domain D and range Θ represent all possible training datasets and all possible trained models, respectively. Group DP for (ε, δ)-DP mechanisms follows immediately from Definition 1, where the privacy guarantee drops with the size of the group. Formally, it says: Lemma 1 (Group DP). For a mechanism M that satisfies (ε, δ)-DP, it satisfies (kε, ((1 − e^{kε})/(1 − e^ε))δ)-DP for groups of size k. That is, for any d, d′ ∈ D that differ by k individuals, and any E ⊆ Θ, it holds that Pr[M(d) ∈ E] ≤ e^{kε} Pr[M(d′) ∈ E] + ((1 − e^{kε})/(1 − e^ε))δ. Federated Learning. FedAvg was introduced by (McMahan et al., 2017) for FL to train a shared global model without direct access to the training data of users. Specifically, given a FL system with N users, at round t, the server sends the current global model wt−1 to users in the selected user set Ut, where |Ut| = m = qN and q is the user sampling probability. Each selected user i ∈ Ut locally updates the model for E local epochs with its dataset Di and learning rate η to obtain a new local model. Then, the user sends the local model update ∆wit to the server. Finally, the server aggregates the updates from all selected users into the new global model wt = wt−1 + (1/m) Σ_{i∈Ut} ∆wit. 4 USER-LEVEL PRIVACY AND CERTIFIED ROBUSTNESS FOR FL 4.1 USER-LEVEL PRIVACY AND BACKGROUND Definition 1 leaves the definition of adjacent datasets flexible, which depends on the application. To protect user-level privacy, adjacent datasets are defined as those differing by the data from one user (McMahan et al., 2018). The formal definition of User-level (ε, δ)-DP (Definition 2) is deferred to Appendix A.1. Following standard DPFL (Geyer et al., 2017; McMahan et al., 2018), we introduce one of the standard user-level DPFL algorithms, UserDP-FedAvg (Algorithm 1 in Appendix A.1). At each round, the server first clips the update from each user with a threshold S such that its ℓ2-sensitivity is upper bounded by S.
Next, the server sums up the updates, adds Gaussian noise sampled from N (0, σ2S2), and takes the average, i.e., wt ← wt−1 + 1m (∑ i∈Ut Clip(∆w i t, S) +N ( 0, σ2S2 )) . Given the user sampling probability q, noise level σ, FL rounds T , and a δ > 0, the privacy analysis of UserDP-FedAvg satisfying ( , δ)-DP is given by Proposition 1 in Appendix A.1, which is a generalization of (Abadi et al., 2016). The aim of Proposition 1 is to analyze privacy budget in FL, which is accumulated as T increases due to the continuous access to training data. Following (Geyer et al., 2017; McMahan et al., 2018), moment accountant (Abadi et al., 2016) is used in the privacy analysis. 4.2 CERTIFIED ROBUSTNESS OF USER-LEVEL DPFL AGAINST POISONING ATTACKS Threat Model. We consider the poisoning attacks against FL, where k adversarial users have poisoned instances in local datasets, aiming to fool the trained DPFL global model. Such attacks include backdoor attacks (Gu et al., 2019; Chen et al., 2017a) and label flipping attacks (Biggio et al., 2012; Huang et al., 2011). The detailed description of these attacks is deferred to Appendix A.2. Note that our robustness certification is attack-agnostic under certain attack constraints (e.g., k), and we will verify our certification bounds with different poisoning attacks in Section 6. Next, we propose two criteria for the robustness certification in FL: certified prediction and certified attack cost. Certified Prediction. Consider the classification task with C classes. We define the classification scoring function f : (Θ,Rd) → ΥC which maps model parameters θ ∈ Θ and an input data x ∈ Rd to a confidence vector f(θ, x), and fc(θ, x) ∈ [0, 1] represents the confidence of class c. We mainly focus on the confidence after normalization, i.e., f(θ, x) ∈ ΥC = {p ∈ RC≥0 : ‖p‖1 = 1} in the probability simplex. Since the DP mechanismM is randomized and produces a stochastic FL global model θ = M(D), it is natural to resort to a probabilistic expression as a bridge for quantitative robustness certifications. Following the convention in (Lecuyer et al., 2019b; Ma et al., 2019), we use the expectation of the model’s prediction to provide a quantitative guarantee on the robustness of M. Specifically, we define the expected scoring function F : (θ,Rd)→ ΥC where Fc(M(D), x) = E[fc(M(D), x)] is the expected confidence for class c. The expectation is taken over DP training randomness, e.g., random Gaussian noise and random user subsampling. The corresponding prediction H : (θ,Rd) → [C] is defined by H(M(D), x) := arg maxc∈[C] Fc(M(D), x), which is the top-1 class based on the expected prediction confidence. We will prove that such prediction allows robustness certification against poisoning attacks. Following our threat model above and DPFL training in Algorithm 1, we denote the trained global model exposed to poisoning attacks byM(D′). When k = 1, D and D′ are user-level adjacent datasets according to Definition 2. Given that mechanismM satisfies user-level ( , δ)-DP, based on the innate DP property, the distribution of the stochastic model M(D′) is “close” to the distribution of M(D). Moreover, according to the post-processing property of DP, during testing, given a test sample x, we would expect the values of the expected confidence for each class c, i.e., Fc(M(D′), x) and Fc(M(D), x), to be close, and hence the returned most likely class to be the same, i.e., H(M(D), x) = H(M(D′), x), indicating robust prediction against poisoning attacks. 
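For concreteness, the server-side update above can be sketched as follows. This is a simplified NumPy illustration of the rule wt ← wt−1 + (1/m)(Σ Clip(∆wit, S) + N(0, σ²S²)), not the exact implementation used in the experiments; the dimensions and parameter values are illustrative.

```python
import numpy as np

def clip_update(delta_w, S):
    """Clip an update to l2-norm at most S: delta / max(1, ||delta||_2 / S)."""
    return delta_w / max(1.0, np.linalg.norm(delta_w) / S)

def userdp_fedavg_step(w_prev, local_updates, S, sigma, rng):
    """One UserDP-FedAvg server step: clip each selected user's update, sum them,
    add Gaussian noise N(0, sigma^2 * S^2), and average over the m selected users."""
    m = len(local_updates)
    clipped_sum = sum(clip_update(u, S) for u in local_updates)
    noise = rng.normal(0.0, sigma * S, size=w_prev.shape)
    return w_prev + (clipped_sum + noise) / m

# Toy example: 5 selected users, a 10-dimensional "model", sigma = 2.5, S = 1.
rng = np.random.default_rng(0)
w = np.zeros(10)
updates = [rng.normal(size=10) for _ in range(5)]
w_next = userdp_fedavg_step(w, updates, S=1.0, sigma=2.5, rng=rng)
```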
Theorem 1 (Condition for Certified Prediction under One Adversarial User). Suppose a randomized mechanism M satisfies user-level (ε, δ)-DP. For two user sets B and B′ that differ by one user, let D and D′ be the corresponding training datasets. For a test input x, suppose A, B ∈ [C] satisfy A = arg max_{c∈[C]} Fc(M(D), x) and B = arg max_{c∈[C]: c ≠ A} Fc(M(D), x); then if FA(M(D), x) > e^{2ε} FB(M(D), x) + (1 + e^ε)δ, (1) it is guaranteed that H(M(D′), x) = H(M(D), x) = A. When k > 1, we resort to group DP. According to Lemma 1, given a mechanism M satisfying user-level (ε, δ)-DP, it also satisfies user-level (kε, ((1 − e^{kε})/(1 − e^ε))δ)-DP for groups of size k. When k is smaller than a certain threshold, leveraging the group DP property, we would expect that the distribution of the stochastic model M(D′) is not too far away from the distribution of M(D), such that they would make the same prediction for a test sample with probabilistic guarantees. Therefore, the privacy and robustness guarantees are simultaneously met by M. Theorem 2 (Upper Bound of k for Certified Prediction). Suppose a randomized mechanism M satisfies user-level (ε, δ)-DP. For two user sets B and B′ that differ by k users, let D and D′ be the corresponding training datasets. For a test input x, suppose A, B ∈ [C] satisfy A = arg max_{c∈[C]} Fc(M(D), x) and B = arg max_{c∈[C]: c ≠ A} Fc(M(D), x); then H(M(D′), x) = H(M(D), x) = A for all k < K, where K is the certified number of adversarial users: K = (1/(2ε)) log [(FA(M(D), x)(e^ε − 1) + δ) / (FB(M(D), x)(e^ε − 1) + δ)]. (2) The proofs of Theorems 1 and 2 are deferred to Appendix A.4. Theorems 1 and 2 reflect a tradeoff between privacy and certified prediction: (i) in Theorem 1, if ε is large such that the RHS of Eq (1) > 1, the robustness condition cannot be met since the expected confidence FA(M(D), x) ∈ [0, 1]. However, to achieve small ε, i.e., strong privacy, large noise is required during training, which would hurt model utility and thus result in a small confidence margin between the top two classes (e.g., FA(M(D), x) and FB(M(D), x)), making it hard to meet the robustness condition. (ii) In Theorem 2, if we fix FA(M(D), x) and FB(M(D), x), a smaller ε of FL can certify a larger K. However, a smaller ε also induces a smaller confidence margin, thus reducing K instead. As a result, properly choosing ε would help to certify a large K. Certified Attack Cost. In addition to the certified prediction, we define the attack cost for the attacker C : Θ → R, which quantifies the difference between the poisoned model and the attack goal. In general, the attacker aims to minimize the expected attack cost J(D) := E[C(M(D))], where the expectation is taken over the randomness of DP training. The cost function can be instantiated according to the concrete attack goal in different types of poisoning attacks, and we provide some examples below. Given a global FL model satisfying user-level (ε, δ)-DP, we will prove the lower bound of the attack cost J(D′) when manipulating the data of at most k users. A higher lower bound of the attack cost indicates a more certifiably robust global model. Example 1. (Backdoor attack (Gu et al., 2019)) C(θ) = (1/n) Σ_{i=1}^{n} l(θ, z*_i), where z*_i = (xi + δx, y*), δx is the backdoor pattern, and y* is the target adversarial label. Minimizing J(D′) drives the prediction on any test data with the backdoor pattern δx to the target label y*. Example 2. (Label Flipping attack (Biggio et al., 2012)) C(θ) = (1/n) Σ_{i=1}^{n} l(θ, z*_i), where z*_i = (xi, y*) and y* is the target adversarial label.
Minimizing J(D′) drives the prediction on test data xi to the target label y*. Example 3. (Parameter-Targeting attack (Ma et al., 2019)) C(θ) = (1/2)‖θ − θ*‖², where θ* is the target model. Minimizing J(D′) drives the poisoned model to be close to the target model. Theorem 3 (Attack Cost with k Attackers). Suppose a randomized mechanism M satisfies user-level (ε, δ)-DP. For two user sets B and B′ that differ by k users, let D and D′ be the corresponding training datasets. Let J(D) be the expected attack cost, where |C(·)| ≤ C̄. Then, min{e^{kε} J(D) + ((e^{kε} − 1)/(e^ε − 1))δC̄, C̄} ≥ J(D′) ≥ max{e^{−kε} J(D) − ((1 − e^{−kε})/(e^ε − 1))δC̄, 0} if C(·) ≥ 0, and min{e^{−kε} J(D) + ((1 − e^{−kε})/(e^ε − 1))δC̄, 0} ≥ J(D′) ≥ max{e^{kε} J(D) − ((e^{kε} − 1)/(e^ε − 1))δC̄, −C̄} if C(·) ≤ 0. (3) The proof is deferred to Appendix A.4. Theorem 3 provides the upper and lower bounds for the attack cost J(D′). The lower bounds show to what extent the attack can reduce J(D′) by manipulating up to k users, i.e., how successful the attack can be. The lower bounds depend on the attack cost on the clean model J(D), k, and ε. When J(D) is higher, the DPFL model under poisoning attacks is more robust because the lower bounds are accordingly higher; a tighter privacy guarantee, i.e., smaller ε, can also lead to higher robustness certification as it increases the lower bounds; with larger k, the attacker's ability grows and thus leads to a lower possible J(D′). The upper bounds show the least adversarial effect brought by k attackers, i.e., how vulnerable the DPFL model is in the optimistic case (e.g., when the backdoor pattern is less distinguishable). Leveraging the lower bounds in Theorem 3, we can lower-bound the minimum number of attackers required to reduce the attack cost to a certain level associated with the hyperparameter τ in Corollary 1. Corollary 1 (Lower Bound of k Given τ). Suppose a randomized mechanism M satisfies user-level (ε, δ)-DP. Let the attack cost function be C and the expected attack cost be J(·). In order to achieve J(D′) ≤ (1/τ)J(D) for τ ≥ 1 when 0 ≤ C(·) ≤ C̄, or to achieve J(D′) ≤ τJ(D) for 1 ≤ τ ≤ −C̄/J(D) when −C̄ ≤ C(·) ≤ 0, the number of adversarial users should satisfy k ≥ (1/ε) log [((e^ε − 1)J(D)τ + C̄δτ) / ((e^ε − 1)J(D) + C̄δτ)] or k ≥ (1/ε) log [((e^ε − 1)J(D)τ − C̄δ) / ((e^ε − 1)J(D) − C̄δ)], respectively. (4) The proof is deferred to Appendix A.4. Corollary 1 shows that a stronger privacy guarantee (i.e., smaller ε) requires more attackers to achieve the same attack effectiveness, indicating higher robustness.
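Since the certification quantities in Theorems 1-3 and Corollary 1 are closed-form expressions of ε, δ, the class confidences, and the clean attack cost, they can be evaluated directly. The sketch below restates the formulas for the case 0 ≤ C(·) ≤ C̄; the numeric inputs are illustrative and not taken from the experiments reported here.

```python
import math

def certified_pred_one_user(FA, FB, eps, delta):
    """Theorem 1: prediction is certified under one adversarial user if
    F_A > e^{2*eps} * F_B + (1 + e^eps) * delta."""
    return FA > math.exp(2 * eps) * FB + (1 + math.exp(eps)) * delta

def certified_num_adv_users(FA, FB, eps, delta):
    """Theorem 2 (Eq. 2): K = 1/(2*eps) * log of the confidence-margin ratio."""
    num = FA * (math.exp(eps) - 1) + delta
    den = FB * (math.exp(eps) - 1) + delta
    return math.log(num / den) / (2 * eps)

def attack_cost_bounds(J_clean, eps, delta, k, C_bar):
    """Theorem 3 (Eq. 3), case 0 <= C(.) <= C_bar: (lower, upper) bounds on J(D')."""
    e_k = math.exp(k * eps)
    upper = min(e_k * J_clean + (e_k - 1) / (math.exp(eps) - 1) * delta * C_bar, C_bar)
    lower = max(J_clean / e_k - (1 - 1 / e_k) / (math.exp(eps) - 1) * delta * C_bar, 0.0)
    return lower, upper

def min_attackers(J_clean, eps, delta, tau, C_bar):
    """Corollary 1 (Eq. 4), case 0 <= C(.) <= C_bar: minimum k needed so that
    J(D') <= J(D)/tau is achievable."""
    num = (math.exp(eps) - 1) * J_clean * tau + C_bar * delta * tau
    den = (math.exp(eps) - 1) * J_clean + C_bar * delta * tau
    return math.log(num / den) / eps

# Illustrative numbers only.
print(certified_pred_one_user(FA=0.9, FB=0.05, eps=0.3, delta=0.0029))
print(certified_num_adv_users(FA=0.9, FB=0.05, eps=0.3, delta=0.0029))
print(attack_cost_bounds(J_clean=2.0, eps=0.5, delta=0.0029, k=3, C_bar=10.0))
print(min_attackers(J_clean=2.0, eps=0.5, delta=0.0029, tau=2.0, C_bar=10.0))
```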
This improvement leads to algorithm InsDP-FedSGD, to achieve tighter privacy analysis. We defer the algorithm (Algorithm 2) as well as its privacy guarantee to Appendix A.1. Besides the loose privacy bound, Dopamine (Malekzadeh et al., 2021) only allows users to perform one step of DP-SGD (Abadi et al., 2016) during each FL round. This restriction limits the efficiency of the algorithm and increases the communication overhead. In practice, users in FL are typically allowed to update their local models for many steps before submitting updates to reduce the communication cost. To solve this problem, we further improve InsDP-FedSGD to support multiple local steps during each round. Specifically, we propose a novel instance-level DPFL algorithm InsDP-FedAvg (Algorithm 3 in Appendix A.1) allowing users to train multiple local SGD steps before submitting the updates. In InsDP-FedAvg, each user i performs local DP-SGD so that the local training mechanismMi satisfies instance-level DP. Then, the server aggregates the updates. We prove that the global mechanismM preserves instance-level DP using DP parallel composition theorem (Dwork & Lei, 2009) and moment accountant (Abadi et al., 2016). Algorithm 3 formally presents the InsDP-FedAvg algorithm and the calculation of its privacy budget . Specifically, at first, local privacy cost i0 is initialized as 0 before FL training. At round t, if user i is not selected, its local privacy cost is kept unchanged it ← it−1. Otherwise user i updates local model by running DP-SGD for V local steps with batch sampling probability p, noise level σ and clipping threshold S, and it is accumulated upon i t−1 via its local moment accountant. Next, the server aggregates the updates from selected users, and leverages { it}i∈[N ] and the parallel composition in Theorem 4 to calculate the global privacy cost t. After T rounds, the mechanismM that outputs the FL global model in Algorithm 3 is instance-level ( T , δ)-DP. Theorem 4 (InsDP-FedAvg Privacy Guarantee). In Algorithm 3, during round t, if the local mechanismMi satisfies ( it, δ)-DP, then the global mechanismM satisfies ( maxi∈[N ] i t, δ ) -DP. The idea behind Theorem 4 is that when D′ and D differ in one instance, the modified instance only falls into one local dataset, and thus parallel composition theorem (Dwork & Lei, 2009) can be applied. Then the privacy guarantee corresponds to the worst-case, and is obtained by taking the maximum local privacy cost across all the users. The detailed proof is given in Appendix A.1. 5.2 CERTIFIED ROBUSTNESS OF INSTANCE-LEVEL DPFL AGAINST POISONING ATTACKS Threat Model. We consider poisoning attacks under the presence of k poisoned instances. These instances could be controlled by the same or multiple adversarial users. Our robustness certification is agnostic to the attack methods as long as the number of poisoned instances is constrained. According to the group DP property (Lemma 1) and the post-processing property for FL model with instance-level ( , δ)-DP, we prove that our robust certification results proposed for user-level DP are also applicable to instance-level DP. Below is the formal theorem (proof is given in Appendix A.4). Theorem 5. Suppose D and D′ differ by k instances, and the randomized mechanismM satisfies instance-level ( , δ)-DP. The results in Theorems 1, 2,and 3, and Corollary 1 hold forM, D, and D′. Comparison with existing certified prediction methods in centralized setting. 
The form of Theorem 1 is similar to the robustness condition against test-time attacks in Proposition 1 of (Lecuyer et al., 2019a). This is because the derived robustness conditions are both rooted in the DP properties, but ours focuses on the robustness against training-time attacks in FL, which is more challenging considering the distributed nature and the model training dynamics, i.e., the analysis of the privacy budget over training rounds. Our Theorem 1 is also different from previous randomized smoothing-based certifiably robust centralized learning against backdoor (Weber et al., 2020) and label flipping (Rosenfeld et al., 2020) attacks. First, our randomness comes from the inherent training randomness of user/instance-level (ε, δ)-DP, e.g., user subsampling and Gaussian noise. Thus, the certified robustness for free in DPFL means that the DPFL learning algorithm M itself is randomized, and such randomness can lead to the robustness certification with a non-trivial quantitative measurement of the randomness. On the contrary, robustness in randomized smoothing-based methods comes from explicitly making the classification process randomized by adding noise to training datasets (Weber et al., 2020; Rosenfeld et al., 2020) or test samples (Lecuyer et al., 2019a; Cohen et al., 2019), which is easier to measure. Second, our Theorems 1 and 2 hold no matter how ε is achieved, which means that we can add different types of noise, leverage different subsampling strategies, or even use different FL training protocols to achieve user/instance-level ε. However, in (Weber et al., 2020; Rosenfeld et al., 2020) different certifications require different types of noise (Laplacian, Gaussian, etc.). Additionally, DP is suitable for characterizing the robustness against poisoning since DP composition theorems can be leveraged to track the privacy cost ε, which captures the training dynamics of ML model parameters without additional assumptions. Otherwise, one may need to track the deviations of model parameters by analyzing SGD over training, which is theoretically knotty and often requires strong assumptions on Lipschitz continuity, smoothness, or convexity of the trained models. 6 EXPERIMENTS We present evaluations for the robustness certifications, especially Thm. 2, 3 and Cor. 1. We find that 1) there is a tradeoff between certified prediction and privacy on certain datasets; 2) a tighter privacy guarantee always provides stronger certified robustness in terms of the certified attack cost; 3) our lower bounds of certified attack cost are generally tight when k is small. When k is large, they are tight under strong attacks (e.g., large local poisoning ratio α). Stronger attacks or tighter certification are required to further tighten the gap between the empirical robustness and the theoretical bounds. Data and Model. We evaluate our robustness certification results with three datasets: image classification on MNIST and CIFAR-10, and a text sentiment analysis task on tweets from Sentiment140 (Go et al.) (Sent140), which involves classifying Twitter posts as positive or negative. For the image datasets, we use the corresponding standard CNN architectures in the differential privacy library (opa, 2021) of PyTorch; for Sent140, we use an LSTM classifier. Following previous work on DP ML (Jagielski et al., 2020; Ma et al., 2019) and backdoor attacks (Tran et al., 2018; Weber et al., 2020) which evaluate with two classes, we focus on binary classification for MNIST (digits 0 and 1) and CIFAR-10 (airplane and bird), and defer the 10-class results to Appendix A.3.
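As an illustration of this binary data setup, one possible way to build the two-class MNIST split with torchvision is sketched below; the authors' exact data pipeline may differ, and the sample counts in the comments are those reported in Appendix A.3.1.

```python
from torchvision import datasets, transforms

def binary_mnist(root="./data", classes=(0, 1), train=True):
    """Keep only the two chosen digit classes from MNIST (digits 0 and 1 here)."""
    ds = datasets.MNIST(root, train=train, download=True,
                        transform=transforms.ToTensor())
    mask = (ds.targets == classes[0]) | (ds.targets == classes[1])
    ds.data, ds.targets = ds.data[mask], ds.targets[mask]
    return ds

train_set = binary_mnist(train=True)    # 12665 samples for digits 0/1 (per Appendix A.3.1)
test_set = binary_mnist(train=False)    # 2115 samples for digits 0/1 (per Appendix A.3.1)
```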
We train FL model following Algorithm 1 for user-level privacy and Algorithm 3 for instance-level privacy. We refer the readers to Appendix A.3 for details about the datasets, networks, parameter setups. Poisoning Attacks. We evaluate several state-of-the-art poisoning attacks against the proposed UserDP-FedAvg and InsDP-FedAvg. We first consider backdoor attacks (BKD) (Bagdasaryan et al., 2020) and label flipping attacks (LF) (Fung et al., 2020). For InsDP-FedAvg, we consider the worst case where k backdoored or lable-flipped instances are fallen into the dataset of one user. For UserDP-FedAvg, we additionally evaluate distributed backdoor attack (DBA) (Xie et al., 2019), which is claimed to be a more stealthy backdoor attack against FL. Moreover, we consider BKD, LF and DBA via model replacement approach (Bagdasaryan et al., 2020) where k attackers train the local models using local datasets with α fraction of poisoned instances, and scale the malicious updates with hyperparameter γ, i.e., ∆wit ← γ∆wit, before sending them to the sever. This way, the malicious updates would have a stronger impact on the FL model. Note that even when attackers perform scaling, after server clipping, the sensitivity of updates is still upper-bounded by the clipping threshold S. So the privacy guarantee in Proposition 1 still holds under poisoning attacks via model replacement. Detailed attack setups are presented in Appendix A.3. Evaluation Metrics and Setup. We consider two evaluation metrics based on our robustness certification criteria. The first metric is certified accuracy, which is the fraction of the test set for which the poisoned FL model makes correct and consistent predictions compared with the clean FL model. Given a test set of size n, for i-th test sample, the ground truth label is yi, the output prediction is ci , and the certified number of adversarial users/instances is Ki. We calculate the certified accuracy at k as 1n ∑n i=1 1{ci = yi and Ki ≥ k}. The second metric is the lower bound of attack cost in Theorem 3: J(D′) = max{e−k J(B)− 1−e −k e −1 δC̄, 0}. We evaluate the tightness of J(D′) by comparing it with empirical attack cost J(D′). To quantify the robustness, we evaluate the expected class confidence Fc(M(D), x) for class c via Monte-Carlo sampling. We run the private FL algorithms for M =1000 times, with class confidence fsc = fc(M(D), x) for each time. We compute its expectation to estimate Fc(M(D), x) ≈ 1M ∑M s=1 f s c and use it to evaluate Theorem 2. In addition, we use Hoeffding’s inequality (Hoeffding, 1994) to calibrates the empirical estimation with confidence level parameter ψ, and results are deferred to Appendix A.3. In terms of the attack cost, we use Example 1, 2 as the definitions of cost function C for backdoor attacks and label flipping attacks respectively. We follow similar protocol to estimate J(D′) for Theorem 3 and Corollary 1. 6.1 ROBUSTNESS EVALUATION OF USER-LEVEL DPFL Certified Prediction. Figure 1(a)(b) present the user-level certified accuracy under different by training DPFL models with different noise scale σ. The results on Sent140 dataset is presented in Figure 13 of Appendix. A.3.8. We observe that the largest k can be certified when is around 0.6298 in MNIST, 0.1451 in CIFAR-10, and 0.2247 in Sent140 which verifies the tradeoff between and certified accuracy as we discussed in Section 4.2. 
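A minimal sketch of computing these two quantities from the M Monte-Carlo runs is shown below; the arrays are illustrative placeholders rather than results from the paper.

```python
import numpy as np

def expected_confidence(run_confidences):
    """Monte-Carlo estimate of F_c(M(D), x): average the per-run confidence
    vectors f_c over the M independently trained DPFL models."""
    return np.asarray(run_confidences).mean(axis=0)       # shape (C,)

def certified_accuracy_at_k(preds, labels, certified_K, k):
    """Fraction of test points predicted correctly with certified K_i >= k."""
    correct = (np.array(preds) == np.array(labels)) & (np.array(certified_K) >= k)
    return correct.mean()

# Toy example: M = 3 sampled models for one test point, then 4 test points overall.
print(expected_confidence([[0.8, 0.2], [0.9, 0.1], [0.7, 0.3]]))   # ~[0.8, 0.2]
print(certified_accuracy_at_k(preds=[0, 1, 0, 1], labels=[0, 1, 1, 1],
                              certified_K=[3.2, 1.1, 4.0, 0.4], k=2))  # 0.25
```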
Advanced DP protocols that requires less noise while achieving similar level of privacy are favored to improve the privacy, utility, and certified accuracy simultaneously. Furthermore, we compare the certified accuracy of four different user-level DPFL methods (McMahan et al., 2018; Geyer et al., 2017) given the same privacy budget . As shown in Figure 14 and Figure 15 of Appendix. A.3.9, the models trained by different DPFL algorithms satisfying same have different certified robustness results. This is because even under the same , different DPFL algorithmsM produce trained modelsM(D) with different model performance, thus leading to different certified robustness. More discussion could be found in Appendix. A.3.9. Certified Attack Cost. In order to evaluate Theorem 3 and characterize the tightness of our theoretical lower bound J(D′), we compare it with the empirical attack cost J(D′) under different local poison fraction α , attack methods and scale factor γ in Figure 2. Note that when k = 0, the model is benign so the empirical cost equals to the certified one. We find that 1) when k increases, the attack ability grows, and both the empirical attack cost and theoretical lower bound decreases. 2) In Figure 2 row 1, given the same k, higher α, i.e., poisoning more local instances for each attacker, achieves a stronger attack, under which lower empirical J(D) can be achieved and is more close to the certified lower bound. This indicates that the lower bound appears tighter when the poisoning attack is stronger. 3) In Figure 2 row 2, we fix α = 100% and evaluate UserDP-FedAvg under different γ and attack methods. It turns out that DP serves as a strong defense empirically for FL, given that J(D) did not vary much under different γ(1, 50, 100) and different attack methods (BKD, DBA, LF). This is because the clipping operation restricts the magnitude of malicious updates, rendering the model replacement ineffective; the Gaussian noise perturbs the malicious updates and makes the DPFL model stable, and thus the FL model is less likely to memorize the poisoning instances. 4) In both rows, the lower bounds are tight when k is small. When k is large, there remains a gap between our theoretical lower bounds and empirical attack costs under different attacks, which will inspire more effective poisoning attacks or tighter robustness certification. Certified Attack Cost under Different . Here we further explore the impacts of different factors on the certified attack cost. Figure 3 presents the empirical attack cost and the certified attack cost lower bound given different on user-level DP. It is shown that as the privacy guarantee becomes stronger, i.e. smaller , the model is more robust achieving higher J(D′) and J(D′). In Figure 5 (a)(b), we train user-level ( , δ) DPFL models, calculate corresponding J(D), and plot the lower bound of k given different attack effectiveness hyperparameter τ according to Corollary 1. It shows that 1) when the required attack effectiveness is higher, i.e., τ is larger, more number of attackers is required. 2) To achieve the same effectiveness of attack, fewer number of attackers is needed for larger , which means that DPFL model with weaker privacy is more vulnerable to poisoning attacks. 6.2 ROBUSTNESS EVALUATION OF INSTANCE-LEVEL DPFL Certified Prediction. Figure 1(c)(d) show the instance-level certified accuracy under different . 
The optimal for K is around 0.3593 for MNIST and 0.6546 for CIFAR-10, which is aligned with our observation of the tradeoff between certified accuracy and privacy on user-level DPFL (Section 6.1). Certified Attack Cost. Figure 4 show the certified attack cost on CIFAR-10. From Figure 4 (a)(b), poisoning more instances (i.e., larger k) induces lower theoretical and empirical attack cost. From Figure 4 (c)(d), it is clear that instance-level DPFL with stronger privacy guarantee provides higher attack cost both empirically and theoretically, meaning that it is more robust against poisoning attacks. Results on MNIST are deferred to Appendix A.3. Figure 5 (c)(d) show the lower bound of k under different instance-level given different τ . Fewer poisoned instances are required to reduce the J(D′) to the similar level for a less private DPFL model, indicating that the model is easier to be attacked. 7 CONCLUSION In this paper, we present the first work on deriving certified robustness in DPFL for free against poisoning attacks. We propose two robustness certification criteria, based on which we prove that a FL model satisfying user-level (instance-level) DP is certifiably robust against a bounded number of adversarial users (instances). Our theoretical analysis characterizes the inherent relation between certified robustness and differential privacy of FL on both user and instance levels, which are empirically verified with extensive experiments. Our results can be used to improve the trustworthiness of DPFL. Ethics Statement. Our work study the robustness guarantee of differentially private federated learning models from theoretical and empirical perspectives. All the datasets and packages we use are open-sourced. We do not have ethical concerns in our paper. Reproducibility Statement. Our source code is available as the supplemental material for reproducibility purpose. Our experiments can be reproduced following our detailed training and evaluation setups in Appendix A.3. The complete proofs of privacy analysis and certified robustness analysis can be found in the Appendix A.1 and Appendix A.4, respectively. A APPENDIX The Appendix is organized as follows: • Appendix A.1 provides the DP definitions and the DPFL algorithms on both user and instance levels, and the proofs for corresponding privacy guarantees. • Appendix A.2 specifies our threat models. • Appendix A.3 provides more details on experimental setups for training and evaluation, the addition experimental results on certified accuracy with confidence level, robustness evaluation of InsDP-FedAvg on MNIST, robustness evaluation on 10-class classification, DP bound comparison between InsDP-FedSGD and Dopamine, certified accuracy of UserDP-FedAvg on Sent140 and certified accuracy comparison of different user-level DPFL algorithms. • Appendix A.4 provides the proofs for the certified robustness related analysis, including Lemma 1, Theorem 1, 2, 3, 5 and Corollary 1. • Appendix A.5 provides the comparison to related work (Lecuyer et al., 2019a; Ma et al., 2019). A.1 DIFFERENTIALLY PRIVATE FEDERATED LEARNING A.1.1 USERDP-FEDAVG Definition 2 (User-level ( , δ)-DP). Let B,B′ be two user sets with size N . Let D and D′ be the datasets that are the union of local training examples from all users inB andB′ respectively. Then,D and D′ are adjacent if B and B′ differ by one user. The mechanismM satisfies user-level ( , δ)-DP if it meets Definition 1 with D and D′ as adjacent datasets. Algorithm 1: UserDP-FedAvg. 
Input: Initial model w0, user sampling probability q, privacy parameter δ, clipping threshold S, noise level σ, local datasets D1, ..., DN , local epochs E, learning rate η. Output: FL model wT and privacy cost Server executes: for each round t = 1 to T do m← max(q ·N, 1); Ut ← (random subset of m users); for each user i ∈ Ut in parallel do ∆wit ← UserUpdate(i, wt−1) ; wt ← wt−1+ 1m (∑ i∈Ut Clip(∆w i t, S) +N ( 0, σ2S2 )) ; M.accum priv spending(σ, q, δ) ; =M.get privacy spent() ; return wT , Procedure UserUpdate(i, wt−1) w ← wt−1 ; for local epoch e = 1 to E do for batch b ∈ local dataset Di do w ← w − η∇l(w; b) ∆wit ← w − wt−1 ; return ∆wit Procedure Clip(∆, S) return ∆/max ( 1, ‖∆‖2 S ) In Algorithm 1,M.accum priv spending() andM.get privacy spent() are the calls on the moments accountantM refer to the API of (Abadi et al., 2016). Given the user sampling probability q, noise level σ, FL rounds T , and a δ > 0, UserDP-FedAvg satisfies ( , δ)-DP as below, which is a generalization of (Abadi et al., 2016). The aim is to analyze privacy budget , which is accumulated as T increases due to the continuous access to training data. Proposition 1 (UserDP-FedAvg Privacy Guarantee). There exist constants c1 and c2 so that given user sampling probability q, and FL rounds T , for any ε < c1q2T , if σ ≥ c2 q √ T log(1/δ) , the randomized mechanismM in Algorithm 1 is ( , δ)-DP for any δ > 0. Proof. The proof follows the proof of Theorem 1 in (Abadi et al., 2016), while the notations have slightly different meanings under FL settings. In Proposition 1, we use q to represent user-level sampling probability and T to represent FL training rounds. Note that the above privacy analysis can be further improved by Rényi Differential Privacy (Mironov et al., 2019). Discussion (Li et al., 2020b) divide the user-level privacy into global privacy (Geyer et al., 2017; McMahan et al., 2018) and local privacy (Agarwal et al., 2018). In both local and global privacy, the norm of each update is clipped. The difference lies in that the noise is added on the aggregated model updates in global privacy because a trusted server is assumed, while the noise is added on each local update in local privacy because it assumes that the central server might be malicious. Algorithm 1 belongs to global privacy. A.1.2 INSDP-FEDSGD Definition 3 (Instance-level ( , δ)-DP). Let D be the dataset that is the union of local training examples from all users. Then, D and D′ are adjacent if they differ by one instance. The mechanism M is instance-level ( , δ)-DP if it meets Definition 1 with D and D′ as adjacent datasets. Algorithm 2: InsDP-FedSGD. Input: Initial model w0, user sampling probability q, privacy parameter δ, local clipping threshold S, local noise level σ, local datasets D1, ..., DN , learning rate η, batch sampling probability p. 
Output: FL model wT and privacy cost Server executes: for each round t = 1 to T do m← max(q ·N, 1); Ut ← (random subset of m clients); for each user i ∈ Ut in parallel do ∆wit ← UserUpdate(i, wt−1) ; wt ← wt−1 + 1m ∑ i∈Ut ∆w i t ; M.accum priv spending( √ mσ, pq, δ) =M.get privacy spent() ; return wT , Procedure UserUpdate(i, wt−1) w ← wt−1 ; bit ←(uniformly sample a batch fromDi with probability p = L/|Di|); for each xj ∈ bit do g(xj)← ∇l(w;xj); ḡ(xj)← Clip(g(xj), S) ; g̃ ← 1L (∑ j ḡ(xj) +N ( 0, σ2S2 )) ; w ← w − ηg̃ ; ∆wit ← w − wt−1 ; return ∆wit Procedure Clip(∆, S) return ∆/max ( 1, ‖∆‖2 S ) Under FedSGD, when each local model performs one step of DP-SGD (Abadi et al., 2016), the randomized mechanismM that outputs the global model preserves the instance-level DP. We can regard the one-step update for the global model in Algorithm 2 as: wt ← wt−1 − 1 m ∑ i∈Ut η L ∑ xj∈bit ḡ(xj) +N ( 0, σ2S2 ) (5) Proposition 2 (InsDP-FedSGD Privacy Guarantee). There exist constants c1 and c2 so that given batch sampling probability p, and user sampling probability q, the number of selected users each round m, and FL rounds T , for any ε < c1(pq)2T , if σ ≥ c2 pq √ T log(1/δ) √ m , the randomized mechanismM in Algorithm 2 is ( , δ)-DP for any δ > 0. Proof. i) In instance-level DP, we consider the sampling probability of each instance under the combination of user-level sampling and batch-level sampling. Since the user-level sampling probability is q and the batch-level sampling probablity is p, each instance is sampled with probability pq. ii) Additionally, since the sensitivity of instance-wise gradient w.r.t one instance is S, after local gradient descent and server FL aggregation, the equivalent sensitivity of global model w.r.t one instance is S′ = ηSLm according to Eq (5). iii) Moreover, since the local noise is ni ∼ N (0, σ 2S2) , then the “virtual” global noise is n = ηmL ∑ i∈Ut ni according to Eq (5), so n ∼ N (0, η2σ2S2 mL2 ). Let η2σ2S2 mL2 = σ ′2S′ 2 such that n ∼ N (0, σ′2S′2). Because S′ = ηSLm , the equivalent global noise level is σ′2 = σ2m, i.e., σ′ = σ √ m. In Proposition 2, we use pq to represent instance-level sampling probability, T to represent FL training rounds, σ √ m to represent the equivalent global noise level. The rest of the proof follows the proof of Theorem 1 in (Abadi et al., 2016). We defer the DP bound evaluation comparison between InsDP-FedSGD and Dopamine to Appendix A.3.7. A.1.3 INSDP-FEDAVG Algorithm 3: InsDP-FedAvg. Input: Initial model w0, user sampling probability q, privacy parameter δ, local clipping threshold S, local noise level σ, local datasets D1, ..., DN , local steps V , learning rate η, batch sampling probability p. Output: FL model wT and privacy cost Server executes: for each round t = 1 to T do m← max(q ·N, 1); Ut ← (random subset of m users); for each user i ∈ Ut in parallel do ∆wit, i t ← UserUpdate(i, wt−1) ; for each user i /∈ Ut do it ← it−1 ; wt ← wt−1 + 1m ∑ i∈Ut ∆w i t ; t =M.parallel composition({ it}i∈[N ]) = T ; return wT , Procedure UserUpdate(i, wt−1) w ← wt−1 ; for each local step v = 1 to V do b ←(uniformly sample a batch from Di with probability p = L/|Di|); for each xj ∈ b do g(xj)← ∇l(w;xj); ḡ(xj)← Clip(g(xj), S) ; g̃ ← 1L ( ∑ j ḡ(xj) +N ( 0, σ2S2 ) ); w ← w − ηg̃ ; Mi.accum priv spending(σ, p, δ) ; it =Mi.get privacy spent() ; ∆wit ← w − wt−1 ; return ∆wit, it Procedure Clip(∆, S) return ∆/max ( 1, ‖∆‖2 S ) Lemma 2 (InsDP-FedAvg Privacy Guarantee when T = 1). 
In Algorithm 3, when T = 1, suppose local mechanismMi satisfies ( i, δ)-DP, then global mechanismM satisfies (maxi∈[N ] i, δ)-DP. Proof. We can regard federated learning as partitioning a dataset D into N disjoint subsets {D1, D2, . . . , DN}. N mechanisms {M1, . . . ,MN} are operated on these N parts separately and eachMi satisfies its own i-DP for i ∈ [1, N ]. Note that if i-th user is not selected , i = 0 because local dataset Di is not accessed and there is no privacy cost. Without loss of generality, we assume the modified data sample x′ (x → x′ causes D → D′) is in the local dataset of k-th client Dk. Let D,D′ be two neighboring datasets (Dk, D′k are also two neighboring datasets). M is randomized mechanism that outputs the global model, andMi is the randomized mechanism that outputs the local model update ∆wi. Suppose w0 is the initialized and deterministic global model, and {z1, . . . , zN} are randomized local updates. We have a sequence of computations {z1 = M1(D1), z2 = M2(D2; z1), z3 = M3(D3; z1, z2) . . .} and z = M(D) = w0 + ∑N i=1 zi. Note that if i-th user is not selected , zi = 0. According to the parallel composition (Tu), we have Pr[M(D) = z] = Pr[M1(D1) = z1] Pr[M2(D2; z1) = z2] . . .Pr[MN (DN ; z1, . . . , zN−1) = zN ] ≤ exp( k) Pr[Mk(D′k; z1, . . . , zk−1) = zk] ∏ i6=k Pr[Mi(Di; z1, . . . , zi−1) = zi] = exp( k) Pr[M(D′) = z] SoM satisfies k-DP when the modified data sample lies in the subset Dk. Consider the worst case of where the modified data sample could fall in, we know thatM satisfies (maxi∈[N ] i)-DP. We recall Theorem 4. Theorem 4 (InsDP-FedAvg Privacy Guarantee). In Algorithm 3, during round t, if the local mechanismMi satisfies ( it, δ)-DP, then the global mechanismM satisfies ( maxi∈[N ] i t, δ ) -DP. Proof. Again, without loss of generality, we assume the modified data sample x′ (x → x′ causes D → D′) is in the local dataset of k-th user Dk. We first consider the case when all users are selected. At each round t, N mechanisms are operated on N disjoint parts and eachMit satisfies own i-DP where i is the privacy cost for accessing the local dataset Di for one round (not accumulating over previous rounds). Let D,D′ be two neighboring datasets (Dk, D′k are also two neighboring datasets). Suppose z0 = Mt−1(D) is the aggregated randomized global model at round t − 1, and {z1, . . . , zN} are the randomized local updates at round t, we have a sequence of computations {z1 = M1t (D1; z0), z2 = M2t (D2; z0, z1), z3 = M3t (D3; z0, z1, z2) . . .} and z =Mt(D) = z0 + ∑N i zi. We first consider the sequential composition (Dwork & Roth, 2014) to accumulate the privacy cost over FL rounds. According to parallel composition, we have Pr[Mt(D) = z] = Pr[Mt−1(D) = z0] N∏ i=1 Pr[Mit(Di; z0, z1, . . . , zi−1) = zi] = Pr[Mt−1(D) = z0] Pr[Mkt (Dk; z0, z1, . . . , zk−1) = zk] ∏ i 6=k Pr[Mit(Di; z0, z1, . . . , zi−1) = zi] ≤ exp( t−1) Pr[Mt−1(D′) = z0] exp( k) Pr[Mkt (D′k; z0, z1, . . . , zk−1) = zk] ∏ i 6=k Pr[Mit(Di; z0, z1, . . . , zi−1) = zi] = exp( t−1 + k) Pr[Mt(D′) = z] Therefore,Mt satisfies t-DP, where t = t−1 + k. Because the modified data sample always lies in Dk over t rounds and 0 = 0, we can have t = t k, which means that the privacy guarantee of global mechanismMt is only determined by the local mechanism of k-th user over t rounds. Moreover, moment accountant (Abadi et al., 2016) is known to reduce the privacy cost from O(t) to O( √ t). 
We can use the more advanced composition, i.e., moment accountant, instead of the sequential composition, to accumulate the privacy cost for local mechanismMk over t FL rounds. In addition, we consider user subsampling. As described in Algorithm 3, if the user i is not selected at round t, then its local privacy cost is kept unchanged at this round. Take the worst case of where x′ could lie in, at round t,M satisfies t-DP, where t = maxi∈[N ] it, local mechanism M i satisfies it-DP, and the local privacy cost i t is accumulated via local moment accountant in i-th user over t rounds. A.2 THREAT MODELS We consider targeted poisoning attacks of two types. In backdoor attacks (Gu et al., 2019; Chen et al., 2017a), the goal is to embed a backdoor pattern (i.e., a trigger) during training such that any test input with such pattern will be mis-classified as the target. In label flipping attacks (Biggio et al., 2012; Huang et al., 2011), the labels of clean training examples from one source class are flipped to the target class while the features of the data are kept unchanged. In FL, the purpose of backdoor attacks is to manipulate local models with backdoored local data, so that the global model would behave normally on untampered data samples while achieving high attack success rate on clean data (Bagdasaryan et al., 2020). Given the same purpose, distributed backdoor attack (DBA) (Xie et al., 2019) decomposes the same backdoor pattern to several smaller ones and embeds them to different local training sets for different adversarial users. The goal of label flipping attack against FL is to manipulate local datasets with flipped labels such that the global model will mis-classify the test data in the source class as the target class. The model replacement (Bagdasaryan et al., 2020) is a more powerful approach to perform the above attacks, where the attackers first train the local models using the poisoned datasets and then scale the malicious updates before sending them to the server. This way, the attacker’s updates would have a stronger impact on the FL model. We use the model replacement method to perform poisoning attacks and study the effectiveness of DPFL. For UserDP-FedAvg, we consider backdoor, distributed backdoor, and label flipping attacks via the model replacement approach. Next, we formalize the attack process and introduce the notations. Suppose the attacker controls k adversarial users, i.e., there are k attackers out of N users. Let B be the original user set of N benign users, and B′ be the user set that contains k attackers. Let D := {D1, D2, . . . , DN} be the union of original benign local datasets across all users. For a data sample zij := {xij , yij} in Di, we denote its backdoored version as z′ i j := {xij + δx, y∗}, where δx is the backdoor pattern, y∗ is the targeted label; the distributed backdoor attack (DBA) version as z′ i j := {xij + δix, y∗}, where δix is the distributed backdoor pattern for attacker i; the label-flipped version as z′ij := {xij , y∗}. Note that the composition of all DBA patterns is equivalent to the backdoor pattern, i.e., ∑k i=1 δ i x = δx. We assume attacker i has αi fraction of poisoned samples in its local dataset D′i. Let D ′ := {D′1, . . . , D′k−1, D′k, Dk+1, . . . , DN} be the union of local datasets when k attackers are present. The adversarial user i performs model replacement by scaling the model update with hyperparameter γ before submitting it to the server, i.e., ∆wit ← γ∆wit. 
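For concreteness, a toy sketch of the backdoor poisoning and model-replacement scaling described above is given below; the square trigger, image shapes, and γ value are illustrative stand-ins for the actual patterns in Figure 6.

```python
import numpy as np

def add_trigger(x, value=1.0):
    """Stamp a small square trigger in the top-left corner of an image array
    (an illustrative stand-in for the backdoor pattern delta_x)."""
    x = x.copy()
    x[:3, :3] = value
    return x

def poison_local_dataset(images, labels, alpha, target_label, seed=0):
    """Backdoor an alpha fraction of one user's local dataset: add the trigger
    and relabel the poisoned samples with the adversarial target class y*."""
    rng = np.random.default_rng(seed)
    poison = set(rng.choice(len(images), size=int(alpha * len(images)),
                            replace=False).tolist())
    new_images = [add_trigger(im) if i in poison else im for i, im in enumerate(images)]
    new_labels = [target_label if i in poison else y for i, y in enumerate(labels)]
    return new_images, new_labels

def model_replacement(delta_w, gamma=50.0):
    """Scale the malicious local update before it is sent to the server."""
    return gamma * delta_w

# Toy example: 10 grayscale 28x28 images, poison all of them (alpha = 1.0).
imgs = [np.zeros((28, 28)) for _ in range(10)]
lbls = [1] * 10
poisoned_imgs, poisoned_lbls = poison_local_dataset(imgs, lbls, alpha=1.0, target_label=0)
```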
In our threat model, we consider an attacker that follows our training protocol and has no control over which users are sampled. For InsDP-FedAvg, we consider both backdoor and label flipping attacks. Since distributed backdoor and model replacement attacks are proposed for adversarial users rather than adversarial instances, we do not consider them for instance-level DPFL. There are k backdoored or label-flipped instances {z′1, z′2, . . . , z′k}, which could be controlled by the same or multiple users. In our threat model, we consider an attacker that follows our training protocol and has no control over which data partition (or batch) is sampled. Note that we do not assume that the adversaries' poisoned data are always sampled. In our algorithms, each batch is randomly subsampled, so the adversaries cannot control whether poisoned data are sampled in each step.

A.3 EXPERIMENTAL DETAILS AND ADDITIONAL RESULTS

A.3.1 DATASETS AND MODELS

We evaluate our robustness certification results with two datasets: MNIST (LeCun & Cortes, 2010) and CIFAR-10 (Krizhevsky, 2009). For each dataset, we use the corresponding standard CNN architectures in the differential privacy library (opa, 2021) of PyTorch (Paszke et al., 2019).

MNIST: We study an image classification problem of handwritten digits in MNIST. It is a dataset of 70000 28x28 pixel images of digits in 10 classes, split into a train set of 60000 images and a test set of 10000 images. Except for Section A.3.6, we consider binary classification on classes 0 and 1, making our train set contain 12665 samples and the test set 2115 samples. The model consists of two Conv-ReLU-MaxPooling layers and two linear layers.

CIFAR-10: We study image classification of vehicles and animals in CIFAR-10. This is a harder dataset than MNIST, consisting of 60000 32x32x3 images, split into a train set of 50000 and a test set of 10000. Except for Section A.3.6, we consider binary classification on the classes airplane and bird, making our train set contain 10000 samples and the test set 2000 samples. The model consists of four Conv-ReLU-AveragePooling layers and one linear layer. When training on CIFAR-10, we follow the standard practice for differential privacy (Abadi et al., 2016; Jagielski et al., 2020) and fine-tune a model pre-trained non-privately on CIFAR-100, a similarly sized but more complex benchmark dataset. We can achieve reasonable performance on CIFAR-10 by training (fine-tuning) for only a few rounds.

Sent140: We consider a text sentiment analysis task on tweets from Sentiment140 (Go et al.) (Sent140), which involves classifying Twitter posts as positive or negative. We use a two-layer LSTM binary classifier with 256 hidden units and pretrained 300D GloVe embeddings (Pennington et al., 2014). Each Twitter account corresponds to a device. We use the same network architecture, non-iid dataset partition method, number of selected users per round, learning rate, batch size, etc. as in (Li et al., 2018), which are summarized in Table 1.

A.3.2 TRAINING DETAILS

We simulate the federated learning setup by splitting the training datasets for N FL users in an i.i.d. manner. FL users run SGD with learning rate η, momentum 0.9, and weight decay 0.0005 to update the local models. The training parameter setups are summarized in Table 1.
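The exact layer widths and learning rates are given in Table 1 rather than in the text, so the following PyTorch sketch only illustrates the kind of MNIST architecture described above (two Conv-ReLU-MaxPooling blocks followed by two linear layers) together with the stated local SGD configuration (momentum 0.9, weight decay 0.0005); the channel sizes, hidden dimension, and learning rate used here are assumptions for illustration.

```python
import torch
import torch.nn as nn

class MNISTConvNet(nn.Module):
    """Two Conv-ReLU-MaxPooling blocks followed by two linear layers (binary classification)."""
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2),   # 28x28 -> 12x12
            nn.Conv2d(16, 32, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2),  # 12x12 -> 4x4
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 4 * 4, 64), nn.ReLU(),
            nn.Linear(64, num_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = MNISTConvNet()
# Local optimizer configuration used by each FL user (learning rate is a placeholder).
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9, weight_decay=0.0005)
print(model(torch.zeros(1, 1, 28, 28)).shape)  # torch.Size([1, 2])
```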
Following (McMahan et al., 2018), which uses δ ≈ 1/N^1.1 as the privacy parameter, for UserDP-FedAvg we set δ = 0.0029 according to the total number of users, and for InsDP-FedAvg we set δ = 0.00001 according to the total number of training samples. Next we summarize the privacy guarantees and clean accuracy offered when we study the certified prediction and certified attack cost, which are also the training parameter setups when k = 0 in Figures 1, 2, 3, 4, 5, and 8.

User-level DPFL. In order to study the user-level certified prediction under different privacy guarantees, for MNIST, we set ε to be 0.2808, 0.4187, 0.6298, 0.8694, 1.8504, 2.8305, 4.8913, 6.9269, which are obtained by training the UserDP-FedAvg FL model for 3 rounds with noise level σ = 3.0, 2.3, 1.8, 1.5, 1.0, 0.8, 0.6, 0.5, respectively (Figure 1(a)). For CIFAR-10, we set ε to be 0.1083, 0.1179, 0.1451, 0.2444, 0.3663, 0.4527, 0.5460, 0.8781, which are obtained by training the UserDP-FedAvg FL model for one round with noise level σ = 10.0, 8.0, 6.0, 4.0, 3.0, 2.6, 2.3, 1.7, respectively (Figure 1(b)). The clean accuracy (averaged over 1000 runs) of UserDP-FedAvg under non-DP training (ε = ∞) and DP training (varying ε) on MNIST and CIFAR-10 is reported in Table 2 and Table 3, respectively.

To certify the attack cost under different numbers of adversarial users k (Figure 2), for MNIST, we set the noise level σ to 2.5. When k = 0, after training UserDP-FedAvg for T = 3, 4, 5 rounds, we obtain FL models with privacy guarantee ε = 0.3672, 0.4025, 0.4344 and clean accuracy (averaged over M runs) 86.69%, 88.76%, 88.99%. For CIFAR-10, we set the noise level σ to 3.0. After training UserDP-FedAvg for T = 3, 4 rounds under k = 0, we obtain FL models with privacy guarantee ε = 0.5346, 0.5978 and clean accuracy 78.63%, 78.46%.

With the interest of certifying the attack cost under different user-level DP guarantees (Figure 3, Figure 5), we explore the empirical attack cost and the certified attack cost lower bound given different ε. For MNIST, we set the privacy guarantee ε to be 1.2716, 0.8794, 0.6608, 0.5249, 0.4344, which are obtained by training UserDP-FedAvg FL models for 5 rounds under noise level σ = 1.3, 1.6, 1.9, 2.2, 2.5, respectively, and the clean accuracy of the corresponding models is 99.50%, 99.06%, 96.52%, 93.39%, 88.99%. For CIFAR-10, we set the privacy guarantee ε to be 1.600, 1.2127, 1.0395, 0.8530, 0.7616, 0.6543, 0.5978, which are obtained by training UserDP-FedAvg FL models for 4 rounds under noise level σ = 1.5, 1.8, 2.0, 2.3, 2.5, 2.8, 3.0, respectively, and the clean accuracy of the corresponding models is 85.59%, 84.52%, 83.23%, 81.90%, 81.27%, 79.23%, 78.46%.

Instance-level DPFL. To certify the prediction for instance-level DPFL under different privacy guarantees, for MNIST, we set the privacy cost ε to be 0.2029, 0.2251, 0.2484, 0.3593, 0.4589, 0.6373, 1.0587, 3.5691, which are obtained by training InsDP-FedAvg FL models for 3 rounds with noise level σ = 15, 10, 8, 5, 4, 3, 2, 1, respectively (Figure 1(c)). For CIFAR-10, we set the privacy cost ε to be 0.3158, 0.3587, 0.4221, 0.5130, 0.6546, 0.9067, 1.4949, 4.6978, which are obtained by training InsDP-FedAvg FL models for one round with noise level σ = 8, 7, 6, 5, 4, 3, 2, 1, respectively (Figure 1(d)). The clean accuracy (averaged over 1000 runs) of InsDP-FedAvg under non-DP training (ε = ∞) and DP training (varying ε) on MNIST and CIFAR-10 is reported in Table 4 and Table 5, respectively.
To study the certified attack cost under different numbers of adversarial instances k, for MNIST, we set the noise level σ to 10. When k = 0, after training InsDP-FedAvg for T = 4, 9 rounds, we obtain FL models with privacy guarantee ε = 0.2383, 0.304 and clean accuracy (averaged over M runs) 96.40%, 96.93% (Figure 8(a)(b)). For CIFAR-10, we set the noise level σ to 8.0. After training InsDP-FedAvg for one round under k = 0, we obtain FL models with privacy guarantee ε = 0.3158 and clean accuracy 61.78% (Figure 4(a)(b)).

In order to study the empirical attack cost and the certified attack cost lower bound under different instance-level DP guarantees, we set the privacy guarantee ε to be 0.5016, 0.311, 0.2646, 0.2318, 0.2202, 0.2096, 0.205 for MNIST, which are obtained by training InsDP-FedAvg FL models for 6 rounds under noise level σ = 5, 8, 10, 13, 15, 18, 20, respectively, and the clean accuracy of the corresponding models is 99.60%, 98.81%, 97.34%, 92.29%, 88.01%, 80.94%, 79.60% (Figure 8 (c)(d)). For CIFAR-10, we set the privacy guarantee ε to be 1.261, 0.9146, 0.7187, 0.5923, 0.5038, 0.4385, which are obtained by training InsDP-FedAvg FL models for 2 rounds under noise level σ = 3, 4, 5, 6, 7, 8, respectively, and the clean accuracy of the corresponding models is 84.47%, 80.99%, 76.01%, 68.65%, 63.07%, 60.65% (Figure 4 (c)(d)).

To explore the lower bound for k given τ under different instance-level DP guarantees, for MNIST, we set the noise level σ to 5, 8, 10, 13, 20, respectively, to obtain instance-level DP FL models after 10 rounds with privacy guarantee ε = 0.6439, 0.3937, 0.3172, 0.2626, 0.2179 and clean accuracy 99.58%, 98.83%, 97.58%, 95.23%, 85.72% (Figure 5(c)). For CIFAR-10, we set the noise level σ to 3, 4, 5, 6, 7, 8 and train InsDP-FedAvg for T = 3 rounds, to obtain FL models with privacy guarantee ε = 1.5365, 1.1162, 0.8777, 0.7238, 0.6159, 0.5361 and clean accuracy 84.34%, 80.27%, 74.62%, 66.94%, 62.14%, 59.75% (Figure 5(d)).

A.3.3 ADDITIONAL IMPLEMENTATION DETAILS

(Threat Models) For the attacks against UserDP-FedAvg, by default, the local poison fraction is α = 100% and the scale factor is γ = 50. We use the same parameter setups for all k attackers. In terms of label flipping attacks, the attackers swap the label of images in the source class (digit 1 for MNIST; bird for CIFAR-10) into the target label (digit 0 for MNIST; airplane for CIFAR-10). In terms of backdoor attacks in MNIST and CIFAR-10, the attackers add a backdoor pattern, as shown in Figure 6 (left), to images and swap the label of any sample with such a pattern into the target label (digit 0 for MNIST; airplane for CIFAR-10). In terms of distributed backdoor attacks, Figure 6 (right) shows an example where the triangle pattern is evenly decomposed into k = 4 parts, which are used as the distributed patterns for k = 4 attackers, respectively. For the cases where there are more or fewer distributed attackers, a similar decomposition strategy is adopted. For the attacks against InsDP-FedAvg, the same target classes and backdoor patterns are used as for UserDP-FedAvg. The parameter setups are the same for all k poisoned instances.

(Robustness Certification) We certified 2115/2000/1122 test samples from the MNIST/CIFAR-10/Sent140 test sets. In Theorem 3 and Corollary 1, which are related to the certified attack cost, C̄ specifies the range of C(·). In the implementation, C̄ is set to be larger than the maximum empirical attack cost evaluated on the test sets (see Table 1 for details).
For each dataset, we use the same C̄ for cost function C defined in Example 1 and Example 2. When using Monte-Carlo sampling, we run M = 1000 times for certified accuracy, and M = 100 times for certified attack cost in all experiments. (Machines) We simulate the federated learning setup (1 server and N users) on a Linux machine with Intel® Xe
1. What are the strengths and weaknesses of the paper regarding its focus, contributions, and comparisons with other works?
2. How does the reviewer assess the connection between robustness and privacy in the paper's approach?
3. What are the concerns regarding the threat models of the proposed DPFL algorithms, particularly for InsDP-FedSGD?
4. Do you have any questions about the paper's tendency to oversell its contribution, especially in comparison to previous works such as Lecuyer et al. 2019 and Ma et al. 2019?
5. What is your opinion on how the paper introduces results as specifically concerning FL when the certifiability results use properties of DP, regardless of the setting?
6. What are your thoughts on the privacy accounting method used in the paper, and how does it relate to existing tools and newer papers like Mironov et al. 2019?
Summary Of The Paper Review
Summary Of The Paper

Update after rebuttal and discussions: I thank the authors for taking the time to discuss the issues pointed out in the reviews at length. Unfortunately, I am still not convinced that the paper is ready for publication. My main concerns: There are now experiments in the updated paper claimed to be DP which are not (median clipping). I continue to have doubts about the subsampling amplification. Simply stating that the sampling is random is not good enough, since the key issue is the added uncertainty due to the subsampling: if the sampling does not increase the adversary's uncertainty, there is no amplification. As an immediate remedy, I suggest the authors state the threat model more clearly. I still think the paper can be improved a lot by taking the time to rewrite it, focusing on the main contribution of certified robustness under DP and on clarity of the presentation.

The paper looks at the robustness properties of differentially private (DP) federated learning (FL), focusing on learning classification models from labeled data. The main idea is to turn DP privacy guarantees into certifiable robustness properties. The authors look at two certifiable properties, namely certified prediction (data poisoning does not alter the most likely label) and certified attack cost (there is a lower bound on the loss the given attack tries to minimize). They continue to show that DP models in general guarantee these on some level that depends on the privacy bounds. The paper also presents several DPFL learning algorithms for user- and instance-level DP.

Review

Strong points:
i) The problem of data poisoning is an important one, and the connection between robustness and privacy is a nice one to use for getting provable guarantees.
ii) After doing some sporadic checking, I have not found any actual mistakes in the proofs.

Weak points (and comments/questions for the authors):
In my opinion the paper lacks focus: there are plenty of DPFL algorithms introduced, but very little testing and comparison to existing work, while what I take as the main contribution, i.e., the certified robustness, is overshadowed by the FL parts and seems unfinished. It is generally somewhat hard to tell which parts are meant as original contributions and which are referencing existing work (e.g., Sec 4.1: is this meant as an original contribution or just paraphrasing existing work?). The threat models of the proposed DPFL algorithms are not quite clear to me: e.g., for InsDP, are the DP guarantees supposed to hold against adversaries who can poison some samples during the training? If so, this should probably affect the privacy guarantees resulting from subsampling in InsDP-FedSGD (since the adversary could have knowledge of whether the data partition in question has been chosen in the update?). What are the corresponding adversaries for the other DPFL algorithms? The paper tends to oversell its contribution: 4.1) The certified prediction Thm 1 and its proof match almost exactly with Prop 1 from Lecuyer et al. 2019; the same goes for Thm 3 and Cor 1 compared to Ma et al. 2019 Thm 4 and Cor 6. Although both works are cited in the current paper, I do feel that this near-identity should be clearly and unambiguously stated when introducing these results.
There is some discussion on this right before Section 6 noting similarities with Lecuyer et al., but stating that "ours focus on the robustness against training-time attacks in FL, which is more challenging considering the distributed nature and the model training dynamics, i.e., the analysis of the privacy budget over training rounds". But the federated setting only shows up in ascertaining that a training algorithm satisfies DP and in considering a suitable neighbourhood definition; it clearly does not complicate matters in the certified robustness theorems. If anything, I would think that the no-show of any notion of federation in the proofs shows that at bottom these problems boil down to the ones considered by Lecuyer et al. & Ma et al. and are therefore not any harder. In general I find it a bit misleading that the results are introduced as somehow specifically concerning FL, when the certifiability results actually just use properties of DP, no matter whether the setting is federated, otherwise distributed, or centralised; the main things are the neighbourhood definition (to determine what the adversary can control) and the privacy parameters. 4.2) As for the privacy accounting, since the proposed DPFL algorithms seem to be simple modifications of existing ones, the privacy cost can be readily and accurately calculated using existing tools, and this seems to be exactly what the authors do. It is therefore hard to see what value the hard-to-read moments-accountant-type Prop. 1 brings (note also that there exists a newer and much clarified paper on RDP, which results in improved bounds [1]).

References:
[1] Mironov et al. 2019: Rényi Differential Privacy of the Sampled Gaussian Mechanism
ICLR
Title Certified Robustness for Free in Differentially Private Federated Learning Abstract Federated learning (FL) provides an efficient training paradigm to jointly train a global model leveraging data from distributed users. As the local training data comes from different users who may not be trustworthy, several studies have shown that FL is vulnerable to poisoning attacks where adversaries add malicious data during training. On the other hand, to protect the privacy of users, FL is usually trained in a differentially private manner (DPFL). Given these properties of FL, in this paper, we aim to ask: Can we leverage the innate privacy property of DPFL to provide robustness certification against poisoning attacks? Can we further improve the privacy of FL to improve such certification? To this end, we first investigate both user-level and instance-level privacy for FL, and propose novel randomization mechanisms and analysis to achieve improved differential privacy. We then provide two robustness certification criteria: certified prediction and certified attack cost for DPFL on both levels. Theoretically, given different privacy properties of DPFL, we prove their certified robustness under a bounded number of adversarial users or instances. Empirically, we conduct extensive experiments to verify our theories under different attacks on a range of datasets. We show that the global model with a tighter privacy guarantee always provides stronger robustness certification in terms of the certified attack cost, while it may exhibit tradeoffs regarding the certified prediction. We believe our work will inspire future research of developing certifiably robust DPFL based on its inherent properties. 1 INTRODUCTION Federated Learning (FL), which aims to jointly train a global model with distributed local data, has been widely applied in different applications, such as finance (Yang et al., 2019b), medical analysis (Brisimi et al., 2018), and user behavior prediction (Hard et al., 2018; Yang et al., 2018; 2019a). However, the fact that the local data and the training process are entirely controlled by the local users who may be adversarial raises great concerns from both security and privacy perspectives. In particular, recent studies show that FL is vulnerable to different types of training-time attacks, such as model poisoning (Bhagoji et al., 2019), backdoor attacks (Bagdasaryan et al., 2020; Xie et al., 2019; Wang et al., 2020), and label-flipping attacks (Fung et al., 2020). Further, privacy concerns have motivated the need to keep the raw data on local devices without sharing. However, sharing other indirect information such as gradients or model updates as part of the FL training process can also leak sensitive user information (Zhu et al., 2019; Geiping et al., 2020; Bhowmick et al., 2018; Melis et al., 2019). As a result, approaches based on differential privacy (DP) (Dwork & Roth, 2014), homomorphic encryption (Bost et al., 2015; Rouhani et al., 2018; Gilad-Bachrach et al., 2016), and secure multiparty computation (Ben-Or et al., 1988; Bonawitz et al., 2017) have been proposed to protect privacy of users in federated learning. In particular, differentially private federated learning (DPFL) provides strong information theoretic guarantees on user privacy, while causing relatively low performance overhead (Li et al., 2020b). Several defenses have been proposed to defend against poisoning attacks in FL. 
For instance, various robust aggregation methods (Fung et al., 2020; Pillutla et al., 2019; Blanchard et al., 2017; El Mhamdi et al., 2018; Chen et al., 2017b; Yin et al., 2018; Fu et al., 2019; Li et al., 2020a) identify and down-weight the malicious updates during aggregation or estimate a true “center” of the received updates rather than taking a weighted average. Other methods include robust federated training protocols (e.g., clipping (Sun et al., 2019), noisy perturbation (Sun et al., 2019), and additional evaluation during training (Andreina et al., 2020)) and post-training strategies (e.g., fine-tuning and pruning (Wu et al., 2020)) that repair the poisoned global model. However, as these works mainly focus on providing empirical robustness for FL, they have been shown to be vulnerable to newly proposed strong adaptive attacks (Wang et al., 2020; Xie et al., 2019; Baruch et al., 2019; Fang et al., 2020). Hence, in this paper, we aim to develop certified robustness guarantees for FL against different poisoning attacks. Further, as differentially private federated learning (DPFL) is often used to protect user privacy, we also aim to ask: Can we leverage the innate privacy property of DPFL to provide robustness certification against poisoning attacks for free? Can we further improve the privacy of FL so as to improve its certified robustness? Recent studies suggest that differential privacy (DP) is inherently related with robustness of ML models. Intuitively, DP is designed to protect the privacy of individual data, such that the output of an algorithm remains essentially unchanged when one individual input point is modified. Hence, the prediction of a DP model will be less impacted by a small amount of poisoned training data. Consequently, DP has been used to provide both theoretical and empirical defenses against evasion attacks (Lecuyer et al., 2019a) and data poisoning attacks (Ma et al., 2019; Hong et al., 2020) on centralized ML models. It has also been used as an empirical defense against backdoor attacks (Gu et al., 2019) in federated learning (Bagdasaryan et al., 2020; Sun et al., 2019), although no theoretical guarantee is provided. To the best of our knowledge, despite of the wide application of DPFL,there is no work providing certified robustness for DPFL leveraging its privacy property. In this paper, we aim to leverage the inherent privacy property of DPFL to provide robustness certification for FL against poisoning attacks for free. Our challenges include: (1) performing privacy analysis over training rounds in DPFL algorithms and (2) theoretically guaranteeing certified robustness based on DP properties under a given privacy budget. We propose two robustness certification criteria for FL: certified prediction and certified attack cost under different attack constraints. We consider both user-level DP (Agarwal et al., 2018; Geyer et al., 2017; McMahan et al., 2018; Asoodeh & Calmon, 2020; Liang et al., 2020) which is widely guaranteed in FL, and instance-level DP (Malekzadeh et al., 2021; Zhu et al., 2021) which is less explored in FL. We prove that a FL model satisfying user-level DP is certifiably robust against a bounded number of adversarial users. In addition, we propose InsDP-FedAvg algorithm to improve instance-level DP in FL, and prove that instance-level DPFL is certifiably robust against a bounded number of adversarial instances. We also study the correlation between privacy guarantee and certified robustness of FL. 
While stronger privacy guarantees result in greater attack cost, overly strong privacy can hurt the certified prediction by introducing too much noise in the training process. Thus, the optimal certified prediction is often achieved under a proper balance between privacy protection and utility loss. Key Contributions. Our work takes the first step to provide certified robustness in DPFL for free against poisoning attacks. We make contributions on both theoretical and empirical fronts. • We propose two criteria for certified robustness of FL against poisoning attacks (Section 4.2). • Given a FL model satisfying user-level DP, we prove that it is certifiably robust against arbitrary poisoning attacks with a bounded number of adversarial users (Section 4.2). • We propose InsDP-FedAvg algorithm to improve FL instance-level privacy guarantee (Sec- tion 5.1). We prove that instance-level DPFL is certifiably robust against the manipulation of a bounded number of instances during training (Section 5.2). • We conduct extensive experiments on image classification of MNIST, CIFAR-10 and sentiment analysis of tweets to verify our proposed certifications of two robustness criteria, and compare the certified results of different DPFL algorithms (Section 6). 2 RELATED WORK Differentially Private Federated Learning. Different approaches are proposed to guarantee the user-level privacy for FL. (Geyer et al., 2017; McMahan et al., 2018) clip the norm of each local update, add Gaussian noise on the summed update, and characterize its privacy budget via moment accountant (Abadi et al., 2016). (McMahan et al., 2018) extends (Geyer et al., 2017) to language models. In CpSGD (Agarwal et al., 2018), each user clips and quantizes the model update, and adds noise drawn from Binomial distribution, achieving both communication efficiency and DP. (Bhowmick et al., 2018) derive DP for FL via Rényi divergence (Mironov, 2017) and study its protection against data reconstruction attacks. (Liang et al., 2020) utilizes Laplacian smoothing for each local update to enhance the model utility. Instead of using moment accountant to track privacy budget over FL rounds as previous work, (Asoodeh & Calmon, 2020) derives the DP parameters by interpreting each round as a Markov kernel and quantify its impact on privacy parameters. All these works only focus on providing user-level privacy, leaving its robustness property unexplored. In terms of instance-level privacy for FL, there are only a few work (Malekzadeh et al., 2021; Zhu et al., 2021). Dopamine (Malekzadeh et al., 2021) provides instance-level privacy guarantee when each user only performs one step of DP-SGD (Abadi et al., 2016) at each FL round. However, it cannot be applied to multi-step SGD for each user, thus it cannot be extended to the general FL setting FedAvg (McMahan et al., 2017). (Zhu et al., 2021) privately aggregate the labels from users in a voting scheme, and provide DP guarantees on both user level and instance level. However, it is also not applicable to standard FL, since it does not allow aggregating the gradients or updates. Differential Privacy and Robustness. In standard (centralized) learning, Pixel-DP (Lecuyer et al., 2019a) is proposed to certify the model robsutness against evasion attacks. However, it is unclear how to leverage it to certify against poisoning attacks. 
To certify the robustness against poisoning attacks, (Ma et al., 2019) show that private learners are resistant to data poisoning and analyze the lower bound of the attack cost against poisoning attacks for regression models. Here we certify the robustness in the DPFL setting with such a lower bound as one of our certification criteria and additionally derive its upper bounds. (Hong et al., 2020) show that the off-the-shelf mechanism DP-SGD (Abadi et al., 2016), which clips per-sample gradients and adds Gaussian noise during training, can serve as a defense against poisoning attacks empirically. In federated learning, empirical works (Bagdasaryan et al., 2020; Sun et al., 2019) show that DPFL can mitigate backdoor attacks; however, none of these works provides certified robustness guarantees for DPFL against poisoning attacks.

3 PRELIMINARIES

We start by providing some background on differential privacy (DP) and federated learning (FL).

Differential Privacy (DP). DP is a formal, mathematically rigorous definition (and standard) of privacy that intuitively guarantees that a randomized algorithm behaves similarly on similar inputs and that the output of the algorithm is about the same whether or not an individual's data is included as part of the input (Dwork & Roth, 2014).

Definition 1 ((ε, δ)-DP (Dwork et al., 2006)). A randomized mechanism M : D → Θ with domain D and range Θ satisfies (ε, δ)-DP if for any pair of two adjacent datasets d, d′ ∈ D, and for any possible (measurable) output set E ⊆ Θ, it holds that Pr[M(d) ∈ E] ≤ e^ε Pr[M(d′) ∈ E] + δ.

In Definition 1, when M is a training algorithm for an ML model, the domain D and range Θ represent all possible training datasets and all possible trained models, respectively. Group DP for (ε, δ)-DP mechanisms follows immediately from Definition 1, where the privacy guarantee drops with the size of the group. Formally, it says:

Lemma 1 (Group DP). For a mechanism M that satisfies (ε, δ)-DP, it satisfies (kε, ((1 − e^{kε})/(1 − e^ε)) δ)-DP for groups of size k. That is, for any d, d′ ∈ D that differ by k individuals, and any E ⊆ Θ, it holds that Pr[M(d) ∈ E] ≤ e^{kε} Pr[M(d′) ∈ E] + ((1 − e^{kε})/(1 − e^ε)) δ.

Federated Learning. FedAvg was introduced by (McMahan et al., 2017) for FL to train a shared global model without direct access to the training data of users. Specifically, given an FL system with N users, at round t, the server sends the current global model w_{t−1} to the users in the selected user set U_t, where |U_t| = m = qN and q is the user sampling probability. Each selected user i ∈ U_t locally updates the model for E local epochs with its dataset D_i and learning rate η to obtain a new local model. Then, the user sends the local model update ∆w_t^i to the server. Finally, the server aggregates the updates from all selected users into the new global model w_t = w_{t−1} + (1/m) ∑_{i∈U_t} ∆w_t^i.

4 USER-LEVEL PRIVACY AND CERTIFIED ROBUSTNESS FOR FL

4.1 USER-LEVEL PRIVACY AND BACKGROUND

Definition 1 leaves the definition of adjacent datasets flexible, which depends on applications. To protect user-level privacy, adjacent datasets are defined as those differing by the data from one user (McMahan et al., 2018). The formal definition of user-level (ε, δ)-DP (Definition 2) is deferred to Appendix A.1. Following standard DPFL (Geyer et al., 2017; McMahan et al., 2018), we introduce one of the standard user-level DPFL algorithms, UserDP-FedAvg (Algorithm 1 in Appendix A.1). At each round, the server first clips the update from each user with a threshold S such that its ℓ2-sensitivity is upper bounded by S.
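The clipping operator just mentioned matches the Clip procedure spelled out in Algorithm 1 (Appendix A.1), i.e., Clip(∆, S) = ∆ / max(1, ‖∆‖2 / S); a minimal NumPy sketch for a flattened update, purely for illustration:

```python
import numpy as np

def clip_update(delta_w, S):
    """Project a local update onto the L2 ball of radius S: Delta / max(1, ||Delta||_2 / S)."""
    return delta_w / max(1.0, np.linalg.norm(delta_w) / S)

delta = np.array([3.0, 4.0])       # ||delta||_2 = 5
print(clip_update(delta, S=1.0))   # scaled down to norm 1
print(clip_update(delta, S=10.0))  # unchanged, already within the ball
```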
Next, the server sums up the updates, adds Gaussian noise sampled from N(0, σ²S²), and takes the average, i.e., w_t ← w_{t−1} + (1/m) (∑_{i∈U_t} Clip(∆w_t^i, S) + N(0, σ²S²)). Given the user sampling probability q, noise level σ, FL rounds T, and a δ > 0, the privacy analysis of UserDP-FedAvg satisfying (ε, δ)-DP is given by Proposition 1 in Appendix A.1, which is a generalization of (Abadi et al., 2016). The aim of Proposition 1 is to analyze the privacy budget ε in FL, which is accumulated as T increases due to the continuous access to training data. Following (Geyer et al., 2017; McMahan et al., 2018), the moments accountant (Abadi et al., 2016) is used in the privacy analysis.

4.2 CERTIFIED ROBUSTNESS OF USER-LEVEL DPFL AGAINST POISONING ATTACKS

Threat Model. We consider poisoning attacks against FL, where k adversarial users have poisoned instances in their local datasets, aiming to fool the trained DPFL global model. Such attacks include backdoor attacks (Gu et al., 2019; Chen et al., 2017a) and label flipping attacks (Biggio et al., 2012; Huang et al., 2011). The detailed description of these attacks is deferred to Appendix A.2. Note that our robustness certification is attack-agnostic under certain attack constraints (e.g., k), and we will verify our certification bounds with different poisoning attacks in Section 6. Next, we propose two criteria for the robustness certification in FL: certified prediction and certified attack cost.

Certified Prediction. Consider a classification task with C classes. We define the classification scoring function f : (Θ, R^d) → Υ_C, which maps model parameters θ ∈ Θ and an input x ∈ R^d to a confidence vector f(θ, x), where f_c(θ, x) ∈ [0, 1] represents the confidence of class c. We mainly focus on the confidence after normalization, i.e., f(θ, x) ∈ Υ_C = {p ∈ R_{≥0}^C : ‖p‖₁ = 1}, the probability simplex. Since the DP mechanism M is randomized and produces a stochastic FL global model θ = M(D), it is natural to resort to a probabilistic expression as a bridge for quantitative robustness certification. Following the convention in (Lecuyer et al., 2019b; Ma et al., 2019), we use the expectation of the model's prediction to provide a quantitative guarantee on the robustness of M. Specifically, we define the expected scoring function F : (Θ, R^d) → Υ_C, where F_c(M(D), x) = E[f_c(M(D), x)] is the expected confidence for class c. The expectation is taken over the DP training randomness, e.g., random Gaussian noise and random user subsampling. The corresponding prediction H : (Θ, R^d) → [C] is defined by H(M(D), x) := arg max_{c∈[C]} F_c(M(D), x), which is the top-1 class based on the expected prediction confidence. We will prove that such a prediction allows robustness certification against poisoning attacks.

Following our threat model above and the DPFL training in Algorithm 1, we denote the trained global model exposed to poisoning attacks by M(D′). When k = 1, D and D′ are user-level adjacent datasets according to Definition 2. Given that the mechanism M satisfies user-level (ε, δ)-DP, based on the innate DP property, the distribution of the stochastic model M(D′) is "close" to the distribution of M(D). Moreover, according to the post-processing property of DP, during testing, given a test sample x, we would expect the values of the expected confidence for each class c, i.e., F_c(M(D′), x) and F_c(M(D), x), to be close, and hence the returned most likely class to be the same, i.e., H(M(D), x) = H(M(D′), x), indicating robust prediction against poisoning attacks.
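Since F is defined as an expectation over the DP training randomness, in practice it is estimated by retraining the model many times (Monte-Carlo sampling, as done in Section 6). The sketch below illustrates such an estimate; the training and scoring functions are placeholders, not the actual DPFL pipeline.

```python
import numpy as np

def estimate_expected_confidence(train_fn, score_fn, dataset, x, n_runs=1000):
    """Monte-Carlo estimate of F_c(M(D), x) = E[f_c(M(D), x)].

    train_fn: randomized DPFL training mechanism M, maps a dataset to model parameters
    score_fn: f(theta, x) -> confidence vector over C classes (sums to 1)
    """
    confidences = np.array([score_fn(train_fn(dataset), x) for _ in range(n_runs)])
    F = confidences.mean(axis=0)      # expected per-class confidence
    prediction = int(np.argmax(F))    # H(M(D), x): most likely class in expectation
    return F, prediction

# Toy usage with placeholder mechanisms (2 classes).
rng = np.random.default_rng(0)
train_fn = lambda d: rng.normal()                                                     # stand-in for DPFL training
score_fn = lambda theta, x: np.array([0.5 + 0.1 * np.tanh(theta), 0.5 - 0.1 * np.tanh(theta)])
F, pred = estimate_expected_confidence(train_fn, score_fn, dataset=None, x=None, n_runs=200)
print(F, pred)
```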
Theorem 1 (Condition for Certified Prediction under One Adversarial User). Suppose a randomized mechanism M satisfies user-level (ε, δ)-DP. For two user sets B and B′ that differ by one user, let D and D′ be the corresponding training datasets. For a test input x, suppose A, B ∈ [C] satisfy A = arg max_{c∈[C]} F_c(M(D), x) and B = arg max_{c∈[C]: c≠A} F_c(M(D), x). Then if

F_A(M(D), x) > e^{2ε} F_B(M(D), x) + (1 + e^ε)δ,    (1)

it is guaranteed that H(M(D′), x) = H(M(D), x) = A.

When k > 1, we resort to group DP. According to Lemma 1, given a mechanism M satisfying user-level (ε, δ)-DP, it also satisfies user-level (kε, ((1 − e^{kε})/(1 − e^ε)) δ)-DP for groups of size k. When k is smaller than a certain threshold, leveraging the group DP property, we would expect that the distribution of the stochastic model M(D′) is not too far away from the distribution of M(D) such that they would make the same prediction for a test sample with probabilistic guarantees. Therefore, the privacy and robustness guarantees are simultaneously met by M.

Theorem 2 (Upper Bound of k for Certified Prediction). Suppose a randomized mechanism M satisfies user-level (ε, δ)-DP. For two user sets B and B′ that differ by k users, let D and D′ be the corresponding training datasets. For a test input x, suppose A, B ∈ [C] satisfy A = arg max_{c∈[C]} F_c(M(D), x) and B = arg max_{c∈[C]: c≠A} F_c(M(D), x). Then H(M(D′), x) = H(M(D), x) = A, ∀k < K, where K is the certified number of adversarial users:

K = (1/(2ε)) log [(F_A(M(D), x)(e^ε − 1) + δ) / (F_B(M(D), x)(e^ε − 1) + δ)]    (2)

The proofs of Theorems 1 and 2 are omitted to Appendix A.4. Theorems 1 and 2 reflect a tradeoff between privacy and certified prediction: (i) in Theorem 1, if ε is large such that the RHS of Eq (1) > 1, the robustness condition cannot be met since the expected confidence F_A(M(D), x) ∈ [0, 1]. However, to achieve small ε, i.e., strong privacy, large noise is required during training, which would hurt model utility and thus result in a small confidence margin between the top two classes (e.g., F_A(M(D), x) and F_B(M(D), x)), making it hard to meet the robustness condition. (ii) In Theorem 2, if we fix F_A(M(D), x) and F_B(M(D), x), a smaller ε of FL can certify a larger K. However, smaller ε also induces a smaller confidence margin, thus reducing K instead. As a result, properly choosing ε would help to certify a large K.

Certified Attack Cost. In addition to the certified prediction, we define the attack cost for the attacker, C : Θ → R, which quantifies the difference between the poisoned model and the attack goal. In general, the attacker aims to minimize the expected attack cost J(D) := E[C(M(D))], where the expectation is taken over the randomness of DP training. The cost function can be instantiated according to the concrete attack goal in different types of poisoning attacks, and we provide some examples below. Given a global FL model satisfying user-level (ε, δ)-DP, we will prove the lower bound of the attack cost J(D′) when manipulating the data of at most k users. A higher lower bound of the attack cost indicates a more certifiably robust global model.

Example 1. (Backdoor attack (Gu et al., 2019)) C(θ) = (1/n) ∑_{i=1}^{n} l(θ, z*_i), where z*_i = (x_i + δ_x, y*), δ_x is the backdoor pattern, and y* is the target adversarial label. Minimizing J(D′) drives the prediction on any test data with the backdoor pattern δ_x to the target label y*.

Example 2. (Label Flipping attack (Biggio et al., 2012)) C(θ) = (1/n) ∑_{i=1}^{n} l(θ, z*_i), where z*_i = (x_i, y*) and y* is the target adversarial label.
Minimizing J(D′) drives the prediction on test data x_i to the target label y*.

Example 3. (Parameter-Targeting attack (Ma et al., 2019)) C(θ) = (1/2)‖θ − θ*‖², where θ* is the target model. Minimizing J(D′) drives the poisoned model to be close to the target model.

Theorem 3 (Attack Cost with k Attackers). Suppose a randomized mechanism M satisfies user-level (ε, δ)-DP. For two user sets B and B′ that differ by k users, let D and D′ be the corresponding training datasets. Let J(D) be the expected attack cost where |C(·)| ≤ C̄. Then,

min{e^{kε} J(D) + ((e^{kε} − 1)/(e^ε − 1)) δC̄, C̄} ≥ J(D′) ≥ max{e^{−kε} J(D) − ((1 − e^{−kε})/(e^ε − 1)) δC̄, 0}, if C(·) ≥ 0
min{e^{−kε} J(D) + ((1 − e^{−kε})/(e^ε − 1)) δC̄, 0} ≥ J(D′) ≥ max{e^{kε} J(D) − ((e^{kε} − 1)/(e^ε − 1)) δC̄, −C̄}, if C(·) ≤ 0    (3)

The proof is omitted to Appendix A.4. Theorem 3 provides the upper and lower bounds for the attack cost J(D′). The lower bounds show to what extent the attack can reduce J(D′) by manipulating up to k users, i.e., how successful the attack can be. The lower bounds depend on the attack cost on the clean model J(D), k, and ε. When J(D) is higher, the DPFL model under poisoning attacks is more robust because the lower bounds are accordingly higher; a tighter privacy guarantee, i.e., smaller ε, can also lead to higher robustness certification as it increases the lower bounds; with larger k, the attacker's ability grows and thus leads to lower possible J(D′). The upper bounds show the least adversarial effect brought by k attackers, i.e., how vulnerable the DPFL model is in the optimistic case (e.g., the backdoor pattern is less distinguishable). Leveraging the lower bounds in Theorem 3, we can lower-bound the minimum number of attackers required to reduce the attack cost to a certain level associated with the hyperparameter τ in Corollary 1.

Corollary 1 (Lower Bound of k Given τ). Suppose a randomized mechanism M satisfies user-level (ε, δ)-DP. Let the attack cost function be C and the expected attack cost be J(·). In order to achieve J(D′) ≤ (1/τ) J(D) for τ ≥ 1 when 0 ≤ C(·) ≤ C̄, or to achieve J(D′) ≤ τ J(D) for 1 ≤ τ ≤ −C̄/J(D) when −C̄ ≤ C(·) ≤ 0, the number of adversarial users should satisfy:

k ≥ (1/ε) log [((e^ε − 1) J(D)τ + C̄δτ) / ((e^ε − 1) J(D) + C̄δτ)]  or  k ≥ (1/ε) log [((e^ε − 1) J(D)τ − C̄δ) / ((e^ε − 1) J(D) − C̄δ)], respectively.    (4)

The proof is omitted to Appendix A.4. Corollary 1 shows that a stronger privacy guarantee (i.e., smaller ε) requires more attackers to achieve the same attack effectiveness, indicating higher robustness.

5 INSTANCE-LEVEL PRIVACY AND CERTIFIED ROBUSTNESS FOR FL

5.1 INSTANCE-LEVEL PRIVACY

In this section, we introduce the instance-level DP definition, the corresponding algorithm, and the privacy analysis for FL. When DP is used to protect the privacy of individual instances, the trained stochastic FL model should not differ much if one instance is modified. Hence, the adjacent datasets in instance-level DP are defined as those differing by one instance. The formal definition of instance-level (ε, δ)-DP (Definition 3) is omitted to Appendix A.1. Dopamine (Malekzadeh et al., 2021) provides the first instance-level privacy guarantee under FedSGD (McMahan et al., 2017). However, it has two limitations. First, its privacy bound is loose. Although FedSGD performs both user and batch sampling during training, Dopamine ignores the privacy gain provided by random user sampling. In this section, we improve the privacy guarantee under FedSGD with privacy amplification via user sampling (Bassily et al., 2014; Abadi et al., 2016).
This improvement leads to algorithm InsDP-FedSGD, to achieve tighter privacy analysis. We defer the algorithm (Algorithm 2) as well as its privacy guarantee to Appendix A.1. Besides the loose privacy bound, Dopamine (Malekzadeh et al., 2021) only allows users to perform one step of DP-SGD (Abadi et al., 2016) during each FL round. This restriction limits the efficiency of the algorithm and increases the communication overhead. In practice, users in FL are typically allowed to update their local models for many steps before submitting updates to reduce the communication cost. To solve this problem, we further improve InsDP-FedSGD to support multiple local steps during each round. Specifically, we propose a novel instance-level DPFL algorithm InsDP-FedAvg (Algorithm 3 in Appendix A.1) allowing users to train multiple local SGD steps before submitting the updates. In InsDP-FedAvg, each user i performs local DP-SGD so that the local training mechanism Mi satisfies instance-level DP. Then, the server aggregates the updates. We prove that the global mechanism M preserves instance-level DP using the DP parallel composition theorem (Dwork & Lei, 2009) and the moment accountant (Abadi et al., 2016).

Algorithm 3 formally presents the InsDP-FedAvg algorithm and the calculation of its privacy budget ε. Specifically, at first, the local privacy cost ε_0^i is initialized as 0 before FL training. At round t, if user i is not selected, its local privacy cost is kept unchanged, ε_t^i ← ε_{t−1}^i. Otherwise user i updates its local model by running DP-SGD for V local steps with batch sampling probability p, noise level σ, and clipping threshold S, and ε_t^i is accumulated upon ε_{t−1}^i via its local moment accountant. Next, the server aggregates the updates from selected users, and leverages {ε_t^i}_{i∈[N]} and the parallel composition in Theorem 4 to calculate the global privacy cost ε_t. After T rounds, the mechanism M that outputs the FL global model in Algorithm 3 is instance-level (ε_T, δ)-DP.

Theorem 4 (InsDP-FedAvg Privacy Guarantee). In Algorithm 3, during round t, if the local mechanism Mi satisfies (ε_t^i, δ)-DP, then the global mechanism M satisfies (max_{i∈[N]} ε_t^i, δ)-DP.

The idea behind Theorem 4 is that when D′ and D differ in one instance, the modified instance only falls into one local dataset, and thus the parallel composition theorem (Dwork & Lei, 2009) can be applied. Then the privacy guarantee corresponds to the worst case, and is obtained by taking the maximum local privacy cost across all the users. The detailed proof is given in Appendix A.1.

5.2 CERTIFIED ROBUSTNESS OF INSTANCE-LEVEL DPFL AGAINST POISONING ATTACKS

Threat Model. We consider poisoning attacks under the presence of k poisoned instances. These instances could be controlled by the same or multiple adversarial users. Our robustness certification is agnostic to the attack methods as long as the number of poisoned instances is constrained. According to the group DP property (Lemma 1) and the post-processing property for an FL model with instance-level (ε, δ)-DP, we prove that our robustness certification results proposed for user-level DP are also applicable to instance-level DP. Below is the formal theorem (proof is given in Appendix A.4).

Theorem 5. Suppose D and D′ differ by k instances, and the randomized mechanism M satisfies instance-level (ε, δ)-DP. The results in Theorems 1, 2, and 3, and Corollary 1 hold for M, D, and D′.
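To see how the certified quantities behave numerically (Theorem 5 guarantees that they apply verbatim at the instance level), the following sketch evaluates the closed-form expressions of Theorem 2 (certified number of adversarial users/instances K), the lower bound of Theorem 3 for non-negative costs, and the minimum k of Corollary 1; all ε, δ, confidence, and cost values below are placeholders, not results from the paper.

```python
import numpy as np

def certified_K(F_A, F_B, eps, delta):
    """Theorem 2: the top-1 prediction is certifiably unchanged for all k < K."""
    return np.log((F_A * (np.e**eps - 1) + delta) /
                  (F_B * (np.e**eps - 1) + delta)) / (2 * eps)

def attack_cost_lower_bound(J_clean, k, eps, delta, C_bar):
    """Theorem 3 (case C(.) >= 0): lower bound on J(D') with k adversarial users/instances."""
    return max(np.exp(-k * eps) * J_clean
               - (1 - np.exp(-k * eps)) / (np.e**eps - 1) * delta * C_bar, 0.0)

def min_attackers(J_clean, tau, eps, delta, C_bar):
    """Corollary 1 (case 0 <= C(.) <= C_bar): smallest k that can achieve J(D') <= J(D)/tau."""
    num = (np.e**eps - 1) * J_clean * tau + C_bar * delta * tau
    den = (np.e**eps - 1) * J_clean + C_bar * delta * tau
    return np.log(num / den) / eps

# Placeholder numbers for illustration only.
eps, delta, C_bar = 0.5, 1e-5, 1.0
print(certified_K(F_A=0.9, F_B=0.1, eps=eps, delta=delta))
print(attack_cost_lower_bound(J_clean=0.8, k=3, eps=eps, delta=delta, C_bar=C_bar))
print(min_attackers(J_clean=0.8, tau=2.0, eps=eps, delta=delta, C_bar=C_bar))
```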
Comparison with existing certified prediction methods in the centralized setting. The form of Theorem 1 is similar to the robustness condition against test-time attacks in Proposition 1 of (Lecuyer et al., 2019a). This is because the derived robustness conditions are both rooted in the DP properties, but ours focuses on the robustness against training-time attacks in FL, which is more challenging considering the distributed nature and the model training dynamics, i.e., the analysis of the privacy budget over training rounds. Our Theorem 1 is also different from previous randomized smoothing-based certifiably robust centralized learning against backdoor (Weber et al., 2020) and label flipping (Rosenfeld et al., 2020) attacks. First, our randomness comes from the inherent training randomness of user/instance-level (ε, δ)-DP, e.g., user subsampling and Gaussian noise. Thus, the certified robustness for free in DPFL means that the DPFL learning algorithm M itself is randomized, and such randomness can lead to the robustness certification with a non-trivial quantitative measurement of the randomness. On the contrary, robustness in randomized smoothing-based methods comes from explicitly making the classification process randomized via adding noise to training datasets (Weber et al., 2020; Rosenfeld et al., 2020) or test samples (Lecuyer et al., 2019a; Cohen et al., 2019), which is easier to measure. Second, our Theorems 1 and 2 hold no matter how ε is achieved, which means that we can add different types of noise, leverage different subsampling strategies, or even use different FL training protocols to achieve user/instance-level ε. However, in (Weber et al., 2020; Rosenfeld et al., 2020) different certifications require different types of noise (Laplacian, Gaussian, etc.). Additionally, DP is suitable for characterizing the robustness against poisoning since DP composition theorems can be leveraged to track the privacy cost ε, which captures the training dynamics of ML model parameters without additional assumptions. Otherwise one may need to track the deviations of model parameters by analyzing SGD over training, which is theoretically knotty and often requires strong assumptions on Lipschitz continuity, smoothness, or convexity of the trained models.

6 EXPERIMENTS

We present evaluations for the robustness certifications, especially Thm. 2, 3 and Cor. 1. We find that 1) there is a tradeoff between certified prediction and privacy on certain datasets; 2) a tighter privacy guarantee always provides stronger certified robustness in terms of the certified attack cost; 3) our lower bounds of certified attack cost are generally tight when k is small. When k is large, they are tight under strong attacks (e.g., large local poisoning ratio α). Stronger attacks or tighter certification are required to further close the gap between the empirical robustness and theoretical bounds.

Data and Model. We evaluate our robustness certification results with three datasets: image classification on MNIST and CIFAR-10, and a text sentiment analysis task on tweets from Sentiment140 (Go et al.) (Sent140), which involves classifying Twitter posts as positive or negative. For the image datasets, we use the corresponding standard CNN architectures in the differential privacy library (opa, 2021) of PyTorch; for Sent140, we use an LSTM classifier. Following previous work on DP ML (Jagielski et al., 2020; Ma et al., 2019) and backdoor attacks (Tran et al., 2018; Weber et al., 2020), which evaluate with two classes, we focus on binary classification for MNIST (digits 0 and 1) and CIFAR-10 (airplane and bird), and defer the 10-class results to Appendix A.3.
We train FL models following Algorithm 1 for user-level privacy and Algorithm 3 for instance-level privacy. We refer the readers to Appendix A.3 for details about the datasets, networks, and parameter setups.

Poisoning Attacks. We evaluate several state-of-the-art poisoning attacks against the proposed UserDP-FedAvg and InsDP-FedAvg. We first consider backdoor attacks (BKD) (Bagdasaryan et al., 2020) and label flipping attacks (LF) (Fung et al., 2020). For InsDP-FedAvg, we consider the worst case where the k backdoored or label-flipped instances fall into the dataset of one user. For UserDP-FedAvg, we additionally evaluate the distributed backdoor attack (DBA) (Xie et al., 2019), which is claimed to be a more stealthy backdoor attack against FL. Moreover, we consider BKD, LF, and DBA via the model replacement approach (Bagdasaryan et al., 2020), where k attackers train the local models using local datasets with an α fraction of poisoned instances, and scale the malicious updates with a hyperparameter γ, i.e., ∆w_t^i ← γ∆w_t^i, before sending them to the server. This way, the malicious updates have a stronger impact on the FL model. Note that even when attackers perform scaling, after server clipping, the sensitivity of the updates is still upper-bounded by the clipping threshold S. So the privacy guarantee in Proposition 1 still holds under poisoning attacks via model replacement. Detailed attack setups are presented in Appendix A.3.

Evaluation Metrics and Setup. We consider two evaluation metrics based on our robustness certification criteria. The first metric is certified accuracy, which is the fraction of the test set for which the poisoned FL model makes correct and consistent predictions compared with the clean FL model. Given a test set of size n, for the i-th test sample, the ground-truth label is y_i, the output prediction is c_i, and the certified number of adversarial users/instances is K_i. We calculate the certified accuracy at k as (1/n) ∑_{i=1}^{n} 1{c_i = y_i and K_i ≥ k}. The second metric is the lower bound of the attack cost in Theorem 3, max{e^{−kε} J(D) − ((1 − e^{−kε})/(e^ε − 1)) δC̄, 0}. We evaluate the tightness of this certified lower bound by comparing it with the empirical attack cost J(D′). To quantify the robustness, we evaluate the expected class confidence F_c(M(D), x) for class c via Monte-Carlo sampling. We run the private FL algorithms M = 1000 times, obtaining the class confidence f_c^s = f_c(M(D), x) for each run. We compute its empirical expectation to estimate F_c(M(D), x) ≈ (1/M) ∑_{s=1}^{M} f_c^s and use it to evaluate Theorem 2. In addition, we use Hoeffding's inequality (Hoeffding, 1994) to calibrate the empirical estimation with confidence level parameter ψ, and the results are deferred to Appendix A.3. In terms of the attack cost, we use Examples 1 and 2 as the definitions of the cost function C for backdoor attacks and label flipping attacks, respectively. We follow a similar protocol to estimate J(D′) for Theorem 3 and Corollary 1.

6.1 ROBUSTNESS EVALUATION OF USER-LEVEL DPFL

Certified Prediction. Figure 1(a)(b) presents the user-level certified accuracy under different ε, obtained by training DPFL models with different noise scales σ. The results on the Sent140 dataset are presented in Figure 13 of Appendix A.3.8. We observe that the largest k can be certified when ε is around 0.6298 in MNIST, 0.1451 in CIFAR-10, and 0.2247 in Sent140, which verifies the tradeoff between ε and certified accuracy as we discussed in Section 4.2.
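As a small illustration of the certified accuracy metric defined above, the sketch below combines Monte-Carlo estimates of the expected class confidences with the certified K of Theorem 2; the confidence values are placeholders, not measured numbers.

```python
import numpy as np

def certified_accuracy(F_hat, labels, eps, delta, k):
    """Fraction of test samples that are correctly classified AND certified for group size k.

    F_hat:  (n, C) Monte-Carlo estimates of expected class confidences for n test samples
    labels: (n,) ground-truth labels
    """
    preds = F_hat.argmax(axis=1)
    sorted_conf = np.sort(F_hat, axis=1)
    F_A, F_B = sorted_conf[:, -1], sorted_conf[:, -2]              # top-2 expected confidences
    K = np.log((F_A * (np.e**eps - 1) + delta) /
               (F_B * (np.e**eps - 1) + delta)) / (2 * eps)        # Theorem 2
    return np.mean((preds == labels) & (K >= k))

# Placeholder estimates for 4 test samples and 2 classes.
F_hat = np.array([[0.9, 0.1], [0.6, 0.4], [0.2, 0.8], [0.55, 0.45]])
labels = np.array([0, 0, 1, 1])
print(certified_accuracy(F_hat, labels, eps=0.5, delta=1e-5, k=1))
```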
Advanced DP protocols that requires less noise while achieving similar level of privacy are favored to improve the privacy, utility, and certified accuracy simultaneously. Furthermore, we compare the certified accuracy of four different user-level DPFL methods (McMahan et al., 2018; Geyer et al., 2017) given the same privacy budget . As shown in Figure 14 and Figure 15 of Appendix. A.3.9, the models trained by different DPFL algorithms satisfying same have different certified robustness results. This is because even under the same , different DPFL algorithmsM produce trained modelsM(D) with different model performance, thus leading to different certified robustness. More discussion could be found in Appendix. A.3.9. Certified Attack Cost. In order to evaluate Theorem 3 and characterize the tightness of our theoretical lower bound J(D′), we compare it with the empirical attack cost J(D′) under different local poison fraction α , attack methods and scale factor γ in Figure 2. Note that when k = 0, the model is benign so the empirical cost equals to the certified one. We find that 1) when k increases, the attack ability grows, and both the empirical attack cost and theoretical lower bound decreases. 2) In Figure 2 row 1, given the same k, higher α, i.e., poisoning more local instances for each attacker, achieves a stronger attack, under which lower empirical J(D) can be achieved and is more close to the certified lower bound. This indicates that the lower bound appears tighter when the poisoning attack is stronger. 3) In Figure 2 row 2, we fix α = 100% and evaluate UserDP-FedAvg under different γ and attack methods. It turns out that DP serves as a strong defense empirically for FL, given that J(D) did not vary much under different γ(1, 50, 100) and different attack methods (BKD, DBA, LF). This is because the clipping operation restricts the magnitude of malicious updates, rendering the model replacement ineffective; the Gaussian noise perturbs the malicious updates and makes the DPFL model stable, and thus the FL model is less likely to memorize the poisoning instances. 4) In both rows, the lower bounds are tight when k is small. When k is large, there remains a gap between our theoretical lower bounds and empirical attack costs under different attacks, which will inspire more effective poisoning attacks or tighter robustness certification. Certified Attack Cost under Different . Here we further explore the impacts of different factors on the certified attack cost. Figure 3 presents the empirical attack cost and the certified attack cost lower bound given different on user-level DP. It is shown that as the privacy guarantee becomes stronger, i.e. smaller , the model is more robust achieving higher J(D′) and J(D′). In Figure 5 (a)(b), we train user-level ( , δ) DPFL models, calculate corresponding J(D), and plot the lower bound of k given different attack effectiveness hyperparameter τ according to Corollary 1. It shows that 1) when the required attack effectiveness is higher, i.e., τ is larger, more number of attackers is required. 2) To achieve the same effectiveness of attack, fewer number of attackers is needed for larger , which means that DPFL model with weaker privacy is more vulnerable to poisoning attacks. 6.2 ROBUSTNESS EVALUATION OF INSTANCE-LEVEL DPFL Certified Prediction. Figure 1(c)(d) show the instance-level certified accuracy under different . 
The optimal for K is around 0.3593 for MNIST and 0.6546 for CIFAR-10, which is aligned with our observation of the tradeoff between certified accuracy and privacy on user-level DPFL (Section 6.1). Certified Attack Cost. Figure 4 show the certified attack cost on CIFAR-10. From Figure 4 (a)(b), poisoning more instances (i.e., larger k) induces lower theoretical and empirical attack cost. From Figure 4 (c)(d), it is clear that instance-level DPFL with stronger privacy guarantee provides higher attack cost both empirically and theoretically, meaning that it is more robust against poisoning attacks. Results on MNIST are deferred to Appendix A.3. Figure 5 (c)(d) show the lower bound of k under different instance-level given different τ . Fewer poisoned instances are required to reduce the J(D′) to the similar level for a less private DPFL model, indicating that the model is easier to be attacked. 7 CONCLUSION In this paper, we present the first work on deriving certified robustness in DPFL for free against poisoning attacks. We propose two robustness certification criteria, based on which we prove that a FL model satisfying user-level (instance-level) DP is certifiably robust against a bounded number of adversarial users (instances). Our theoretical analysis characterizes the inherent relation between certified robustness and differential privacy of FL on both user and instance levels, which are empirically verified with extensive experiments. Our results can be used to improve the trustworthiness of DPFL. Ethics Statement. Our work study the robustness guarantee of differentially private federated learning models from theoretical and empirical perspectives. All the datasets and packages we use are open-sourced. We do not have ethical concerns in our paper. Reproducibility Statement. Our source code is available as the supplemental material for reproducibility purpose. Our experiments can be reproduced following our detailed training and evaluation setups in Appendix A.3. The complete proofs of privacy analysis and certified robustness analysis can be found in the Appendix A.1 and Appendix A.4, respectively. A APPENDIX The Appendix is organized as follows: • Appendix A.1 provides the DP definitions and the DPFL algorithms on both user and instance levels, and the proofs for corresponding privacy guarantees. • Appendix A.2 specifies our threat models. • Appendix A.3 provides more details on experimental setups for training and evaluation, the addition experimental results on certified accuracy with confidence level, robustness evaluation of InsDP-FedAvg on MNIST, robustness evaluation on 10-class classification, DP bound comparison between InsDP-FedSGD and Dopamine, certified accuracy of UserDP-FedAvg on Sent140 and certified accuracy comparison of different user-level DPFL algorithms. • Appendix A.4 provides the proofs for the certified robustness related analysis, including Lemma 1, Theorem 1, 2, 3, 5 and Corollary 1. • Appendix A.5 provides the comparison to related work (Lecuyer et al., 2019a; Ma et al., 2019). A.1 DIFFERENTIALLY PRIVATE FEDERATED LEARNING A.1.1 USERDP-FEDAVG Definition 2 (User-level ( , δ)-DP). Let B,B′ be two user sets with size N . Let D and D′ be the datasets that are the union of local training examples from all users inB andB′ respectively. Then,D and D′ are adjacent if B and B′ differ by one user. The mechanismM satisfies user-level ( , δ)-DP if it meets Definition 1 with D and D′ as adjacent datasets. Algorithm 1: UserDP-FedAvg. 
Input: Initial model w0, user sampling probability q, privacy parameter δ, clipping threshold S, noise level σ, local datasets D1, ..., DN , local epochs E, learning rate η. Output: FL model wT and privacy cost Server executes: for each round t = 1 to T do m← max(q ·N, 1); Ut ← (random subset of m users); for each user i ∈ Ut in parallel do ∆wit ← UserUpdate(i, wt−1) ; wt ← wt−1+ 1m (∑ i∈Ut Clip(∆w i t, S) +N ( 0, σ2S2 )) ; M.accum priv spending(σ, q, δ) ; =M.get privacy spent() ; return wT , Procedure UserUpdate(i, wt−1) w ← wt−1 ; for local epoch e = 1 to E do for batch b ∈ local dataset Di do w ← w − η∇l(w; b) ∆wit ← w − wt−1 ; return ∆wit Procedure Clip(∆, S) return ∆/max ( 1, ‖∆‖2 S ) In Algorithm 1,M.accum priv spending() andM.get privacy spent() are the calls on the moments accountantM refer to the API of (Abadi et al., 2016). Given the user sampling probability q, noise level σ, FL rounds T , and a δ > 0, UserDP-FedAvg satisfies ( , δ)-DP as below, which is a generalization of (Abadi et al., 2016). The aim is to analyze privacy budget , which is accumulated as T increases due to the continuous access to training data. Proposition 1 (UserDP-FedAvg Privacy Guarantee). There exist constants c1 and c2 so that given user sampling probability q, and FL rounds T , for any ε < c1q2T , if σ ≥ c2 q √ T log(1/δ) , the randomized mechanismM in Algorithm 1 is ( , δ)-DP for any δ > 0. Proof. The proof follows the proof of Theorem 1 in (Abadi et al., 2016), while the notations have slightly different meanings under FL settings. In Proposition 1, we use q to represent user-level sampling probability and T to represent FL training rounds. Note that the above privacy analysis can be further improved by Rényi Differential Privacy (Mironov et al., 2019). Discussion (Li et al., 2020b) divide the user-level privacy into global privacy (Geyer et al., 2017; McMahan et al., 2018) and local privacy (Agarwal et al., 2018). In both local and global privacy, the norm of each update is clipped. The difference lies in that the noise is added on the aggregated model updates in global privacy because a trusted server is assumed, while the noise is added on each local update in local privacy because it assumes that the central server might be malicious. Algorithm 1 belongs to global privacy. A.1.2 INSDP-FEDSGD Definition 3 (Instance-level ( , δ)-DP). Let D be the dataset that is the union of local training examples from all users. Then, D and D′ are adjacent if they differ by one instance. The mechanism M is instance-level ( , δ)-DP if it meets Definition 1 with D and D′ as adjacent datasets. Algorithm 2: InsDP-FedSGD. Input: Initial model w0, user sampling probability q, privacy parameter δ, local clipping threshold S, local noise level σ, local datasets D1, ..., DN , learning rate η, batch sampling probability p. 
Output: FL model wT and privacy cost Server executes: for each round t = 1 to T do m← max(q ·N, 1); Ut ← (random subset of m clients); for each user i ∈ Ut in parallel do ∆wit ← UserUpdate(i, wt−1) ; wt ← wt−1 + 1m ∑ i∈Ut ∆w i t ; M.accum priv spending( √ mσ, pq, δ) =M.get privacy spent() ; return wT , Procedure UserUpdate(i, wt−1) w ← wt−1 ; bit ←(uniformly sample a batch fromDi with probability p = L/|Di|); for each xj ∈ bit do g(xj)← ∇l(w;xj); ḡ(xj)← Clip(g(xj), S) ; g̃ ← 1L (∑ j ḡ(xj) +N ( 0, σ2S2 )) ; w ← w − ηg̃ ; ∆wit ← w − wt−1 ; return ∆wit Procedure Clip(∆, S) return ∆/max ( 1, ‖∆‖2 S ) Under FedSGD, when each local model performs one step of DP-SGD (Abadi et al., 2016), the randomized mechanismM that outputs the global model preserves the instance-level DP. We can regard the one-step update for the global model in Algorithm 2 as: wt ← wt−1 − 1 m ∑ i∈Ut η L ∑ xj∈bit ḡ(xj) +N ( 0, σ2S2 ) (5) Proposition 2 (InsDP-FedSGD Privacy Guarantee). There exist constants c1 and c2 so that given batch sampling probability p, and user sampling probability q, the number of selected users each round m, and FL rounds T , for any ε < c1(pq)2T , if σ ≥ c2 pq √ T log(1/δ) √ m , the randomized mechanismM in Algorithm 2 is ( , δ)-DP for any δ > 0. Proof. i) In instance-level DP, we consider the sampling probability of each instance under the combination of user-level sampling and batch-level sampling. Since the user-level sampling probability is q and the batch-level sampling probablity is p, each instance is sampled with probability pq. ii) Additionally, since the sensitivity of instance-wise gradient w.r.t one instance is S, after local gradient descent and server FL aggregation, the equivalent sensitivity of global model w.r.t one instance is S′ = ηSLm according to Eq (5). iii) Moreover, since the local noise is ni ∼ N (0, σ 2S2) , then the “virtual” global noise is n = ηmL ∑ i∈Ut ni according to Eq (5), so n ∼ N (0, η2σ2S2 mL2 ). Let η2σ2S2 mL2 = σ ′2S′ 2 such that n ∼ N (0, σ′2S′2). Because S′ = ηSLm , the equivalent global noise level is σ′2 = σ2m, i.e., σ′ = σ √ m. In Proposition 2, we use pq to represent instance-level sampling probability, T to represent FL training rounds, σ √ m to represent the equivalent global noise level. The rest of the proof follows the proof of Theorem 1 in (Abadi et al., 2016). We defer the DP bound evaluation comparison between InsDP-FedSGD and Dopamine to Appendix A.3.7. A.1.3 INSDP-FEDAVG Algorithm 3: InsDP-FedAvg. Input: Initial model w0, user sampling probability q, privacy parameter δ, local clipping threshold S, local noise level σ, local datasets D1, ..., DN , local steps V , learning rate η, batch sampling probability p. Output: FL model wT and privacy cost Server executes: for each round t = 1 to T do m← max(q ·N, 1); Ut ← (random subset of m users); for each user i ∈ Ut in parallel do ∆wit, i t ← UserUpdate(i, wt−1) ; for each user i /∈ Ut do it ← it−1 ; wt ← wt−1 + 1m ∑ i∈Ut ∆w i t ; t =M.parallel composition({ it}i∈[N ]) = T ; return wT , Procedure UserUpdate(i, wt−1) w ← wt−1 ; for each local step v = 1 to V do b ←(uniformly sample a batch from Di with probability p = L/|Di|); for each xj ∈ b do g(xj)← ∇l(w;xj); ḡ(xj)← Clip(g(xj), S) ; g̃ ← 1L ( ∑ j ḡ(xj) +N ( 0, σ2S2 ) ); w ← w − ηg̃ ; Mi.accum priv spending(σ, p, δ) ; it =Mi.get privacy spent() ; ∆wit ← w − wt−1 ; return ∆wit, it Procedure Clip(∆, S) return ∆/max ( 1, ‖∆‖2 S ) Lemma 2 (InsDP-FedAvg Privacy Guarantee when T = 1). 
In Algorithm 3, when T = 1, suppose local mechanismMi satisfies ( i, δ)-DP, then global mechanismM satisfies (maxi∈[N ] i, δ)-DP. Proof. We can regard federated learning as partitioning a dataset D into N disjoint subsets {D1, D2, . . . , DN}. N mechanisms {M1, . . . ,MN} are operated on these N parts separately and eachMi satisfies its own i-DP for i ∈ [1, N ]. Note that if i-th user is not selected , i = 0 because local dataset Di is not accessed and there is no privacy cost. Without loss of generality, we assume the modified data sample x′ (x → x′ causes D → D′) is in the local dataset of k-th client Dk. Let D,D′ be two neighboring datasets (Dk, D′k are also two neighboring datasets). M is randomized mechanism that outputs the global model, andMi is the randomized mechanism that outputs the local model update ∆wi. Suppose w0 is the initialized and deterministic global model, and {z1, . . . , zN} are randomized local updates. We have a sequence of computations {z1 = M1(D1), z2 = M2(D2; z1), z3 = M3(D3; z1, z2) . . .} and z = M(D) = w0 + ∑N i=1 zi. Note that if i-th user is not selected , zi = 0. According to the parallel composition (Tu), we have Pr[M(D) = z] = Pr[M1(D1) = z1] Pr[M2(D2; z1) = z2] . . .Pr[MN (DN ; z1, . . . , zN−1) = zN ] ≤ exp( k) Pr[Mk(D′k; z1, . . . , zk−1) = zk] ∏ i6=k Pr[Mi(Di; z1, . . . , zi−1) = zi] = exp( k) Pr[M(D′) = z] SoM satisfies k-DP when the modified data sample lies in the subset Dk. Consider the worst case of where the modified data sample could fall in, we know thatM satisfies (maxi∈[N ] i)-DP. We recall Theorem 4. Theorem 4 (InsDP-FedAvg Privacy Guarantee). In Algorithm 3, during round t, if the local mechanismMi satisfies ( it, δ)-DP, then the global mechanismM satisfies ( maxi∈[N ] i t, δ ) -DP. Proof. Again, without loss of generality, we assume the modified data sample x′ (x → x′ causes D → D′) is in the local dataset of k-th user Dk. We first consider the case when all users are selected. At each round t, N mechanisms are operated on N disjoint parts and eachMit satisfies own i-DP where i is the privacy cost for accessing the local dataset Di for one round (not accumulating over previous rounds). Let D,D′ be two neighboring datasets (Dk, D′k are also two neighboring datasets). Suppose z0 = Mt−1(D) is the aggregated randomized global model at round t − 1, and {z1, . . . , zN} are the randomized local updates at round t, we have a sequence of computations {z1 = M1t (D1; z0), z2 = M2t (D2; z0, z1), z3 = M3t (D3; z0, z1, z2) . . .} and z =Mt(D) = z0 + ∑N i zi. We first consider the sequential composition (Dwork & Roth, 2014) to accumulate the privacy cost over FL rounds. According to parallel composition, we have Pr[Mt(D) = z] = Pr[Mt−1(D) = z0] N∏ i=1 Pr[Mit(Di; z0, z1, . . . , zi−1) = zi] = Pr[Mt−1(D) = z0] Pr[Mkt (Dk; z0, z1, . . . , zk−1) = zk] ∏ i 6=k Pr[Mit(Di; z0, z1, . . . , zi−1) = zi] ≤ exp( t−1) Pr[Mt−1(D′) = z0] exp( k) Pr[Mkt (D′k; z0, z1, . . . , zk−1) = zk] ∏ i 6=k Pr[Mit(Di; z0, z1, . . . , zi−1) = zi] = exp( t−1 + k) Pr[Mt(D′) = z] Therefore,Mt satisfies t-DP, where t = t−1 + k. Because the modified data sample always lies in Dk over t rounds and 0 = 0, we can have t = t k, which means that the privacy guarantee of global mechanismMt is only determined by the local mechanism of k-th user over t rounds. Moreover, moment accountant (Abadi et al., 2016) is known to reduce the privacy cost from O(t) to O( √ t). 
We can use the more advanced composition, i.e., moment accountant, instead of the sequential composition, to accumulate the privacy cost for local mechanismMk over t FL rounds. In addition, we consider user subsampling. As described in Algorithm 3, if the user i is not selected at round t, then its local privacy cost is kept unchanged at this round. Take the worst case of where x′ could lie in, at round t,M satisfies t-DP, where t = maxi∈[N ] it, local mechanism M i satisfies it-DP, and the local privacy cost i t is accumulated via local moment accountant in i-th user over t rounds. A.2 THREAT MODELS We consider targeted poisoning attacks of two types. In backdoor attacks (Gu et al., 2019; Chen et al., 2017a), the goal is to embed a backdoor pattern (i.e., a trigger) during training such that any test input with such pattern will be mis-classified as the target. In label flipping attacks (Biggio et al., 2012; Huang et al., 2011), the labels of clean training examples from one source class are flipped to the target class while the features of the data are kept unchanged. In FL, the purpose of backdoor attacks is to manipulate local models with backdoored local data, so that the global model would behave normally on untampered data samples while achieving high attack success rate on clean data (Bagdasaryan et al., 2020). Given the same purpose, distributed backdoor attack (DBA) (Xie et al., 2019) decomposes the same backdoor pattern to several smaller ones and embeds them to different local training sets for different adversarial users. The goal of label flipping attack against FL is to manipulate local datasets with flipped labels such that the global model will mis-classify the test data in the source class as the target class. The model replacement (Bagdasaryan et al., 2020) is a more powerful approach to perform the above attacks, where the attackers first train the local models using the poisoned datasets and then scale the malicious updates before sending them to the server. This way, the attacker’s updates would have a stronger impact on the FL model. We use the model replacement method to perform poisoning attacks and study the effectiveness of DPFL. For UserDP-FedAvg, we consider backdoor, distributed backdoor, and label flipping attacks via the model replacement approach. Next, we formalize the attack process and introduce the notations. Suppose the attacker controls k adversarial users, i.e., there are k attackers out of N users. Let B be the original user set of N benign users, and B′ be the user set that contains k attackers. Let D := {D1, D2, . . . , DN} be the union of original benign local datasets across all users. For a data sample zij := {xij , yij} in Di, we denote its backdoored version as z′ i j := {xij + δx, y∗}, where δx is the backdoor pattern, y∗ is the targeted label; the distributed backdoor attack (DBA) version as z′ i j := {xij + δix, y∗}, where δix is the distributed backdoor pattern for attacker i; the label-flipped version as z′ij := {xij , y∗}. Note that the composition of all DBA patterns is equivalent to the backdoor pattern, i.e., ∑k i=1 δ i x = δx. We assume attacker i has αi fraction of poisoned samples in its local dataset D′i. Let D ′ := {D′1, . . . , D′k−1, D′k, Dk+1, . . . , DN} be the union of local datasets when k attackers are present. The adversarial user i performs model replacement by scaling the model update with hyperparameter γ before submitting it to the server, i.e., ∆wit ← γ∆wit. 
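To make the attack construction concrete, below is a minimal Python/NumPy sketch of the poisoning operations described above (backdoor embedding and label flipping) together with the model replacement scaling; the function names and the dictionary representation of a local update are illustrative assumptions of this sketch rather than our actual implementation.

import numpy as np

def backdoor_example(x, y, trigger_mask, trigger_value, target_label):
    # Embed the backdoor pattern delta_x into x and relabel the sample as the target class y*.
    x_poisoned = np.where(trigger_mask, trigger_value, x)
    return x_poisoned, target_label

def label_flip_example(x, y, source_label, target_label):
    # Label flipping: keep the features unchanged, flip source-class labels to the target class.
    return x, (target_label if y == source_label else y)

def model_replacement(local_update, gamma):
    # Scale the malicious local update by gamma before submitting it to the server.
    return {name: gamma * delta for name, delta in local_update.items()}

For DBA, each attacker i would call backdoor_example with its own sub-pattern δix; the composition of these sub-patterns equals the full backdoor pattern δx.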
In our threat model, we consider the attacker that follows our training protocol and has no control over which users are sampled. For InsDP-FedAvg, we consider both backdoor and label flipping attacks. Since distributed backdoor and model replacement attacks are proposed for adversarial users rather than adversarial instances, we do not consider them for instance-level DPFL. There are k backdoored or label-flipped instances {z′1, z′2, . . . , z′k}, which could be controlled by same or multiple users. In our threat model, we consider the attacker that follows our training protocol and has no control over which data partition (or batch) is sampled. Note that we do not assume that the adversaries’ poisoning data always be sampled. In our algorithms, each batch is randomly subsampled, so the adversaries cannot control if poisoned data are sampled in each step. A.3 EXPERIMENTAL DETAILS AND ADDITIONAL RESULTS A.3.1 DATASETS AND MODELS We evaluate our robustness certification results with two datasets: MNIST (LeCun & Cortes, 2010) and CIFAR-10 (Krizhevsky, 2009). For each dataset, we use corresponding standard CNN architectures in the differential privacy library (opa, 2021) of PyTorch (Paszke et al., 2019). MNIST: We study an image classification problem of handwritten digits in MNIST. It is a dataset of 70000 28x28 pixel images of digits in 10 classes, split into a train set of 60000 images and a test set of 10000 images. Except Section A.3.6, we consider binary classification on classes 0 and 1, making our train set contain 12665 samples, and the test set 2115 samples. The model consists of two Conv-ReLu-MaxPooling layers and two linear layers. CIFAR-10: We study image classification of vehicles and animals in CIFAR-10. This is a harder dataset than MNIST, consisting of 60000 32x32x3 images, split into a train set of 50000 and a test set of 10000. Except Section A.3.6, we consider binary classification on class airplane and bird, making our train set contain 10000 samples, and the test set 2000 samples. The model consists of four Conv-ReLu-AveragePooling layers and one linear layer. When training on CIFAR10, we follow the standard practice for differential privacy (Abadi et al., 2016; Jagielski et al., 2020) and fine-tune a whole model pre-trained non-privately on the more complex CIFAR100, a similarly sized but more complex benchmark dataset. We can achieve reasonable performance on CIFAR-10 datasets by only training (fine-tuning) few rounds. Sent140: We consider a text sentiment analysis task on tweets from Sentiment140 (Go et al.) (Sent140) which involves classifying Twitter posts as positive or negative. We use a two layer LSTM binary classifier containing 256 hidden units with pretrained 300D GloVe embedding (Pennington et al., 2014). Each twitter account corresponds to a device. We use the same network architecture, non-iid dataset partition method, number of selected user per round, learning rate, batch size, etc. as in (Li et al., 2018), which are summarized in Table 1. A.3.2 TRAINING DETAILS We simulate the federated learning setup by splitting the training datasets for N FL users in an i.i.d manner. FL users run SGD with learning rate η, momentum 0.9, weight decay 0.0005 to update the local models. The training parameter setups are summarized in Table 1. 
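For reference, the per-round server-side aggregation of UserDP-FedAvg (Algorithm 1) can be sketched as follows; this is an illustrative NumPy version that treats model updates as flat vectors and omits the moments accountant.

import numpy as np

def clip_update(delta, S):
    # Clip(delta, S): scale the update so that its L2 norm is at most S.
    return delta / max(1.0, np.linalg.norm(delta) / S)

def userdp_fedavg_round(w_prev, user_updates, S, sigma, rng=np.random.default_rng(0)):
    # One round of Algorithm 1: clip each sampled user's update, sum them,
    # add Gaussian noise N(0, sigma^2 S^2), and average over the m sampled users.
    m = len(user_updates)
    clipped_sum = sum(clip_update(d, S) for d in user_updates)
    noise = rng.normal(0.0, sigma * S, size=w_prev.shape)
    return w_prev + (clipped_sum + noise) / m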
Following (McMahan et al., 2018) that use δ ≈ 1N1.1 as privacy parameter, for UserDP-FedAvg we set δ = 0.0029 according to the total number of users, and for InsDP-FedAvg we set δ = 0.00001 according the total number of training samples. Next we summarize the privacy guarantees and clean accuracy offered when we study the certified prediction and certified attack cost, which are also the training parameters setups when k = 0 in Figure 1, 2, 3, 4, 5, 8. User-level DPFL In order to study the user-level certified prediction under different privacy guarantee, for MNIST, we set to be 0.2808, 0.4187, 0.6298, 0.8694, 1.8504, 2.8305, 4.8913, 6.9269, which are obtained by training UserDP-FedAvg FL model for 3 rounds with noise level σ = 3.0, 2.3, 1.8, 1.5, 1.0, 0.8, 0.6, 0.5, respectively (Figure 1(a)). For CIFAR-10, we set to be 0.1083, 0.1179, 0.1451, 0.2444, 0.3663, 0.4527, 0.5460, 0.8781, which are obtained by training UserDP-FedAvg FL model for one round with noise level σ = 10.0, 8.0, 6.0, 4.0, 3.0, 2.6, 2.3, 1.7, respectively (Figure 1(b)). The clean accuracy (average over 1000 runs) of UserDP-FedAvg under non-DP training ( = ∞) and DP training (varying ) on MNIST and CIFAR-10 are reported in Table. 2 and Table. 3 respectively. To certify the attack cost under different number of adversarial users k (Figure 2), for MNIST, we set the noise level σ to be 2.5. When k = 0, after training UserDP-FedAvg for T = 3, 4, 5 rounds, we obtain FL models with privacy guarantee = 0.3672, 0.4025, 0.4344 and clean accuracy (average over M runs) 86.69%, 88.76%, 88.99%. For CIFAR-10, we set the noise level σ to be 3.0. After training UserDP-FedAvg for T = 3, 4 rounds under k = 0, we obtain FL models with privacy guarantee = 0.5346, 0.5978 and clean accuracy 78.63%, 78.46%. With the interest of certifying attack cost under different user-level DP guarantee (Figure 3, Figure 5), we explore the empirical attack cost and the certified attack cost lower bound given different . For MNIST, we set the privacy guarantee to be 1.2716, 0.8794, 0.6608, 0.5249, 0.4344, which are obtained by training UserDP-FedAvg FL models for 5 rounds under noise level σ = 1.3, 1.6, 1.9, 2.2, 2.5, respectively, and the clean accuracy for the corresponding models are 99.50%, 99.06%, 96.52%, 93.39%, 88.99%. For CIFAR-10, we set the privacy guarantee to be 1.600, 1.2127, 1.0395.0.8530, 0.7616, 0.6543, 0.5978, which are obtained by training UserDP-FedAvg FL models for 4 rounds under noise level σ = 1.5, 1.8, 2.0, 2.3, 2.5, 2.8, 3.0, respectively, and the clean accuracy for the corresponding models are 85.59%, 84.52%, 83.23%, 81.90%, 81.27%, 79.23%, 78.46%. Instance-level DPFL To certify the prediction for instance-level DPFL under different privacy guarantee, for MNIST, we set privacy cost to be 0.2029, 0.2251, 0.2484, 0.3593, 0.4589, 0.6373, 1.0587, 3.5691, which are obtained by training InsDP-FedAvg FL models for 3 rounds with noise level σ = 15, 10, 8, 5, 4, 3, 2, 1, respectively (Figure 1(c)). For CIFAR-10, we set privacy cost to be 0.3158, 0.3587, 0.4221, 0.5130, 0.6546, 0.9067, 1.4949, 4.6978, which are obtained by training InsDP-FedAvg FL models for one round with noise level σ = 8, 7, 6, 5, 4, 3, 2, 1, respectively (Figure 1(d)). The clean accuracy (average over 1000 runs) of InsDP-FedAvg under non-DP training ( =∞) and DP training (varying ) on MNIST and CIFAR-10 are reported in Table. 4 and Table. 5 respectively. 
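As a companion to the user-level sketch above, the noisy local step shared by the instance-level algorithms (Algorithms 2 and 3) can be sketched as below; gradient_fn stands in for the per-example gradient computation and is an assumption of this sketch.

import numpy as np

def insdp_local_step(w, batch, gradient_fn, S, sigma, L, eta, rng=np.random.default_rng(0)):
    # Clip each per-example gradient to L2 norm S, sum over the sampled batch (expected size L),
    # add Gaussian noise N(0, sigma^2 S^2), normalize by L, and take a gradient step.
    clipped = []
    for x, y in batch:
        g = gradient_fn(w, x, y)
        clipped.append(g / max(1.0, np.linalg.norm(g) / S))
    noisy_mean = (sum(clipped) + rng.normal(0.0, sigma * S, size=w.shape)) / L
    return w - eta * noisy_mean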
With the aim to study certified attack cost under different number of adversarial instances k, for MNIST, we set the noise level σ to be 10. When k = 0, after training InsDP-FedAvg for T = 4, 9 rounds, we obtain FL models with privacy guarantee = 0.2383, 0.304 and clean accuracy (average over M runs) 96.40%, 96.93% (Figure 8(a)(b)). For CIFAR-10, we set the noise level σ to be 8.0. After training InsDP-FedAvg for one round under k = 0, we obtain FL models with privacy guarantee = 0.3158 and clean accuracy 61.78% (Figure 4(a)(b)). In order to study the empirical attack cost and certified attack cost lower bound under different instance-level DP guarantee, we set the privacy guarantee to be 0.5016, 0.311, 0.2646, 0.2318, 0.2202, 0.2096, 0.205 for MNIST, which are obtained by training InsDP-FedAvg FL models for 6 rounds under noise level σ = 5, 8, 10, 13, 15, 18, 20, respectively, and the clean accuracy for the corresponding models are 99.60%, 98.81%, 97.34%, 92.29%, 88.01%, 80.94%, 79.60% (Figure 8 (c)(d)). For CIFAR-10, we set the privacy guarantee to be 1.261, 0.9146, 0.7187, 0.5923, 0.5038, 0.4385, which are obtained by training InsDP-FedAvg FL models for 2 rounds under noise level σ = 3, 4, 5, 6, 7, 8, respectively, and the clean accuracy for the corresponding models are 84.47%, 80.99%, 76.01%, 68.65%, 63.07%, 60.65% (Figure 4 (c)(d)). With the intention of exploring the upper bound for k given τ under different instance-level DP guarantee, for MNIST, we set noise level σ to be 5, 8, 10, 13, 20, respectively, to obtain instance-DP FL models after 10 rounds with privacy guarantee = 0.6439, 0.3937, 0.3172, 0.2626, 0.2179 and clean accuracy 99.58%, 98.83%, 97.58%, 95.23%, 85.72% (Figure 5(c)). For CIFAR-10, we set noise level σ to be 3, 4, 5, 6, 7, 8 and train InsDP-FedAvg for T = 3 rounds, to obtain FL models with privacy guarantee = 1.5365, 1.1162, 0.8777, 0.7238, 0.6159, 0.5361 and clean accuracy 84.34%, 80.27%, 74.62%, 66.94%, 62.14%, 59.75% (Figure 5(d)). A.3.3 ADDITIONAL IMPLEMENTATION DETAILS (Threat Models) For the attacks against UserDP-FedAvg, by default, the local poison fraction α = 100%, and the scale factor γ = 50. We use same parameters setups for all k attackers. In terms of label flipping attacks, the attackers swap the label of images in source class (digit 1 for MNIST; bird for CIFAR-10) into the target label (digit 0 for MNIST; airplane for CIFAR-10). In terms of backdoor attacks in MNIST and CIFAR-10, the attackers add a backdoor pattern, as shown in Figure 6 (left), in images and swap the label of any sample with such pattern into the target label (digit 0 for MNIST; airplane for CIFAR-10). In terms of distributed backdoor attacks, Figure 6 (right) shows an example when the triangle pattern is evenly decomposed into k = 4 parts, and they are used as the distributed patterns for k = 4 attackers respectively. For the cases where there are more or fewer distributed attackers, the similar decomposition strategy is adopted. For the attacks against InsDP-FedAvg, the same target classes and backdoor patterns are used as UserDP-FedAvg. The parameters setups are the same for all k poisoned instances. (Robustness Certification) We certified 2115/2000/1122 test samples from the MNIST/CIFAR10/Sent140 test sets. In Theorem 3 and Corollary 1 that are related to certified attack cost, C̄ specifies the range of C(·). In the implementation, C̄ is set to be larger than the maximum empirical attack cost evaluated on the test sets (see Table 1 for details). 
For each dataset, we use the same C̄ for cost function C defined in Example 1 and Example 2. When using Monte-Carlo sampling, we run M = 1000 times for certified accuracy, and M = 100 times for certified attack cost in all experiments. (Machines) We simulate the federated learning setup (1 server and N users) on a Linux machine with Intel® Xe
1. What is the main contribution of the paper regarding federated learning?
2. What are the strengths and weaknesses of the paper's results and experiments?
3. What are the concerns regarding the paper's novelty and focus on federated learning?
4. How does the reviewer assess the paper's overall quality and readiness for publication?
Summary Of The Paper
This paper proves that differential privacy implies certified robustness against poisoning attacks in federated learning.
Review
I was very intrigued by the paper at first but ended up with mixed feelings after reading it. On the one hand, the results presented are correct, the experiments are solid, and the paper is rather complete from theory to evaluation. On the other hand, I have several concerns about the paper.
First, the main result presented is very similar to [1]. Although the attack scenarios are different, the intuition is almost the same. This raises my concern about the novelty of the work.
Second, I do not understand why the authors choose to limit the results to the federated setting. IIUC, the robustness claims should hold against data poisoning attacks in general, and there is no optimization specific to federated learning. User-level DP is also not limited to federated learning. I would say FL is more of an application scenario here.
Third, if the authors really want to focus on federated learning, then the instance-level DP section does not make sense, because all federated protocols should preserve user-level DP in practice.
Overall, I think the paper is not ready for publication in its current form. I encourage the authors to tweak the presentation carefully to make it coherent.
[1] Lecuyer, Mathias, Vaggelis Atlidakis, Roxana Geambasu, Daniel Hsu, and Suman Jana. "Certified robustness to adversarial examples with differential privacy." In 2019 IEEE Symposium on Security and Privacy (SP), pp. 656-672. IEEE, 2019.
ICLR
Title Combining Diverse Feature Priors Abstract To improve model generalization, model designers often restrict the features that their models use, either implicitly or explicitly. In this work, we explore the design space of leveraging such feature priors by viewing them as distinct perspectives on the data. Specifically, we find that models trained with diverse sets of feature priors have less overlapping failure modes, and can thus be combined more effectively. Moreover, we demonstrate that jointly training such models on additional (unlabeled) data allows them to correct each other’s mistakes, which, in turn, leads to better generalization and resilience to spurious correlations. 1 INTRODUCTION The driving force behind deep learning’s success is its ability to automatically discover predictive features in complex high-dimensional datasets. In fact, these features can generalize beyond the specific task at hand, thus enabling models to transfer to other (yet similar) tasks (Donahue et al., 2014). At the same time, the set of features that the model learns has a large impact on how well it will perform on unseen inputs, especially in the presence of distribution shift (Ponce et al., 2006; Torralba & Efros, 2011; Sagawa et al., 2020) or spurious correlations (Heinze-Deml & Meinshausen, 2017; Beery et al., 2018; Meinshausen, 2018). Motivated by this, recent work focuses on encouraging specific modes of behavior by preventing the models from relying on certain features. Examples include suppressing texture features (Geirhos et al., 2019; Wang et al., 2019), avoiding `p-non-robust features (Tsipras et al., 2019; Engstrom et al., 2019), or utilizing different parts of the frequency spectrum (Yin et al., 2019). At a high level, these methods can be thought of as ways of imposing a feature prior on the learning process, so as to bias the model towards acquiring features that generalize better. This makes the choice of the feature prior to impose a key design decision. The goal of this work is thus to explore the underlying design space of feature priors and, specifically, to understand: How can we effectively harness the diversity of feature priors? OUR CONTRIBUTIONS In this paper, we cast diverse feature priors as different perspectives on the data and study how they can complement each other. In particular, we aim to understand whether training with distinct priors result in models with non-overlapping failure modes and how such models can be combined to improve generalization. This is particularly relevant in settings where the data is unreliable— e.g, when the training data contains a spurious correlation. From this perspective, we focus our study on two priors that arise naturally in the context of image classification, shape and texture, and investigate the following: Feature diversity. We demonstrate that training models with diverse feature priors results in them making mistakes on different parts of the data distribution, even if they perform similarly in terms of overall accuracy. Further, one can harness this diversity to build model ensembles that are more accurate than those based on combining models which have the same feature prior. Combining feature priors on unlabeled data. When learning from unlabeled data, the choice of feature prior can be especially important. For strategies such as self-training, sub-optimal prediction rules learned from sparse labeled data can be reinforced when pseudo-labeling the unlabeled data. 
We show that, in such settings, we can leverage the diversity of feature priors to address these issues. By jointly training models with different feature priors on the unlabeled data through the framework of co-training Blum & Mitchell (1998), we find that the models can correct each other’s mistakes to learn prediction rules that generalize better. Learning in the presence of spurious correlations. Finally, we want to understand whether combining diverse priors during training, as described above, can prevent models from relying on correlations that are spurious, i.e., correlations that do not hold on the actual distribution of interest. To model such scenarios, we consider a setting where a spurious correlation is present in the training data but we also have access to (unlabeled) data where this correlation does not hold. In this setting, we find that co-training models with diverse feature priors can actually steer them away from such correlations and thus enable them to generalize to the underlying distribution. Overall, our findings highlight the potential of incorporating distinct feature priors into the training process. We believe that further work along this direction will lead us to models that generalize more reliably. 2 BACKGROUND: FEATURE PRIORS IN COMPUTER VISION When learning from structurally complex data, such as images, relying on raw input features alone (e.g., pixels) is not particularly useful. There has thus been a long line of work on extracting input patterns that can be more effective for prediction. While early approaches, such as SIFT (Lowe, 1999) and HOG (Dalal & Triggs, 2005), leveraged hand-crafted features, these have been by now largely replaced by features that are automatically learned in an end-to-end fashion (Krizhevsky, 2009; Ciregan et al., 2012; Krizhevsky et al., 2012). Nevertheless, even when features are learned, model designers still tune their models to better suit a particular task via changes in the architecture or training methodology. Such modifications can be thought of as imposing feature priors, i.e., priors that bias a model towards a particular set of features. One prominent example here are convolutional neural networks, which are biased towards learning a hierarchy of localized features Fukushima (1980); LeCun et al. (1989). Indeed, such a convolutional prior can be quite powerful: it is sufficient to enable many image synthesis tasks without any training Ulyanov et al. (2017). More recently, there has been work exploring the impact of explicitly restricting the set of features utilized by the model. For instance, Geirhos et al. (2019) demonstrate that training models on stylized inputs (and hence suppressing texture information) can improve model robustness to common corruptions. In a similar vein, Wang et al. (2019) penalize the predictive power of local features to learn shape-biased models that generalize better between image styles. A parallel line of work focuses on training models to be robust to small, worst-case input perturbations using, for example, adversarial training Goodfellow et al. (2015); Madry et al. (2018) or randomized smoothing (Lecuyer et al., 2019; Cohen et al., 2019). 
Such training biases these models away from non-robust features (Tsipras et al., 2019; Ilyas et al., 2019; Engstrom et al., 2019), which tends to result in them being more aligned with human perception (Tsipras et al., 2019; Kaur et al., 2019), more resilient to certain input corruptions (Ford et al., 2019; Kireev et al., 2021), and better suited for transfer to downstream tasks Utrera et al. (2020); Salman et al. (2020). 3 FEATURE PRIORS AS DIFFERENT PERSPECTIVES As we discussed, the choice of feature prior can have a large effect on what features a model relies on and, by extension, on how well it generalizes to unseen inputs. In fact, one can view such priors as distinct perspectives on the data, capturing different information about the input. In this section, we provide evidence to support this view; specifically, we examine a case study on a pair of feature priors that arise naturally in the context of image classification: shape and texture. 3.1 TRAINING SHAPE- AND TEXTURE-BIASED MODELS In order to train shape- and texture-biased models, we either pre-process the model input or modify the model architecture as follows: Shape-biased models. To suppress texture information in the images, we pre-process our inputs by applying an edge detection algorithm. We consider two such canonical algorithms: the Canny algorithm Ding & Goshtasby (2001) which produces a binary edge mask, and the Sobel algorithm Sobel & Feldman (1968) which provide a softer edge detection, hence retaining some texture information (see Figures 1b and 1c). Texture-biased models. To prevent the model from relying on the global structure of the image, we utilize a variant of the BagNet architecture Brendel & Bethge (2019). This architecture deliberately limits the receptive field of the model, thus forcing it to rely on local features (see Figure 1d). We visualize all of these priors in Figure 1 and provide implementation details in Appendix A. 3.2 DIVERSITY OF FEATURE-BIASED MODELS After training models with shape and texture biases as outlined above, we evaluate whether these models indeed capture complementary information about the input. Specifically, we train models on a small subset (100 examples per class) of the CIFAR-10 (Krizhevsky, 2009) and STL-10 (Coates et al., 2011) datasets, and measure the correlation between which test examples they correctly classify. We find that pairs consisting of a shape-biased model and a texture-biased model (i.e., Canny and BagNet, or Sobel and BagNet) indeed have the least correlated predictions—cf. Table 2. In other words, the mistakes that these models make are more diverse than those made by identical models trained from different random initializations. At the same time, different shape-biased models (Sobel and Canny) are relatively well-correlated with each other, which corroborates the fact that models trained on similar features of the input are likely to make similar mistakes. Model ensembles. Having shown that training models with these feature priors results in diverse prediction rules, we examine if we can now combine them to improve our generalization. The canonical approach for doing so is to incorporate these models into an ensemble. We find that the diversity of models trained with different feature priors indeed directly translates into an improved performance when combining them into an ensemble—cf. Table 3. 
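For completeness, one simple way to compute such a prediction-alignment statistic is sketched below, assuming NumPy arrays of the test labels and each model's predicted labels; the exact metric reported in Table 2 is a design choice, and here we use the correlation between the binary indicators of whether each model classifies a test example correctly.

import numpy as np

def correctness_alignment(preds_a, preds_b, labels):
    # Binary indicators of whether each model classifies each test example correctly.
    correct_a = (preds_a == labels).astype(float)
    correct_b = (preds_b == labels).astype(float)
    # Pearson correlation between the two indicator vectors (higher = more overlapping mistakes).
    return np.corrcoef(correct_a, correct_b)[0, 1]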
In fact, we find that the performance of the ensemble is tightly connected to prediction similarity of its constituents (as measured in Table 2), i.e., more diverse ensembles tend to perform better. For instance, the best ensemble for the STL-10 dataset is the one combining a shape-biased (Canny) and a texture-biased model (BagNet) which were the models with the least aligned predictions. 4 COMBINING DIVERSE PRIORS ON UNLABELED DATA In the previous section, we saw that training models with different feature priors (e.g., shape- and texture-biased models) can lead to prediction rules with less overlapping failure modes—which, in turn, can lead to more effective model ensembles. However, ensembles only combine model predictions post hoc and thus cannot take advantage of diversity during the training process. In this section, we instead focus on utilizing diversity during training. Specifically, we will leverage the diversity introduced through feature priors in the context of self-training Lee et al. (2013)—a framework commonly used when the labeled data is insufficient to learn a well-generalizing model. This framework utilizes unlabeled data, which are then pseudo-labeled using an existing model and used for further training. While such methods can often improve the overall model performance, they suffer from a significant drawback: models tend to reinforce suboptimal prediction rules even when these rules do not generalize to the underlying distribution Arazo et al. (2020). Our goal here is thus to leverage diverse feature priors to address this exact shortcoming. Specifically, we will jointly train models with different priors on the unlabeled data through the framework of co-training Blum & Mitchell (1998). Since these models capture complementary information about the input (cf. Table 2), we expect them to correct each other’s mistakes and improve their prediction rules. As we will see in this section, this approach can indeed have a significant impact on the performance of the resulting model, outperforming ensembles that combine such models only at evaluation time—see summary in Figure 4. Setup. We base our analysis on the CIFAR-10 and STL-10 datasets. Specifically, we treat a small fraction of the training set as labeled examples (100 examples per class), another fraction as our validation set for tuning hyperparameters (10% of the total training examples), and the rest as unlabeled data. We report our results on the standard test set of each dataset. (See Appendix A for experimental details, and Appendix B.6 for experiments with varying levels of labeled data.) 4.1 SELF-TRAINING AND ENSEMBLES Before outlining our method for jointly training models with multiple priors, we first describe the standard approach to self-training a single model. At a high level, the predictions of the model on the unlabeled data are treated as correct labels and are then used to further train the same model Lee et al. (2013); Iscen et al. (2019); Zou et al. (2019); Xie et al. (2020). The underlying intuition is that the classifier will predict the correct labels for that data better than chance, and thus these pseudo-labels can be used to expand the training set. In practice, however, these pseudo-labels tend to be noisy. Thus, a common approach is to only use the labels to which the model assigns the highest probability Lee et al. (2013). This process is repeated, self-training on increasingly larger fractions of the unlabeled data until all of it is used. 
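A minimal sketch of this confidence-based selection is shown below (the class-balanced version we actually use is given as Algorithm 1 in Appendix A.5), assuming the model's softmax probabilities on the unlabeled set are available as a NumPy array.

import numpy as np

def select_pseudo_labels(probs, fraction):
    # probs: (num_unlabeled, num_classes) softmax outputs on the unlabeled set.
    pseudo_labels = probs.argmax(axis=1)
    confidence = probs.max(axis=1)
    # Keep the requested fraction of examples whose predicted label has the highest probability.
    n_keep = int(fraction * len(probs))
    keep = np.argsort(-confidence)[:n_keep]
    return keep, pseudo_labels[keep]

At era t, fraction grows with t (an additional portion of the unlabeled set is included each era), and the selected examples are added to the training set with their pseudo-labels for further training.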
We refer to each such training phase as an era. Ensembles of diverse self-trained models. Similarly to our results in Table 3, we find that ensembles comprised of self-trained models with diverse feature priors outperform those that use the same prior from different random initializations (see Figure 4 for a summary and Appendix B.3 for the full results). This demonstrates that, after self-training, these models continue to capture complementary information about the input that can be leveraged to improve performance. 4.2 CO-TRAINING MODELS WITH DIFFERENT FEATURE PRIORS Moving beyond self-training with a single feature prior, our goal in this section is to leverage multiple feature priors by jointly training them on the unlabeled data. This idea naturally fits into the framework of co-training: a method used to learn from unlabeled data when inputs correspond to multiple independent sets of features Blum & Mitchell (1998). Concretely, we first train a model for each feature prior. Then, we collect the pseudo-labels on the unlabeled data that were assigned the highest probability for each model—including duplicates with potentially different labels—to form a new training set which we use for further training. Similarly to the self-training case, we repeat this process over several eras, increasing the fraction of the unlabeled dataset used at each era. Intuitively, this iterative process allows the models to bootstrap off of each other’s predictions, learning correlations that they might fail to learn from the labeled data alone. At the end of this process, we are left with two models, one for each prior, which we combine into a single classifier by training a standard model from scratch on the combined pseudo-labels. We provide a more detailed explanation of the methodology in Appendix A.5. Co-training performance. We find that co-training with shape- and texture-based priors can significantly improve the test accuracy of the final model compared to self-training with any of the priors alone (Table 5). This is despite the fact that, when using self-training alone, the standard model outperforms all other models (Column 4, Table 5). Moreover, co-training models with diverse priors improves upon simply combining them in an ensemble (Appendix B.3). In Appendix B.5, we report the performance of co-training with every pair of priors. We find that co-training with shape- and texture-based priors together (Canny + BagNet for STL-10 and Sobel + BagNet for CIFAR-10) outperform every other prior combination. Note that this is the case even though, when only ensembling models with different priors (c.f Table 3 and Appendix B.3), Standard + Sobel is consistently the best performing pair for CIFAR-10. Overall, these results indicate that the diversity of shape- and texture-biased models allows them to improve each other over training. Additionally, we find that, even when training a single model on the pseudo-labels of another model, prior diversity can still help. Specifically, we compare the performance of a standard model trained from scratch using pseudo-labels from various self-trained models (Column 5, Table 5). In this setting, using a self-trained shape- or texture-biased model for pseudo-labeling outperforms using a self-trained standard model. This is despite the fact that, in isolation, the standard model has higher accuracy than the shape- or texture-biased ones (Column 4, Table 5). Model alignment over co-training. 
To further explore the dynamics of co-training, we evaluate how the correlation between model predictions evolves as the eras progress in Figure 6 (using the prediction alignment measure of Table 2). We find that shape- and texture-biased models exhibit low correlation at the start of co-training, but this correlation increases as co-training progresses. This is in contrast to self-training each model on its own, where the correlation remains relatively low. It is also worth noting that the correlation appears to plateau at a lower value when co-training models with distinct feature priors as opposed to co-training two standard models. Finally, we find that a standard model trained on the pseudo-labels of other models correlates well with the models themselves (see Appendix B.7). Overall, these findings indicate that models trained on each other’s pseudo-labels end up behaving more similarly. 5 USING CO-TRAINING TO AVOID SPURIOUS CORRELATIONS A major challenge when training models for real-world deployment is avoiding spurious correlations: associations which are predictive on the training data but not valid for the actual task. Since models are typically trained to maximize train accuracy, they are quite likely to rely on such spurious correlations Gururangan et al. (2018); Beery et al. (2018); Geirhos et al. (2020); Xiao et al. (2020). In this section, our goal is to leverage diverse feature priors to control the sensitivity of the training process to such spurious correlations. Specifically, we will assume that the spurious correlation does not hold on the unlabeled data (which is likely since unlabeled data can often be collected at a larger scale). Without this assumption, the unlabeled contains no examples that could (potentially) contradict the spurious correlation (we investigate the setting where the unlabeled data is also similarly skewed in Appendix B.10). As we will see, if the problematic correlation is not easily captured by one of the priors, the corresponding model generates pseudo-labels that are inconsistent with this correlation, thus steering other models away from this correlation during co-training. Setup. We study spurious correlations in two settings. First, we create a synthetic dataset by tinting each image of the STL-10 labeled dataset in a class-specific way. This encourages models to rely on the tint, as it is highly predictive on the training set. However, this prediction rule does not generalize to the test set where this correlation is absent. Second, similar to Sagawa et al. (2020), we consider a gender classification task based on CelebA (Liu et al., 2015) where hair color (“blond” vs. “non-blond”) is predictive on the labeled data but not on the unlabeled and test data. While gender and hair color are independent attributes on the unlabeled dataset, the labeled dataset consists only of blond females and non-blond males. Similarly to the synthetic case, the labeled data encourages a prediction rule based only on hair color. See Appendix A.1 for details. Performance on datasets with spurious features. We find that, when trained only on the labeled data (where the correlation is fully predictive), both the standard and BagNet models generalize poorly in comparison to the shape-biased models (see Table 7). This behavior is expected: the spurious attribute in both datasets is color-related and hence mostly suppressed by the edge detection algorithms used to train shape-based models. 
Even after self-training on the unlabeled data (where the correlation is absent), the performance of the standard and BagNet models does not improve significantly. Finally, simply ensembling self-trained models post hoc does not improve their performance. Indeed as the texture-biased and standard models are significantly less accurate than the shape-biased one, they end up lowering the overall accuracy of the ensemble (see Appendix B.8). In contrast, when we co-train a texture-biased model with a shape-biased one, the texture-biased model improves substantially. For instance, when co-trained with a Canny model, the BagNet model improves over self-training by 42% on the tinted STL-10 dataset and 27% on the CelebA dataset. This improvement can be attributed to the fact that the predictions of the shape-biased model are not consistent with the spurious correlation on the unlabeled data. Hence, by being trained on pseudolabels from that model, the BagNet model is forced to rely on alternative, non-spurious features. Moreover, particularly on CelebA, the shape-biased model also improves when co-trained with a texture-biased model. This indicates that even though the texture-biased model relies on the spurious correlation, it also captures non-spurious features that, through pseudo-labeling, improve the performance of the shape-based model. In Appendix B.9, we find that these improvements are concentrated on inputs where the spurious correlation does not hold. 6 ADDITIONAL RELATED WORK In Section 2, we discussed the most relevant prior work on implicit or explicit feature priors. Here, we discuss additional related work and how it connects to our approach. Shape-biased models. Several other methods aim to bias models towards shape-based features: input stylization Geirhos et al. (2019); Somavarapu et al. (2020); Li et al. (2021), penalizing early layer predictiveness Wang et al. (2019), jigsaw puzzles Carlucci et al. (2019); Asadi et al. (2019), dropout Shi et al. (2020), or data augmentation Hermann et al. (2020). While, in our work, we choose to suppress texture information via edge detection algorithms, any of these methods can be substituted to generate the shape-based model for our analysis. Avoiding spurious correlations. Other methods that can prevent models from learning spurious correlations include: learning representations that are simultaneously optimal across domains (Arjovsky et al., 2019), enforcing robustness to group shifts (Sagawa et al., 2020), and utilizing multiple data points corresponding to a single physical entity (Heinze-Deml & Meinshausen, 2017). Similar in spirit to our work, these methods aim to learn prediction rules that are supported by multiple views of the data. However, we do not rely on annotations or multiple sources and instead impose feature priors through the model architecture and input preprocessing. Pseudo-labeling. Since the initial proposal of pseudo-labeling for neural networks Lee et al. (2013), there has been a number of more sophisticated pseudo-labeling schemes aimed at improving the accuracy and diversity of the labels Iscen et al. (2019); Augustin & Hein (2020); Xie et al. (2020); Rizve et al. (2021); Huang et al. (2021). In our work, we focus on the simplest scheme for self-labeling—i.e., confidence based example selection. Nevertheless, most of these schemes can be directly incorporated into our framework to potentially improve its overall performance. 
A recent line of work explores self-training by analyzing it under different assumptions on the data (Mobahi et al., 2020; Wei et al., 2021; Allen-Zhu & Li, 2020; Kumar et al., 2020). Closest to our work, Chen et al. (2020b) show that self-training on unlabeled data can reduce reliance on spurious correlations under certain assumptions. In contrast, we demonstrate that by leveraging diverse feature priors, we can avoid spurious correlations even if a model heavily relies on them. Consistency regularization. In parallel to pseudo-labeling, consistency regularization is another canonical technique for leveraging unlabeled data. Here, a model is trained to be invariant to a set of input transformations. These transformations might stem from data augmentations and architecture stochasticity Laine & Aila (2017); Berthelot et al. (2019); Chen et al. (2020a); Sohn et al. (2020); Prabhu et al. (2021) or using adversarial examples Miyato et al. (2018). Co-training. One line of work studies co-training from a theoretical perspective (Nigam & Ghani, 2000; Balcan et al., 2005; Goldman & Zhou, 2000). Other work aims to improve co-training by either expanding the settings where it can be applied (Chen et al., 2011) or by improving its stability (Ma et al., 2020; Zhang & Zhou, 2011). Finally, a third line of work applies co-training to images. Since images cannot be separated into disjoint feature sets, one would apply co-training by training multiple models Han et al. (2018), either regularized to be diverse through adversarial examples Qiao et al. (2018) or each trained using a different method Yang et al. (2020). Our method is complementary to these approaches as it relies on explicit feature priors to obtain different views. 7 CONCLUSION In this work, we explored the benefits of combining feature priors with non-overlapping failure modes. By capturing complementary perspectives on the data, models trained with diverse feature priors can offset each others mistakes when combined through methods such as ensembles. Moreover, in the presence of unlabeled data, we can leverage prior diversity by jointly boostrapping models with different priors through co-training. This allows the models to correct each other during training, thus improving pseudo-labeling and controlling for correlations that do not generalize well. We believe that our work is only the first step in exploring the design space of creating, manipulating, and combining feature priors to improve generalization. In particular, our framework is quite flexible and allows for a number of different design choices, such as choosing other feature priors (cf. Sections 2 and 6), using other methods for pseudo-label selection (e.g., using uncertainty estimation (Lee et al., 2018; Rizve et al., 2021)), and combining pseudo-labels via different ensembling methods. More broadly, we believe that exploring the synthesis of explicit feature priors in new applications is an exciting avenue for further research. A EXPERIMENTAL DETAILS A.1 DATASETS For our first set of experiments (Section 4), we focus on a canonical setting where a small portion of the training set if labeled and we have access to a pool of unlabeled data. STL-10. The STL-10 Coates et al. (2011) dataset contains 5,000 training and 8,000 test images of size 96×96 from 10 classes. We designate 1,000 of the 5,000 (20%) training examples to be the labeled training set, 500 (10%) to be the validation set, and the rest are used as unlabeled data. CIFAR-10. 
The CIFAR-10 Krizhevsky (2009) dataset contains 50,000 training and 8,000 test images of size 32×32 from 10 classes. We designate 1,000 of the 50,000 (2%) training examples to be the labeled training set, 5000 (10%) to be the validation set, and the rest as unlabeled data. In both cases, we report the final performance on the standard test set of that dataset. We also create two datasets that each contain a different spurious correlation. Tinted STL-10. We reuse the STL-10 setup described above, but we add a class-specific tint to each image in the (labeled) training set. Specifically, we hand-pick a different color for each of the 10 classes and then add this color to each of the pixels (ensuring that each RGB channel remains within the valid range)—see Figure 8 for examples. This tint is only present in the labeled part of the training set, the unlabeled and test parts of the dataset are left unaltered. Biased CelebA. We consider the task of predicting gender in the CelebA Liu et al. (2015) dataset. In order to create a biased training set, we choose a random sample of 500 non-blond males and 500 blond females. We then use a balanced unlabeled dataset consisting of 1,000 random samples for each of: blond males, blond females, non-blond males, and non-blond females. We use the standard CelebA test set which consists of 12.41% blond females, 48.92% non-blond females, 0.90% blond males, and 37.77% non-blond males. (Note that a classifier predicting purely based on hair color with have an accuracy of 50.18% on that test set.) All of the datasets that we use are freely available for non-commercial research purposes. Moreover, to the best of our knowledge, they do not contain offensive content or identifiable information (other than publicly available celebrity photos). A.2 MODEL ARCHITECTURES AND INPUT PREPROCESSING For both the standard model and the models trained on images processed by edge detection algorithm, we use a standard model architecture—namely, VGG16 Simonyan & Zisserman (2015) with the addition of batch normalization Ioffe & Szegedy (2015) (often referred to as VGG16-BN). We describe the exact edge detection process as well as the architecture of the BagNet model (texture prior) below. We visualize these priors in Figure 10. Canny edge detection. Given an image, we first smooth it with a 5 pixel bilateral filter Tomasi & Manduchi (1998), with filter σ in the coordinate and color space set to 75. After smoothing, the image is converted to gray-scale. Finally, a Canny filter Canny (1986) is applied to the image, with hysteresis thresholds 100 and 200, to extract the edges. Sobel edge detection. Given an image, we first upsample it to 128×128 pixels. Then we convert it to gray-scale and apply a Gaussian blur (kernel size=5, σ = 5). The image is then passed through a Sobel filter Sobel & Feldman (1968) with a kernel size of 3 in both the horizontal and the vertical direction to extract the image gradients. BagNet. For our texture-biased model, we use a slimmed down version of the BagNet architecture from Brendel & Bethge (2019). The goal of this architecture is to limit the receptive field of the model, hence forcing it to make predictions based on local features. The exact architecture we used is shown in Figure 9. Intuitively, the top half of the network—i.e., the green and blue blocks— construct features on patches of size 20×20 for 96×96 images and 10×10 for 32×32 images. 
The rest of the network consists only of 1×1 convolutions and max-pooling, hence not utilizing the image’s spatial structure. Custom BagNet20 Custom BagNet10 A.3 TRAINING SETUP A.3.1 BASIC TRAINING We train all our models using stochastic gradient descent (SGD) with momentum (a coefficient of 0.9) and a decaying learning rate. We add weight decay regularization with a coefficient of 10−4. In terms of data augmentation, we apply random cropping with a padding of 4 pixels, random horizontal flips, and a random rotation of ±2 degrees. These transformations are applied after the edge detection processing. We train all models with a batch size of 64 for 96×96-sized images and 128 for 32×32-sized images for a total of 300 epochs. All our experiments are performed using our internal cluster which mainly consists of NVIDIA 1080 Ti GTX GPUs. Hyperparameter tuning. To ensure a fair comparison across feature priors, we selected the hyperparameters for each dataset-prior pair separately, using the held-out validation set (separate from the final test used for reporting performance). Specifically, we performed a grid search choosing the learning rate (LR) from [0.1, 0.05, 0.02, 0.01, 0.005], the number of epochs between each learning rate drop (K) from [50, 100, 300] and the factor with which the learning rate is multiplied (γ) from [0.5, 1]. The parameters chosen are shown in Table 11. We found that all models achieved nearoptimal performance strictly within the range of each hyperparameters. Thus, we did not consider a wider grid. A.4 ENSEMBLES In order to leverage prior diversity, we ensemble models trained with (potentially) different priors. We use the following ensembles: 1. Take Max: Predict based on the model assigning the highest probability on this example. 2. Average: Average the (softmax) output probabilities of the models, predict the class assigned the highest probability. 3. Rank: Each model ranks all test examples based on the probability assigned to their predicted labels. Then, for each example, we predict using the model which has a lower rank on this example. We then report the maximum of these ensemble methods in Table 3. A.5 SELF-TRAINING AND CO-TRAINING SCHEMES In the setting that we are focusing on, we are provided with a labeled dataset X and an unlabeled dataset U, where typically there is much more unlabeled data (|U| |X|). We are then choosing a set of (one or more) feature priors each of which corresponds to a different way of training a model (e.g., using edge detection preprocessing). General methodology. We start by training each of these models on the labeled dataset. Then, we combine the predictions of these models to produce pseudo-labels for the unlabeled dataset. Finally, we choose a fraction of the unlabeled data and train the models on that set using the produced pseudo-labels (in additional to the original labeled set X). This process is repeated using increasing fractions of the unlabeled dataset until, eventually, models are trained on its entirety. We refer to each such phase as an era. We include an additional 5% of the unlabeled data per era, resulting in a total of 20 eras. During each era, we use the training process described in Appendix A.3.1 without re-initializing the models (warm start). After completing this process, we train a standard model from scratch using both the labeled set and resulting pseudo-labels. The methodology used for choosing and combining pseudo-labels is described below for each scheme. Self-training. 
Since we are only training one model, we only need to decide how to choose the pseudo-labels to use for each era. We do this in the simplest way: at era t, we pick the subset Ut ⊆ U of examples that are assigned the highest probability on their predicted label. We attempt to produce a class-balanced training set by applying this process separately on each class (as predicted by the model). The pseudocode for the method is provided in Algorithm 1.
Algorithm 1: Self-training
Parameters: Number of eras T. Fraction added per era k.
Input: Labeled data X with n classes, unlabeled data U, model trained on X.
for era t ∈ 1...T do
    forward-pass U through the model to create pseudo-labels
    Ut = []
    for each class c do
        Select the kt|U|/n most confident examples from U predicted by the model as class c
        Add those examples to Ut with class c
    Re-train (warm start) the model on X ∪ Ut until convergence
Train a standard model from scratch on X ∪ UT.
Standard co-training. Here, we train multiple models (in our experiments, two) based on a common pool of pseudo-labeled examples in each era. In each era t, each model labels the unlabeled dataset U. Then, for each class, we alternate between models, adding the next most confident example predicted as that class by that model to Ut, until a fixed number of unique examples has been added for that class (5% of the size of the unlabeled dataset per era). Note that this process allows both conflicts and duplicates: if multiple models are confident about a specific example, that example may be added more than once (potentially with a different label each time). Finally, we train each model (without re-initializing) on X ∪ Ut. The pseudocode for this method can be found in Algorithm 2.
Algorithm 2: Standard Co-Training
Parameters: Number of eras T. Fraction added per era k.
Input: Labeled data X with n classes, unlabeled data U, models trained on X.
for era t ∈ 1...T do
    forward-pass U through each model to create pseudo-labels
    Ut = []
    for each class c do
        Ut(c) = []
        while the number of unique examples in Ut(c) < kt|U|/n do
            for each model m do
                Add the next most confident example predicted by m as class c to Ut(c)
        Add Ut(c) to Ut
    Re-train (warm start) each model on X ∪ Ut until convergence
Train a standard model from scratch on X ∪ UT.
B ADDITIONAL EXPERIMENTS B.1 EXPERIMENT ORGANIZATION We now provide the full experimental results used to create the plots in the main body as well as additional analysis. Specifically, in Appendix B.2 and B.3 we present the performance of individual ensemble schemes for pre-trained and self-trained models respectively. Then, in Appendix B.5 we present the performance of co-training for each combination of feature priors. In Appendix B.7 we analyse the effect that co-training has on model similarity after training. Finally, in Appendix B.8 we evaluate model ensembles on datasets with spurious correlations and in Appendix B.9 we break down the performance of co-training on the skewed CelebA dataset according to different input attributes. B.2 FULL PRE-TRAINED ENSEMBLE RESULTS In Table 3, we reported the best ensemble method for each pair of models trained with different priors on the labeled data. In Table 12, we report the full results over the individual ensembles. B.3 ENSEMBLING SELF-TRAINED MODELS In Table 13, we report the best ensemble method for pairs of self-trained models with different priors. In Table 14, we report the full results over the individual ensembles.
We find that, similar to the ensembles of models trained on the labeled data, models with diverse priors gain more from ensembling. However, co-training models with diverse priors together still outperforms ensembling self-trained models. B.4 STACKED ENSEMBLING Here we consider an ensembling technique that leverages a validation set. We implement stacking (also called blending) Töscher et al. (2009); Sill et al. (2009), which takes in the outputs of the member models as input and then trains a second model to produce the final prediction. Here, we take the logits of each model in the ensemble and train the secondary model using logistic regression on the validation set for the dataset. We report accuracies of the ensemble on the test set below. We again find that prior diversity is important for the performance of the ensemble. B.5 SELF-TRAINING AND CO-TRAINING ON STL-10 AND CIFAR-10 B.6 CO-TRAINING WITH VARYING AMOUNTS OF LABELED DATA In Table 19, we study how the efficacy of combining diverse priors through co-training changes as the number of labeled examples increases for STL-10. As one might expect, when labeled data is sparse, the feature priors learned by the models alone are relatively brittle: thus, leveraging diverse priors against each other on unlabeled data improves generalization. As the number of labeled examples increases, the models with single feature priors learn more reliable prediction rules that can already generalize, so the additional benefit of combining feature priors diminishes. However, even in settings with plentiful data, combining diverse feature priors can aid generalization if there is a spurious correlation in the labeled data (see Section 5). B.7 CORRELATION BETWEEN THE INDIVIDUAL FEATURE-BIASED MODELS AND THE FINAL STANDARD MODEL B.8 ENSEMBLES FOR SPURIOUS DATASETS In Table 21 (full table in Table 22), we ensemble the self-trained priors for the Tinted STL-10 dataset and the CelebA dataset as in Section 5. Both of these datasets have a spurious correlation based on color, which results in weak Standard and BagNet models. As a result, the ensembles with the Standard or BagNet models do not perform well on the test set. However, in Section 5, we find that co-training in this setting allows the BagNet model to improve when jointly trained with a shape model, thus boosting the final performance. B.9 BREAKDOWN OF TEST ACCURACY FOR CO-TRAINING ON CELEBA B.10 WHAT IF THE UNLABELED DATA ALSO CONTAINED THE SPURIOUS CORRELATION? In Section 5, we assume that the unlabeled data does not contain the spurious correlation present in the labeled data. This is often the case when unlabeled data can be collected through a more diverse process than labeled data (for example, by scraping the web at large scale or by passively collecting data during deployment). This assumption is important: in order to successfully steer models away from the spurious correlation during co-training, the process needs to surface examples which contradict the spurious correlation. However, if the unlabeled data is also heavily skewed, such examples might be rare or non-existent. What happens if the unlabeled data is as heavily skewed as the labeled data? We return to the setting of a spurious association between hair color and gender in CelebA. However, unlike in Section 5, we use an unlabeled dataset that also perfectly correlates hair color and gender – it contains 2,000 non-blond males and 2,000 blond females.
The unlabeled data thus has the same distribution as the labeled data, and contains no examples that reject the spurious correlation (blond males or non-blond females). Self-Training: Since the unlabeled data follows the spurious correlation between hair color and gender, the standard and BagNet models almost perfectly pseudo-label the unlabeled data. Thus, they are simply increasing the number of examples in the training dataset while maintaining the same overall distribution. Self-training thus does not change the accuracy of models with these priors significantly. In contrast, in the setting in Section 5, there were examples in the unlabeled data which did not align with the spurious correlation (blond males and non-blond females). Since they relied mostly on hair color, the standard and BagNet models actively mislabeled these examples (i.e., by labeling a blond male as female). Training on these erroneous pseudo-labels actively suppressed any features that were not hair color, causing the standard and BagNet models to perform worse after self-training. Co-Training: In contrast, when performing co-training with the Canny and BagNet priors, the Canny model (which cannot detect hair color) will make mistakes on the unlabeled dataset. These mistakes are inconsistent with a reliance on hair color: due to this regularization, the BagNet's accuracy improves from 69.35% to 76.52%. Overall, though the gain is not as significant as in the setting with a balanced unlabeled dataset, the Canny + BagNet co-trained model can mitigate the pitfalls of the BagNet's reliance on hair color and outperform even the Canny self-trained model.
1. What is the main contribution of the paper regarding feature priors and combining models? 2. What are the strengths of the proposed approach, particularly in terms of shape and texture feature priors? 3. What are the weaknesses of the paper, especially regarding ensemble techniques and assumptions about unlabeled data? 4. Do you have any concerns or suggestions for improving the paper, such as adding more sophisticated ensembling methods or presenting the absence-of-spurious-correlation assumption more cautiously? 5. Would it be beneficial to include cotraining results on another domain aside from image classification, and if so, how could this be achieved given space constraints?
Summary Of The Paper Review
Summary Of The Paper The paper presents multiple techniques for training models with different feature priors (i.e. inclinations to focus on different aspects of the training data) and combining them, either post hoc via ensembles or by allowing the models to provide augmented pseudo-labelled training data to each other via co-training. When using simple ensembling techniques, ensembles with a diversity of feature priors are show to perform better than ensembles where the individual models have similar feature priors. Co-training is shown to boost performance substantially when models with diverse feature priors supply pseudo-labels to each other. The problem domain is image classification. The feature priors concern shape and texture. Different preprocessing and or architecture constraint techniques are used for different models so as to predispose them to focus on shape but not texture or vice versa. Review The implementation of shape feature priors (via edge detection preprocessing) and texture feature priors (via limited receptive field via bagnet) makes a lot of sense and does a good job of illustrating a concrete example of a collection of different feature priors. Some of the co-training experimental results are strong. One concern I have is that the ensemble results presented in section 3.2 are generated using very primitive ensembling techniques. Appendix A.4 says the combination techniques were simply max, average and lowest rank. It is more common to treat this kind of ensembling as a 2nd-level machine learning problem with the outputs of the models forming the inputs to a 2nd-level model. I would not have expected a fancy 2nd-level model but I was hoping at a minimum that the first-level models would be combined via. e.g. linear regression on first-level outputs from a held-out validation set. Fancier combinations (e.g. neural nets) are also possible, of course. I would encourage the authors to read a few writeups by winners of Kaggle competitions and/or read about the ensembling done in the Netflix Prize to get a better sense of what constitutes state-of-the-art ensemble combination techniques. I was also concerned about the assumption made on page 7 that spurious correlations are likely not to exist in unlabelled data, because unlabelled data supposedly comes from a more diverse collection process. While this may be true in some cases, there will also be many real-world situations where all the input data, whether labelled or unlabelled, comes from the same distribution and it may all have the same spurious correlation. It is often the case that a small portion of the data is labelled simply because it is very time intensive for humans to do the labeling but nonetheless, the remaining unlabelled data comes from the same distribution. I hope that in future versions of this work, the authors make clear that practitioners should think hard about whether their unlabelled data will have the same spurious correlations as their labelled data, rather than assuming that this is likely the case. For these reasons, I can only give the paper a 5. I would prefer that the ensemble section be redone with more sophisticated ensembling and/or removed and I would prefer that the absence-of-spurious-correlation-in-unlabelled data assumption be presented more cautiously. A minor complaint: from a presentation point of view, it is non-standard and a bit strange to add additional related work in section 6. 
I don't normally expect to read about related work after the results and towards the end of the paper. Related work is normally presented earlier in a paper. The authors might consider moving this section to an earlier point in the paper. On page 8, I found the bolding of +BagNet cotraining results to be a bit confusing. Normally the 'winning' algorithm results are bolded, which in this case is Canny. I realize that the message of the huge boost of cotraining for +BagNet is what is intended but it still confused me that the bolded numbers were not the best numbers. It also would be nice to show the method on another domain aside from image classification, although I realize space constraints might make this difficult. The authors might consider removing the ensembling section in future versions of the work and instead using that space for cotraining results on another type of problem. *** Update after author rebuttal *** In light of the addition of stacking experiments for the ensembling, I have raised my score to a 6.
ICLR
Title Combining Diverse Feature Priors Abstract To improve model generalization, model designers often restrict the features that their models use, either implicitly or explicitly. In this work, we explore the design space of leveraging such feature priors by viewing them as distinct perspectives on the data. Specifically, we find that models trained with diverse sets of feature priors have less overlapping failure modes, and can thus be combined more effectively. Moreover, we demonstrate that jointly training such models on additional (unlabeled) data allows them to correct each other’s mistakes, which, in turn, leads to better generalization and resilience to spurious correlations. 1 INTRODUCTION The driving force behind deep learning’s success is its ability to automatically discover predictive features in complex high-dimensional datasets. In fact, these features can generalize beyond the specific task at hand, thus enabling models to transfer to other (yet similar) tasks (Donahue et al., 2014). At the same time, the set of features that the model learns has a large impact on how well it will perform on unseen inputs, especially in the presence of distribution shift (Ponce et al., 2006; Torralba & Efros, 2011; Sagawa et al., 2020) or spurious correlations (Heinze-Deml & Meinshausen, 2017; Beery et al., 2018; Meinshausen, 2018). Motivated by this, recent work focuses on encouraging specific modes of behavior by preventing the models from relying on certain features. Examples include suppressing texture features (Geirhos et al., 2019; Wang et al., 2019), avoiding `p-non-robust features (Tsipras et al., 2019; Engstrom et al., 2019), or utilizing different parts of the frequency spectrum (Yin et al., 2019). At a high level, these methods can be thought of as ways of imposing a feature prior on the learning process, so as to bias the model towards acquiring features that generalize better. This makes the choice of the feature prior to impose a key design decision. The goal of this work is thus to explore the underlying design space of feature priors and, specifically, to understand: How can we effectively harness the diversity of feature priors? OUR CONTRIBUTIONS In this paper, we cast diverse feature priors as different perspectives on the data and study how they can complement each other. In particular, we aim to understand whether training with distinct priors result in models with non-overlapping failure modes and how such models can be combined to improve generalization. This is particularly relevant in settings where the data is unreliable— e.g, when the training data contains a spurious correlation. From this perspective, we focus our study on two priors that arise naturally in the context of image classification, shape and texture, and investigate the following: Feature diversity. We demonstrate that training models with diverse feature priors results in them making mistakes on different parts of the data distribution, even if they perform similarly in terms of overall accuracy. Further, one can harness this diversity to build model ensembles that are more accurate than those based on combining models which have the same feature prior. Combining feature priors on unlabeled data. When learning from unlabeled data, the choice of feature prior can be especially important. For strategies such as self-training, sub-optimal prediction rules learned from sparse labeled data can be reinforced when pseudo-labeling the unlabeled data. 
We show that, in such settings, we can leverage the diversity of feature priors to address these issues. By jointly training models with different feature priors on the unlabeled data through the framework of co-training Blum & Mitchell (1998), we find that the models can correct each other’s mistakes to learn prediction rules that generalize better. Learning in the presence of spurious correlations. Finally, we want to understand whether combining diverse priors during training, as described above, can prevent models from relying on correlations that are spurious, i.e., correlations that do not hold on the actual distribution of interest. To model such scenarios, we consider a setting where a spurious correlation is present in the training data but we also have access to (unlabeled) data where this correlation does not hold. In this setting, we find that co-training models with diverse feature priors can actually steer them away from such correlations and thus enable them to generalize to the underlying distribution. Overall, our findings highlight the potential of incorporating distinct feature priors into the training process. We believe that further work along this direction will lead us to models that generalize more reliably. 2 BACKGROUND: FEATURE PRIORS IN COMPUTER VISION When learning from structurally complex data, such as images, relying on raw input features alone (e.g., pixels) is not particularly useful. There has thus been a long line of work on extracting input patterns that can be more effective for prediction. While early approaches, such as SIFT (Lowe, 1999) and HOG (Dalal & Triggs, 2005), leveraged hand-crafted features, these have been by now largely replaced by features that are automatically learned in an end-to-end fashion (Krizhevsky, 2009; Ciregan et al., 2012; Krizhevsky et al., 2012). Nevertheless, even when features are learned, model designers still tune their models to better suit a particular task via changes in the architecture or training methodology. Such modifications can be thought of as imposing feature priors, i.e., priors that bias a model towards a particular set of features. One prominent example here are convolutional neural networks, which are biased towards learning a hierarchy of localized features Fukushima (1980); LeCun et al. (1989). Indeed, such a convolutional prior can be quite powerful: it is sufficient to enable many image synthesis tasks without any training Ulyanov et al. (2017). More recently, there has been work exploring the impact of explicitly restricting the set of features utilized by the model. For instance, Geirhos et al. (2019) demonstrate that training models on stylized inputs (and hence suppressing texture information) can improve model robustness to common corruptions. In a similar vein, Wang et al. (2019) penalize the predictive power of local features to learn shape-biased models that generalize better between image styles. A parallel line of work focuses on training models to be robust to small, worst-case input perturbations using, for example, adversarial training Goodfellow et al. (2015); Madry et al. (2018) or randomized smoothing (Lecuyer et al., 2019; Cohen et al., 2019). 
Such training biases these models away from non-robust features (Tsipras et al., 2019; Ilyas et al., 2019; Engstrom et al., 2019), which tends to result in them being more aligned with human perception (Tsipras et al., 2019; Kaur et al., 2019), more resilient to certain input corruptions (Ford et al., 2019; Kireev et al., 2021), and better suited for transfer to downstream tasks Utrera et al. (2020); Salman et al. (2020). 3 FEATURE PRIORS AS DIFFERENT PERSPECTIVES As we discussed, the choice of feature prior can have a large effect on what features a model relies on and, by extension, on how well it generalizes to unseen inputs. In fact, one can view such priors as distinct perspectives on the data, capturing different information about the input. In this section, we provide evidence to support this view; specifically, we examine a case study on a pair of feature priors that arise naturally in the context of image classification: shape and texture. 3.1 TRAINING SHAPE- AND TEXTURE-BIASED MODELS In order to train shape- and texture-biased models, we either pre-process the model input or modify the model architecture as follows: Shape-biased models. To suppress texture information in the images, we pre-process our inputs by applying an edge detection algorithm. We consider two such canonical algorithms: the Canny algorithm Ding & Goshtasby (2001) which produces a binary edge mask, and the Sobel algorithm Sobel & Feldman (1968) which provide a softer edge detection, hence retaining some texture information (see Figures 1b and 1c). Texture-biased models. To prevent the model from relying on the global structure of the image, we utilize a variant of the BagNet architecture Brendel & Bethge (2019). This architecture deliberately limits the receptive field of the model, thus forcing it to rely on local features (see Figure 1d). We visualize all of these priors in Figure 1 and provide implementation details in Appendix A. 3.2 DIVERSITY OF FEATURE-BIASED MODELS After training models with shape and texture biases as outlined above, we evaluate whether these models indeed capture complementary information about the input. Specifically, we train models on a small subset (100 examples per class) of the CIFAR-10 (Krizhevsky, 2009) and STL-10 (Coates et al., 2011) datasets, and measure the correlation between which test examples they correctly classify. We find that pairs consisting of a shape-biased model and a texture-biased model (i.e., Canny and BagNet, or Sobel and BagNet) indeed have the least correlated predictions—cf. Table 2. In other words, the mistakes that these models make are more diverse than those made by identical models trained from different random initializations. At the same time, different shape-biased models (Sobel and Canny) are relatively well-correlated with each other, which corroborates the fact that models trained on similar features of the input are likely to make similar mistakes. Model ensembles. Having shown that training models with these feature priors results in diverse prediction rules, we examine if we can now combine them to improve our generalization. The canonical approach for doing so is to incorporate these models into an ensemble. We find that the diversity of models trained with different feature priors indeed directly translates into an improved performance when combining them into an ensemble—cf. Table 3. 
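For concreteness, the simple combination rules used for these ensembles (described in detail in Appendix A.4) can be sketched as follows. This is a minimal illustration operating on precomputed softmax outputs; the array names are placeholders rather than the authors' implementation.

```python
import numpy as np

def average_ensemble(prob_a: np.ndarray, prob_b: np.ndarray) -> np.ndarray:
    """Average the softmax outputs of two models and predict the argmax class.

    prob_a, prob_b: (num_examples, num_classes) class probabilities produced by,
    e.g., a shape-biased (Canny) and a texture-biased (BagNet) model.
    """
    return ((prob_a + prob_b) / 2.0).argmax(axis=1)

def take_max_ensemble(prob_a: np.ndarray, prob_b: np.ndarray) -> np.ndarray:
    """For each example, defer to whichever model assigns the higher probability."""
    conf_a, conf_b = prob_a.max(axis=1), prob_b.max(axis=1)
    pred_a, pred_b = prob_a.argmax(axis=1), prob_b.argmax(axis=1)
    return np.where(conf_a >= conf_b, pred_a, pred_b)
```

Because models with diverse priors have less overlapping failure modes, either rule tends to recover examples that one of the constituent models gets wrong.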
In fact, we find that the performance of the ensemble is tightly connected to prediction similarity of its constituents (as measured in Table 2), i.e., more diverse ensembles tend to perform better. For instance, the best ensemble for the STL-10 dataset is the one combining a shape-biased (Canny) and a texture-biased model (BagNet) which were the models with the least aligned predictions. 4 COMBINING DIVERSE PRIORS ON UNLABELED DATA In the previous section, we saw that training models with different feature priors (e.g., shape- and texture-biased models) can lead to prediction rules with less overlapping failure modes—which, in turn, can lead to more effective model ensembles. However, ensembles only combine model predictions post hoc and thus cannot take advantage of diversity during the training process. In this section, we instead focus on utilizing diversity during training. Specifically, we will leverage the diversity introduced through feature priors in the context of self-training Lee et al. (2013)—a framework commonly used when the labeled data is insufficient to learn a well-generalizing model. This framework utilizes unlabeled data, which are then pseudo-labeled using an existing model and used for further training. While such methods can often improve the overall model performance, they suffer from a significant drawback: models tend to reinforce suboptimal prediction rules even when these rules do not generalize to the underlying distribution Arazo et al. (2020). Our goal here is thus to leverage diverse feature priors to address this exact shortcoming. Specifically, we will jointly train models with different priors on the unlabeled data through the framework of co-training Blum & Mitchell (1998). Since these models capture complementary information about the input (cf. Table 2), we expect them to correct each other’s mistakes and improve their prediction rules. As we will see in this section, this approach can indeed have a significant impact on the performance of the resulting model, outperforming ensembles that combine such models only at evaluation time—see summary in Figure 4. Setup. We base our analysis on the CIFAR-10 and STL-10 datasets. Specifically, we treat a small fraction of the training set as labeled examples (100 examples per class), another fraction as our validation set for tuning hyperparameters (10% of the total training examples), and the rest as unlabeled data. We report our results on the standard test set of each dataset. (See Appendix A for experimental details, and Appendix B.6 for experiments with varying levels of labeled data.) 4.1 SELF-TRAINING AND ENSEMBLES Before outlining our method for jointly training models with multiple priors, we first describe the standard approach to self-training a single model. At a high level, the predictions of the model on the unlabeled data are treated as correct labels and are then used to further train the same model Lee et al. (2013); Iscen et al. (2019); Zou et al. (2019); Xie et al. (2020). The underlying intuition is that the classifier will predict the correct labels for that data better than chance, and thus these pseudo-labels can be used to expand the training set. In practice, however, these pseudo-labels tend to be noisy. Thus, a common approach is to only use the labels to which the model assigns the highest probability Lee et al. (2013). This process is repeated, self-training on increasingly larger fractions of the unlabeled data until all of it is used. 
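As a rough sketch of this confidence-based selection step (not the authors' exact code), one round of pseudo-label selection might look as follows; the `predict_proba` and retraining calls at the end are placeholders.

```python
import numpy as np

def select_pseudo_labels(probs: np.ndarray, frac: float, num_classes: int):
    """Pick the most confident unlabeled examples, balanced per predicted class.

    probs: (num_unlabeled, num_classes) model probabilities on the unlabeled pool.
    frac: fraction of the unlabeled pool to pseudo-label in this round.
    Returns indices into the unlabeled pool and the corresponding pseudo-labels.
    """
    preds, conf = probs.argmax(axis=1), probs.max(axis=1)
    per_class = int(frac * len(probs) / num_classes)
    chosen_idx, chosen_lbl = [], []
    for c in range(num_classes):
        class_idx = np.where(preds == c)[0]
        top = class_idx[np.argsort(-conf[class_idx])][:per_class]  # most confident first
        chosen_idx.extend(top.tolist())
        chosen_lbl.extend([c] * len(top))
    return np.array(chosen_idx), np.array(chosen_lbl)

# One round (sketch): pseudo-label, select, then retrain with a warm start.
# probs = model.predict_proba(unlabeled_images)                         # placeholder API
# idx, pseudo = select_pseudo_labels(probs, frac=0.05, num_classes=10)
# retrain(model, labeled_images + unlabeled_images[idx], labels + pseudo)  # placeholder
```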
We refer to each such training phase as an era. Ensembles of diverse self-trained models. Similarly to our results in Table 3, we find that ensembles comprised of self-trained models with diverse feature priors outperform those that use the same prior from different random initializations (see Figure 4 for a summary and Appendix B.3 for the full results). This demonstrates that, after self-training, these models continue to capture complementary information about the input that can be leveraged to improve performance. 4.2 CO-TRAINING MODELS WITH DIFFERENT FEATURE PRIORS Moving beyond self-training with a single feature prior, our goal in this section is to leverage multiple feature priors by jointly training them on the unlabeled data. This idea naturally fits into the framework of co-training: a method used to learn from unlabeled data when inputs correspond to multiple independent sets of features Blum & Mitchell (1998). Concretely, we first train a model for each feature prior. Then, we collect the pseudo-labels on the unlabeled data that were assigned the highest probability for each model—including duplicates with potentially different labels—to form a new training set which we use for further training. Similarly to the self-training case, we repeat this process over several eras, increasing the fraction of the unlabeled dataset used at each era. Intuitively, this iterative process allows the models to bootstrap off of each other’s predictions, learning correlations that they might fail to learn from the labeled data alone. At the end of this process, we are left with two models, one for each prior, which we combine into a single classifier by training a standard model from scratch on the combined pseudo-labels. We provide a more detailed explanation of the methodology in Appendix A.5. Co-training performance. We find that co-training with shape- and texture-based priors can significantly improve the test accuracy of the final model compared to self-training with any of the priors alone (Table 5). This is despite the fact that, when using self-training alone, the standard model outperforms all other models (Column 4, Table 5). Moreover, co-training models with diverse priors improves upon simply combining them in an ensemble (Appendix B.3). In Appendix B.5, we report the performance of co-training with every pair of priors. We find that co-training with shape- and texture-based priors together (Canny + BagNet for STL-10 and Sobel + BagNet for CIFAR-10) outperform every other prior combination. Note that this is the case even though, when only ensembling models with different priors (c.f Table 3 and Appendix B.3), Standard + Sobel is consistently the best performing pair for CIFAR-10. Overall, these results indicate that the diversity of shape- and texture-biased models allows them to improve each other over training. Additionally, we find that, even when training a single model on the pseudo-labels of another model, prior diversity can still help. Specifically, we compare the performance of a standard model trained from scratch using pseudo-labels from various self-trained models (Column 5, Table 5). In this setting, using a self-trained shape- or texture-biased model for pseudo-labeling outperforms using a self-trained standard model. This is despite the fact that, in isolation, the standard model has higher accuracy than the shape- or texture-biased ones (Column 4, Table 5). Model alignment over co-training. 
To further explore the dynamics of co-training, we evaluate how the correlation between model predictions evolves as the eras progress in Figure 6 (using the prediction alignment measure of Table 2). We find that shape- and texture-biased models exhibit low correlation at the start of co-training, but this correlation increases as co-training progresses. This is in contrast to self-training each model on its own, where the correlation remains relatively low. It is also worth noting that the correlation appears to plateau at a lower value when co-training models with distinct feature priors as opposed to co-training two standard models. Finally, we find that a standard model trained on the pseudo-labels of other models correlates well with the models themselves (see Appendix B.7). Overall, these findings indicate that models trained on each other’s pseudo-labels end up behaving more similarly. 5 USING CO-TRAINING TO AVOID SPURIOUS CORRELATIONS A major challenge when training models for real-world deployment is avoiding spurious correlations: associations which are predictive on the training data but not valid for the actual task. Since models are typically trained to maximize train accuracy, they are quite likely to rely on such spurious correlations Gururangan et al. (2018); Beery et al. (2018); Geirhos et al. (2020); Xiao et al. (2020). In this section, our goal is to leverage diverse feature priors to control the sensitivity of the training process to such spurious correlations. Specifically, we will assume that the spurious correlation does not hold on the unlabeled data (which is likely since unlabeled data can often be collected at a larger scale). Without this assumption, the unlabeled contains no examples that could (potentially) contradict the spurious correlation (we investigate the setting where the unlabeled data is also similarly skewed in Appendix B.10). As we will see, if the problematic correlation is not easily captured by one of the priors, the corresponding model generates pseudo-labels that are inconsistent with this correlation, thus steering other models away from this correlation during co-training. Setup. We study spurious correlations in two settings. First, we create a synthetic dataset by tinting each image of the STL-10 labeled dataset in a class-specific way. This encourages models to rely on the tint, as it is highly predictive on the training set. However, this prediction rule does not generalize to the test set where this correlation is absent. Second, similar to Sagawa et al. (2020), we consider a gender classification task based on CelebA (Liu et al., 2015) where hair color (“blond” vs. “non-blond”) is predictive on the labeled data but not on the unlabeled and test data. While gender and hair color are independent attributes on the unlabeled dataset, the labeled dataset consists only of blond females and non-blond males. Similarly to the synthetic case, the labeled data encourages a prediction rule based only on hair color. See Appendix A.1 for details. Performance on datasets with spurious features. We find that, when trained only on the labeled data (where the correlation is fully predictive), both the standard and BagNet models generalize poorly in comparison to the shape-biased models (see Table 7). This behavior is expected: the spurious attribute in both datasets is color-related and hence mostly suppressed by the edge detection algorithms used to train shape-based models. 
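To make concrete how the color-based shortcut is removed for the shape-biased models, here is a sketch of the Canny preprocessing with the parameter values listed in Appendix A.2, written against the standard OpenCV API (an illustration, not necessarily the authors' exact pipeline).

```python
import cv2
import numpy as np

def canny_preprocess(image: np.ndarray) -> np.ndarray:
    """Edge-detection preprocessing used for the shape-biased (Canny) models.

    image: HxWx3 uint8 image. Returns an HxW binary edge map, so class-specific
    tints or hair-color cues are no longer visible to the downstream model.
    """
    # 5-pixel bilateral filter, sigma 75 in both coordinate and color space
    smoothed = cv2.bilateralFilter(image, 5, 75, 75)
    gray = cv2.cvtColor(smoothed, cv2.COLOR_BGR2GRAY)
    # Canny filter with hysteresis thresholds 100 and 200
    return cv2.Canny(gray, 100, 200)
```

Since only edge structure survives this step, a model trained on such inputs cannot exploit the tint or hair-color correlation, which is consistent with the results in Table 7.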
Even after self-training on the unlabeled data (where the correlation is absent), the performance of the standard and BagNet models does not improve significantly. Finally, simply ensembling self-trained models post hoc does not improve their performance. Indeed as the texture-biased and standard models are significantly less accurate than the shape-biased one, they end up lowering the overall accuracy of the ensemble (see Appendix B.8). In contrast, when we co-train a texture-biased model with a shape-biased one, the texture-biased model improves substantially. For instance, when co-trained with a Canny model, the BagNet model improves over self-training by 42% on the tinted STL-10 dataset and 27% on the CelebA dataset. This improvement can be attributed to the fact that the predictions of the shape-biased model are not consistent with the spurious correlation on the unlabeled data. Hence, by being trained on pseudolabels from that model, the BagNet model is forced to rely on alternative, non-spurious features. Moreover, particularly on CelebA, the shape-biased model also improves when co-trained with a texture-biased model. This indicates that even though the texture-biased model relies on the spurious correlation, it also captures non-spurious features that, through pseudo-labeling, improve the performance of the shape-based model. In Appendix B.9, we find that these improvements are concentrated on inputs where the spurious correlation does not hold. 6 ADDITIONAL RELATED WORK In Section 2, we discussed the most relevant prior work on implicit or explicit feature priors. Here, we discuss additional related work and how it connects to our approach. Shape-biased models. Several other methods aim to bias models towards shape-based features: input stylization Geirhos et al. (2019); Somavarapu et al. (2020); Li et al. (2021), penalizing early layer predictiveness Wang et al. (2019), jigsaw puzzles Carlucci et al. (2019); Asadi et al. (2019), dropout Shi et al. (2020), or data augmentation Hermann et al. (2020). While, in our work, we choose to suppress texture information via edge detection algorithms, any of these methods can be substituted to generate the shape-based model for our analysis. Avoiding spurious correlations. Other methods that can prevent models from learning spurious correlations include: learning representations that are simultaneously optimal across domains (Arjovsky et al., 2019), enforcing robustness to group shifts (Sagawa et al., 2020), and utilizing multiple data points corresponding to a single physical entity (Heinze-Deml & Meinshausen, 2017). Similar in spirit to our work, these methods aim to learn prediction rules that are supported by multiple views of the data. However, we do not rely on annotations or multiple sources and instead impose feature priors through the model architecture and input preprocessing. Pseudo-labeling. Since the initial proposal of pseudo-labeling for neural networks Lee et al. (2013), there has been a number of more sophisticated pseudo-labeling schemes aimed at improving the accuracy and diversity of the labels Iscen et al. (2019); Augustin & Hein (2020); Xie et al. (2020); Rizve et al. (2021); Huang et al. (2021). In our work, we focus on the simplest scheme for self-labeling—i.e., confidence based example selection. Nevertheless, most of these schemes can be directly incorporated into our framework to potentially improve its overall performance. 
A recent line of work explores self-training by analyzing it under different assumptions on the data (Mobahi et al., 2020; Wei et al., 2021; Allen-Zhu & Li, 2020; Kumar et al., 2020). Closest to our work, Chen et al. (2020b) show that self-training on unlabeled data can reduce reliance on spurious correlations under certain assumptions. In contrast, we demonstrate that by leveraging diverse feature priors, we can avoid spurious correlations even if a model heavily relies on them. Consistency regularization. In parallel to pseudo-labeling, consistency regularization is another canonical technique for leveraging unlabeled data. Here, a model is trained to be invariant to a set of input transformations. These transformations might stem from data augmentations and architecture stochasticity Laine & Aila (2017); Berthelot et al. (2019); Chen et al. (2020a); Sohn et al. (2020); Prabhu et al. (2021) or using adversarial examples Miyato et al. (2018). Co-training. One line of work studies co-training from a theoretical perspective (Nigam & Ghani, 2000; Balcan et al., 2005; Goldman & Zhou, 2000). Other work aims to improve co-training by either expanding the settings where it can be applied (Chen et al., 2011) or by improving its stability (Ma et al., 2020; Zhang & Zhou, 2011). Finally, a third line of work applies co-training to images. Since images cannot be separated into disjoint feature sets, one would apply co-training by training multiple models Han et al. (2018), either regularized to be diverse through adversarial examples Qiao et al. (2018) or each trained using a different method Yang et al. (2020). Our method is complementary to these approaches as it relies on explicit feature priors to obtain different views. 7 CONCLUSION In this work, we explored the benefits of combining feature priors with non-overlapping failure modes. By capturing complementary perspectives on the data, models trained with diverse feature priors can offset each others mistakes when combined through methods such as ensembles. Moreover, in the presence of unlabeled data, we can leverage prior diversity by jointly boostrapping models with different priors through co-training. This allows the models to correct each other during training, thus improving pseudo-labeling and controlling for correlations that do not generalize well. We believe that our work is only the first step in exploring the design space of creating, manipulating, and combining feature priors to improve generalization. In particular, our framework is quite flexible and allows for a number of different design choices, such as choosing other feature priors (cf. Sections 2 and 6), using other methods for pseudo-label selection (e.g., using uncertainty estimation (Lee et al., 2018; Rizve et al., 2021)), and combining pseudo-labels via different ensembling methods. More broadly, we believe that exploring the synthesis of explicit feature priors in new applications is an exciting avenue for further research. A EXPERIMENTAL DETAILS A.1 DATASETS For our first set of experiments (Section 4), we focus on a canonical setting where a small portion of the training set if labeled and we have access to a pool of unlabeled data. STL-10. The STL-10 Coates et al. (2011) dataset contains 5,000 training and 8,000 test images of size 96×96 from 10 classes. We designate 1,000 of the 5,000 (20%) training examples to be the labeled training set, 500 (10%) to be the validation set, and the rest are used as unlabeled data. CIFAR-10. 
The CIFAR-10 Krizhevsky (2009) dataset contains 50,000 training and 8,000 test images of size 32×32 from 10 classes. We designate 1,000 of the 50,000 (2%) training examples to be the labeled training set, 5000 (10%) to be the validation set, and the rest as unlabeled data. In both cases, we report the final performance on the standard test set of that dataset. We also create two datasets that each contain a different spurious correlation. Tinted STL-10. We reuse the STL-10 setup described above, but we add a class-specific tint to each image in the (labeled) training set. Specifically, we hand-pick a different color for each of the 10 classes and then add this color to each of the pixels (ensuring that each RGB channel remains within the valid range)—see Figure 8 for examples. This tint is only present in the labeled part of the training set, the unlabeled and test parts of the dataset are left unaltered. Biased CelebA. We consider the task of predicting gender in the CelebA Liu et al. (2015) dataset. In order to create a biased training set, we choose a random sample of 500 non-blond males and 500 blond females. We then use a balanced unlabeled dataset consisting of 1,000 random samples for each of: blond males, blond females, non-blond males, and non-blond females. We use the standard CelebA test set which consists of 12.41% blond females, 48.92% non-blond females, 0.90% blond males, and 37.77% non-blond males. (Note that a classifier predicting purely based on hair color with have an accuracy of 50.18% on that test set.) All of the datasets that we use are freely available for non-commercial research purposes. Moreover, to the best of our knowledge, they do not contain offensive content or identifiable information (other than publicly available celebrity photos). A.2 MODEL ARCHITECTURES AND INPUT PREPROCESSING For both the standard model and the models trained on images processed by edge detection algorithm, we use a standard model architecture—namely, VGG16 Simonyan & Zisserman (2015) with the addition of batch normalization Ioffe & Szegedy (2015) (often referred to as VGG16-BN). We describe the exact edge detection process as well as the architecture of the BagNet model (texture prior) below. We visualize these priors in Figure 10. Canny edge detection. Given an image, we first smooth it with a 5 pixel bilateral filter Tomasi & Manduchi (1998), with filter σ in the coordinate and color space set to 75. After smoothing, the image is converted to gray-scale. Finally, a Canny filter Canny (1986) is applied to the image, with hysteresis thresholds 100 and 200, to extract the edges. Sobel edge detection. Given an image, we first upsample it to 128×128 pixels. Then we convert it to gray-scale and apply a Gaussian blur (kernel size=5, σ = 5). The image is then passed through a Sobel filter Sobel & Feldman (1968) with a kernel size of 3 in both the horizontal and the vertical direction to extract the image gradients. BagNet. For our texture-biased model, we use a slimmed down version of the BagNet architecture from Brendel & Bethge (2019). The goal of this architecture is to limit the receptive field of the model, hence forcing it to make predictions based on local features. The exact architecture we used is shown in Figure 9. Intuitively, the top half of the network—i.e., the green and blue blocks— construct features on patches of size 20×20 for 96×96 images and 10×10 for 32×32 images. 
The rest of the network consists only of 1×1 convolutions and max-pooling, hence not utilizing the image’s spatial structure. Custom BagNet20 Custom BagNet10 A.3 TRAINING SETUP A.3.1 BASIC TRAINING We train all our models using stochastic gradient descent (SGD) with momentum (a coefficient of 0.9) and a decaying learning rate. We add weight decay regularization with a coefficient of 10−4. In terms of data augmentation, we apply random cropping with a padding of 4 pixels, random horizontal flips, and a random rotation of ±2 degrees. These transformations are applied after the edge detection processing. We train all models with a batch size of 64 for 96×96-sized images and 128 for 32×32-sized images for a total of 300 epochs. All our experiments are performed using our internal cluster which mainly consists of NVIDIA 1080 Ti GTX GPUs. Hyperparameter tuning. To ensure a fair comparison across feature priors, we selected the hyperparameters for each dataset-prior pair separately, using the held-out validation set (separate from the final test used for reporting performance). Specifically, we performed a grid search choosing the learning rate (LR) from [0.1, 0.05, 0.02, 0.01, 0.005], the number of epochs between each learning rate drop (K) from [50, 100, 300] and the factor with which the learning rate is multiplied (γ) from [0.5, 1]. The parameters chosen are shown in Table 11. We found that all models achieved nearoptimal performance strictly within the range of each hyperparameters. Thus, we did not consider a wider grid. A.4 ENSEMBLES In order to leverage prior diversity, we ensemble models trained with (potentially) different priors. We use the following ensembles: 1. Take Max: Predict based on the model assigning the highest probability on this example. 2. Average: Average the (softmax) output probabilities of the models, predict the class assigned the highest probability. 3. Rank: Each model ranks all test examples based on the probability assigned to their predicted labels. Then, for each example, we predict using the model which has a lower rank on this example. We then report the maximum of these ensemble methods in Table 3. A.5 SELF-TRAINING AND CO-TRAINING SCHEMES In the setting that we are focusing on, we are provided with a labeled dataset X and an unlabeled dataset U, where typically there is much more unlabeled data (|U| |X|). We are then choosing a set of (one or more) feature priors each of which corresponds to a different way of training a model (e.g., using edge detection preprocessing). General methodology. We start by training each of these models on the labeled dataset. Then, we combine the predictions of these models to produce pseudo-labels for the unlabeled dataset. Finally, we choose a fraction of the unlabeled data and train the models on that set using the produced pseudo-labels (in additional to the original labeled set X). This process is repeated using increasing fractions of the unlabeled dataset until, eventually, models are trained on its entirety. We refer to each such phase as an era. We include an additional 5% of the unlabeled data per era, resulting in a total of 20 eras. During each era, we use the training process described in Appendix A.3.1 without re-initializing the models (warm start). After completing this process, we train a standard model from scratch using both the labeled set and resulting pseudo-labels. The methodology used for choosing and combining pseudo-labels is described below for each scheme. Self-training. 
Since we are only training one model, we only need to decide how to choose the pseudo-labels to use for each era. We do this in the simplest way: at era t, we pick the subset Ut ⊆ U of examples that are assigned the highest probability on their predicted label. We attempt to produce a class-balanced training set by applying this process separately on each class (as predicted by the model). The pseudocode for the method is provided in Algorithm 1.
Algorithm 1: Self-training
Parameters: Number of eras T. Fraction added per era k.
Input: Labeled data X with n classes, unlabeled data U, model trained on X.
for era t ∈ 1...T do
    forward-pass U through the model to create pseudo-labels
    Ut = []
    for each class c do
        Select the kt|U|/n most confident examples from U predicted by the model as class c
        Add those examples to Ut with class c
    Re-train (warm start) the model on X ∪ Ut until convergence
Train a standard model from scratch on X ∪ UT.
Standard co-training. Here, we train multiple models (in our experiments, two) based on a common pool of pseudo-labeled examples in each era. In each era t, each model labels the unlabeled dataset U. Then, for each class, we alternate between models, adding the next most confident example predicted as that class by that model to Ut, until a fixed number of unique examples has been added for that class (5% of the size of the unlabeled dataset per era). Note that this process allows both conflicts and duplicates: if multiple models are confident about a specific example, that example may be added more than once (potentially with a different label each time). Finally, we train each model (without re-initializing) on X ∪ Ut. The pseudocode for this method can be found in Algorithm 2.
Algorithm 2: Standard Co-Training
Parameters: Number of eras T. Fraction added per era k.
Input: Labeled data X with n classes, unlabeled data U, models trained on X.
for era t ∈ 1...T do
    forward-pass U through each model to create pseudo-labels
    Ut = []
    for each class c do
        Ut(c) = []
        while the number of unique examples in Ut(c) < kt|U|/n do
            for each model m do
                Add the next most confident example predicted by m as class c to Ut(c)
        Add Ut(c) to Ut
    Re-train (warm start) each model on X ∪ Ut until convergence
Train a standard model from scratch on X ∪ UT.
B ADDITIONAL EXPERIMENTS B.1 EXPERIMENT ORGANIZATION We now provide the full experimental results used to create the plots in the main body as well as additional analysis. Specifically, in Appendix B.2 and B.3 we present the performance of individual ensemble schemes for pre-trained and self-trained models respectively. Then, in Appendix B.5 we present the performance of co-training for each combination of feature priors. In Appendix B.7 we analyse the effect that co-training has on model similarity after training. Finally, in Appendix B.8 we evaluate model ensembles on datasets with spurious correlations and in Appendix B.9 we break down the performance of co-training on the skewed CelebA dataset according to different input attributes. B.2 FULL PRE-TRAINED ENSEMBLE RESULTS In Table 3, we reported the best ensemble method for each pair of models trained with different priors on the labeled data. In Table 12, we report the full results over the individual ensembles. B.3 ENSEMBLING SELF-TRAINED MODELS In Table 13, we report the best ensemble method for pairs of self-trained models with different priors. In Table 14, we report the full results over the individual ensembles.
We find that, similar to the ensembles of models trained on the labeled data, models with diverse priors gain more from ensembling. However, co-training models with diverse priors together still outperforms ensembling self-trained models. B.4 STACKED ENSEMBLING Here we consider an ensembling technique that leverages a validation set. We implement stacking (also called blending) Töscher et al. (2009); Sill et al. (2009), which takes in the outputs of the member models as input and then trains a second model to produce the final prediction. Here, we take the logits of each model in the ensemble and train the secondary model using logistic regression on the validation set for the dataset. We report accuracies of the ensemble on the test set below. We again find that prior diversity is important for the performance of the ensemble. B.5 SELF-TRAINING AND CO-TRAINING ON STL-10 AND CIFAR-10 B.6 CO-TRAINING WITH VARYING AMOUNTS OF LABELED DATA In Table 19, we study how the efficacy of combining diverse priors through co-training changes as the number of labeled examples increases for STL-10. As one might expect, when labeled data is sparse, the feature priors learned by the models alone are relatively brittle: thus, leveraging diverse priors against each other on unlabeled data improves generalization. As the number of labeled examples increases, the models with single feature priors learn more reliable prediction rules that can already generalize, so the additional benefit of combining feature priors diminishes. However, even in settings with plentiful data, combining diverse feature priors can aid generalization if there is a spurious correlation in the labeled data (see Section 5). B.7 CORRELATION BETWEEN THE INDIVIDUAL FEATURE-BIASED MODELS AND THE FINAL STANDARD MODEL B.8 ENSEMBLES FOR SPURIOUS DATASETS In Table 21 (full table in Table 22), we ensemble the self-trained priors for the Tinted STL-10 dataset and the CelebA dataset as in Section 5. Both of these datasets have a spurious correlation based on color, which results in weak Standard and BagNet models. As a result, the ensembles with the Standard or BagNet models do not perform well on the test set. However, in Section 5, we find that co-training in this setting allows the BagNet model to improve when jointly trained with a shape model, thus boosting the final performance. B.9 BREAKDOWN OF TEST ACCURACY FOR CO-TRAINING ON CELEBA B.10 WHAT IF THE UNLABELED DATA ALSO CONTAINED THE SPURIOUS CORRELATION? In Section 5, we assume that the unlabeled data does not contain the spurious correlation present in the labeled data. This is often the case when unlabeled data can be collected through a more diverse process than labeled data (for example, by scraping the web at large scale or by passively collecting data during deployment). This assumption is important: in order to successfully steer models away from the spurious correlation during co-training, the process needs to surface examples which contradict the spurious correlation. However, if the unlabeled data is also heavily skewed, such examples might be rare or non-existent. What happens if the unlabeled data is as heavily skewed as the labeled data? We return to the setting of a spurious association between hair color and gender in CelebA. However, unlike in Section 5, we use an unlabeled dataset that also perfectly correlates hair color and gender – it contains 2,000 non-blond males and 2,000 blond females.
The unlabeled data thus has the same distribution as the labeled data, and contains no examples that reject the spurious correlation (blond males or non-blond females). Self-Training: Since the unlabeled data follows the spurious correlation between hair color and gender, the standard and BagNet models almost perfectly pseudo-label the unlabeled data. Thus, they are simply increasing the number of examples in the training dataset while maintaining the same overall distribution. Self-training thus does not change the accuracy of models with these priors significantly. In contrast, in the setting in Section 5, there were examples in the unlabeled data which did not align with the spurious correlation (blond males and non-blond females). Since they relied mostly on hair color, the standard and BagNet models actively mislabeled these examples (i.e., by labeling a blond male as female). Training on these erroneous pseudo-labels actively suppressed any features that were not hair color, causing the standard and BagNet models to perform worse after self-training. Co-Training: In contrast, when performing co-training with the Canny and BagNet priors, the Canny model (which cannot detect hair color) will make mistakes on the unlabeled dataset. These mistakes are inconsistent with a reliance on hair color: due to this regularization, the BagNet's accuracy improves from 69.35% to 76.52%. Overall, though the gain is not as significant as in the setting with a balanced unlabeled dataset, the Canny + BagNet co-trained model can mitigate the pitfalls of the BagNet's reliance on hair color and outperform even the Canny self-trained model.
1. What is the focus of the paper in computer vision? 2. What are the strengths and weaknesses of the proposed approach in solving various computer vision tasks? 3. How does the reviewer assess the technical novelty and framework of the paper? 4. What are the limitations of the experimental evaluation and comparisons in the paper? 5. How does the reviewer perceive the overall quality of the paper in relation to the standards of ICLR?
Summary Of The Paper Review
Summary Of The Paper The paper is an empirical study of combining multiple feature priors along with some pre-processing to solve a variety of computer vision tasks. Review Positives The study seems to be interesting and maybe useful for practitioners. Concerns Very meagre contribution in terms of technical novelty and framework. Looks like an empirical study without much conviction and direction. Experimental evaluation and comparisons seem dated, not state of the art. The work is very much below the expected standards of ICLR.
ICLR
Title Combining Diverse Feature Priors Abstract To improve model generalization, model designers often restrict the features that their models use, either implicitly or explicitly. In this work, we explore the design space of leveraging such feature priors by viewing them as distinct perspectives on the data. Specifically, we find that models trained with diverse sets of feature priors have less overlapping failure modes, and can thus be combined more effectively. Moreover, we demonstrate that jointly training such models on additional (unlabeled) data allows them to correct each other’s mistakes, which, in turn, leads to better generalization and resilience to spurious correlations. 1 INTRODUCTION The driving force behind deep learning’s success is its ability to automatically discover predictive features in complex high-dimensional datasets. In fact, these features can generalize beyond the specific task at hand, thus enabling models to transfer to other (yet similar) tasks (Donahue et al., 2014). At the same time, the set of features that the model learns has a large impact on how well it will perform on unseen inputs, especially in the presence of distribution shift (Ponce et al., 2006; Torralba & Efros, 2011; Sagawa et al., 2020) or spurious correlations (Heinze-Deml & Meinshausen, 2017; Beery et al., 2018; Meinshausen, 2018). Motivated by this, recent work focuses on encouraging specific modes of behavior by preventing the models from relying on certain features. Examples include suppressing texture features (Geirhos et al., 2019; Wang et al., 2019), avoiding ℓp-non-robust features (Tsipras et al., 2019; Engstrom et al., 2019), or utilizing different parts of the frequency spectrum (Yin et al., 2019). At a high level, these methods can be thought of as ways of imposing a feature prior on the learning process, so as to bias the model towards acquiring features that generalize better. This makes the choice of the feature prior to impose a key design decision. The goal of this work is thus to explore the underlying design space of feature priors and, specifically, to understand: How can we effectively harness the diversity of feature priors? OUR CONTRIBUTIONS In this paper, we cast diverse feature priors as different perspectives on the data and study how they can complement each other. In particular, we aim to understand whether training with distinct priors results in models with non-overlapping failure modes and how such models can be combined to improve generalization. This is particularly relevant in settings where the data is unreliable—e.g., when the training data contains a spurious correlation. From this perspective, we focus our study on two priors that arise naturally in the context of image classification, shape and texture, and investigate the following: Feature diversity. We demonstrate that training models with diverse feature priors results in them making mistakes on different parts of the data distribution, even if they perform similarly in terms of overall accuracy. Further, one can harness this diversity to build model ensembles that are more accurate than those based on combining models which have the same feature prior. Combining feature priors on unlabeled data. When learning from unlabeled data, the choice of feature prior can be especially important. For strategies such as self-training, sub-optimal prediction rules learned from sparse labeled data can be reinforced when pseudo-labeling the unlabeled data.
We show that, in such settings, we can leverage the diversity of feature priors to address these issues. By jointly training models with different feature priors on the unlabeled data through the framework of co-training Blum & Mitchell (1998), we find that the models can correct each other’s mistakes to learn prediction rules that generalize better. Learning in the presence of spurious correlations. Finally, we want to understand whether combining diverse priors during training, as described above, can prevent models from relying on correlations that are spurious, i.e., correlations that do not hold on the actual distribution of interest. To model such scenarios, we consider a setting where a spurious correlation is present in the training data but we also have access to (unlabeled) data where this correlation does not hold. In this setting, we find that co-training models with diverse feature priors can actually steer them away from such correlations and thus enable them to generalize to the underlying distribution. Overall, our findings highlight the potential of incorporating distinct feature priors into the training process. We believe that further work along this direction will lead us to models that generalize more reliably. 2 BACKGROUND: FEATURE PRIORS IN COMPUTER VISION When learning from structurally complex data, such as images, relying on raw input features alone (e.g., pixels) is not particularly useful. There has thus been a long line of work on extracting input patterns that can be more effective for prediction. While early approaches, such as SIFT (Lowe, 1999) and HOG (Dalal & Triggs, 2005), leveraged hand-crafted features, these have by now been largely replaced by features that are automatically learned in an end-to-end fashion (Krizhevsky, 2009; Ciregan et al., 2012; Krizhevsky et al., 2012). Nevertheless, even when features are learned, model designers still tune their models to better suit a particular task via changes in the architecture or training methodology. Such modifications can be thought of as imposing feature priors, i.e., priors that bias a model towards a particular set of features. One prominent example here is convolutional neural networks, which are biased towards learning a hierarchy of localized features Fukushima (1980); LeCun et al. (1989). Indeed, such a convolutional prior can be quite powerful: it is sufficient to enable many image synthesis tasks without any training Ulyanov et al. (2017). More recently, there has been work exploring the impact of explicitly restricting the set of features utilized by the model. For instance, Geirhos et al. (2019) demonstrate that training models on stylized inputs (and hence suppressing texture information) can improve model robustness to common corruptions. In a similar vein, Wang et al. (2019) penalize the predictive power of local features to learn shape-biased models that generalize better between image styles. A parallel line of work focuses on training models to be robust to small, worst-case input perturbations using, for example, adversarial training Goodfellow et al. (2015); Madry et al. (2018) or randomized smoothing (Lecuyer et al., 2019; Cohen et al., 2019).
Such training biases these models away from non-robust features (Tsipras et al., 2019; Ilyas et al., 2019; Engstrom et al., 2019), which tends to result in them being more aligned with human perception (Tsipras et al., 2019; Kaur et al., 2019), more resilient to certain input corruptions (Ford et al., 2019; Kireev et al., 2021), and better suited for transfer to downstream tasks Utrera et al. (2020); Salman et al. (2020). 3 FEATURE PRIORS AS DIFFERENT PERSPECTIVES As we discussed, the choice of feature prior can have a large effect on what features a model relies on and, by extension, on how well it generalizes to unseen inputs. In fact, one can view such priors as distinct perspectives on the data, capturing different information about the input. In this section, we provide evidence to support this view; specifically, we examine a case study on a pair of feature priors that arise naturally in the context of image classification: shape and texture. 3.1 TRAINING SHAPE- AND TEXTURE-BIASED MODELS In order to train shape- and texture-biased models, we either pre-process the model input or modify the model architecture as follows: Shape-biased models. To suppress texture information in the images, we pre-process our inputs by applying an edge detection algorithm. We consider two such canonical algorithms: the Canny algorithm Ding & Goshtasby (2001) which produces a binary edge mask, and the Sobel algorithm Sobel & Feldman (1968) which provides a softer edge detection, hence retaining some texture information (see Figures 1b and 1c). Texture-biased models. To prevent the model from relying on the global structure of the image, we utilize a variant of the BagNet architecture Brendel & Bethge (2019). This architecture deliberately limits the receptive field of the model, thus forcing it to rely on local features (see Figure 1d). We visualize all of these priors in Figure 1 and provide implementation details in Appendix A. 3.2 DIVERSITY OF FEATURE-BIASED MODELS After training models with shape and texture biases as outlined above, we evaluate whether these models indeed capture complementary information about the input. Specifically, we train models on a small subset (100 examples per class) of the CIFAR-10 (Krizhevsky, 2009) and STL-10 (Coates et al., 2011) datasets, and measure the correlation between which test examples they correctly classify. We find that pairs consisting of a shape-biased model and a texture-biased model (i.e., Canny and BagNet, or Sobel and BagNet) indeed have the least correlated predictions—cf. Table 2. In other words, the mistakes that these models make are more diverse than those made by identical models trained from different random initializations. At the same time, different shape-biased models (Sobel and Canny) are relatively well-correlated with each other, which corroborates the fact that models trained on similar features of the input are likely to make similar mistakes. Model ensembles. Having shown that training models with these feature priors results in diverse prediction rules, we examine whether we can now combine them to improve our generalization. The canonical approach for doing so is to incorporate these models into an ensemble. We find that the diversity of models trained with different feature priors indeed directly translates into an improved performance when combining them into an ensemble—cf. Table 3.
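As a side note on the measurement above, the prediction alignment reported in Table 2 (and referenced again below) could be computed along the following lines. The excerpt does not pin down the exact statistic, so treating it as the Pearson correlation between the two models' per-example correctness indicators is our assumption, not the paper's stated definition.

```python
import numpy as np

def prediction_alignment(preds_a, preds_b, labels):
    """Correlation between which test examples two models classify correctly.

    preds_a, preds_b, labels: 1-D integer arrays of equal length.
    Returns the Pearson correlation of the two binary correctness vectors
    (one possible instantiation of the alignment measure in Table 2).
    """
    correct_a = (np.asarray(preds_a) == np.asarray(labels)).astype(float)
    correct_b = (np.asarray(preds_b) == np.asarray(labels)).astype(float)
    return np.corrcoef(correct_a, correct_b)[0, 1]
```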
In fact, we find that the performance of the ensemble is tightly connected to the prediction similarity of its constituents (as measured in Table 2), i.e., more diverse ensembles tend to perform better. For instance, the best ensemble for the STL-10 dataset is the one combining a shape-biased (Canny) and a texture-biased model (BagNet), which were the models with the least aligned predictions. 4 COMBINING DIVERSE PRIORS ON UNLABELED DATA In the previous section, we saw that training models with different feature priors (e.g., shape- and texture-biased models) can lead to prediction rules with less overlapping failure modes—which, in turn, can lead to more effective model ensembles. However, ensembles only combine model predictions post hoc and thus cannot take advantage of diversity during the training process. In this section, we instead focus on utilizing diversity during training. Specifically, we will leverage the diversity introduced through feature priors in the context of self-training Lee et al. (2013)—a framework commonly used when the labeled data is insufficient to learn a well-generalizing model. This framework utilizes unlabeled data, which is then pseudo-labeled using an existing model and used for further training. While such methods can often improve the overall model performance, they suffer from a significant drawback: models tend to reinforce suboptimal prediction rules even when these rules do not generalize to the underlying distribution Arazo et al. (2020). Our goal here is thus to leverage diverse feature priors to address this exact shortcoming. Specifically, we will jointly train models with different priors on the unlabeled data through the framework of co-training Blum & Mitchell (1998). Since these models capture complementary information about the input (cf. Table 2), we expect them to correct each other’s mistakes and improve their prediction rules. As we will see in this section, this approach can indeed have a significant impact on the performance of the resulting model, outperforming ensembles that combine such models only at evaluation time—see summary in Figure 4. Setup. We base our analysis on the CIFAR-10 and STL-10 datasets. Specifically, we treat a small fraction of the training set as labeled examples (100 examples per class), another fraction as our validation set for tuning hyperparameters (10% of the total training examples), and the rest as unlabeled data. We report our results on the standard test set of each dataset. (See Appendix A for experimental details, and Appendix B.6 for experiments with varying levels of labeled data.) 4.1 SELF-TRAINING AND ENSEMBLES Before outlining our method for jointly training models with multiple priors, we first describe the standard approach to self-training a single model. At a high level, the predictions of the model on the unlabeled data are treated as correct labels and are then used to further train the same model Lee et al. (2013); Iscen et al. (2019); Zou et al. (2019); Xie et al. (2020). The underlying intuition is that the classifier will predict the correct labels for that data better than chance, and thus these pseudo-labels can be used to expand the training set. In practice, however, these pseudo-labels tend to be noisy. Thus, a common approach is to only use the labels to which the model assigns the highest probability Lee et al. (2013). This process is repeated, self-training on increasingly larger fractions of the unlabeled data until all of it is used. We refer to each such training phase as an era.
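A minimal sketch of the confidence-based, class-balanced selection step described above; the helper name and the NumPy interface are ours, not the paper's code.

```python
import numpy as np

def select_confident(probs, per_class):
    """Pick each class's `per_class` most confident unlabeled examples.

    probs: (num_unlabeled, num_classes) softmax outputs of the current model.
    Returns (indices, pseudo_labels) for the selected examples.
    """
    preds = probs.argmax(axis=1)
    conf = probs.max(axis=1)
    indices, pseudo_labels = [], []
    for c in range(probs.shape[1]):
        members = np.where(preds == c)[0]                 # examples predicted as class c
        top = members[np.argsort(-conf[members])][:per_class]
        indices.extend(top.tolist())
        pseudo_labels.extend([c] * len(top))
    return np.array(indices), np.array(pseudo_labels)
```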
Ensembles of diverse self-trained models. Similarly to our results in Table 3, we find that ensembles composed of self-trained models with diverse feature priors outperform those that use the same prior from different random initializations (see Figure 4 for a summary and Appendix B.3 for the full results). This demonstrates that, after self-training, these models continue to capture complementary information about the input that can be leveraged to improve performance. 4.2 CO-TRAINING MODELS WITH DIFFERENT FEATURE PRIORS Moving beyond self-training with a single feature prior, our goal in this section is to leverage multiple feature priors by jointly training them on the unlabeled data. This idea naturally fits into the framework of co-training: a method used to learn from unlabeled data when inputs correspond to multiple independent sets of features Blum & Mitchell (1998). Concretely, we first train a model for each feature prior. Then, we collect the pseudo-labels on the unlabeled data that were assigned the highest probability for each model—including duplicates with potentially different labels—to form a new training set which we use for further training. Similarly to the self-training case, we repeat this process over several eras, increasing the fraction of the unlabeled dataset used at each era. Intuitively, this iterative process allows the models to bootstrap off of each other’s predictions, learning correlations that they might fail to learn from the labeled data alone. At the end of this process, we are left with two models, one for each prior, which we combine into a single classifier by training a standard model from scratch on the combined pseudo-labels. We provide a more detailed explanation of the methodology in Appendix A.5. Co-training performance. We find that co-training with shape- and texture-based priors can significantly improve the test accuracy of the final model compared to self-training with any of the priors alone (Table 5). This is despite the fact that, when using self-training alone, the standard model outperforms all other models (Column 4, Table 5). Moreover, co-training models with diverse priors improves upon simply combining them in an ensemble (Appendix B.3). In Appendix B.5, we report the performance of co-training with every pair of priors. We find that co-training with shape- and texture-based priors together (Canny + BagNet for STL-10 and Sobel + BagNet for CIFAR-10) outperforms every other prior combination. Note that this is the case even though, when only ensembling models with different priors (cf. Table 3 and Appendix B.3), Standard + Sobel is consistently the best-performing pair for CIFAR-10. Overall, these results indicate that the diversity of shape- and texture-biased models allows them to improve each other over training. Additionally, we find that, even when training a single model on the pseudo-labels of another model, prior diversity can still help. Specifically, we compare the performance of a standard model trained from scratch using pseudo-labels from various self-trained models (Column 5, Table 5). In this setting, using a self-trained shape- or texture-biased model for pseudo-labeling outperforms using a self-trained standard model. This is despite the fact that, in isolation, the standard model has higher accuracy than the shape- or texture-biased ones (Column 4, Table 5).
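Before turning to the alignment analysis, here is a rough sketch of how one era's pseudo-label pool could be formed from two (or more) models. It simplifies Algorithm 2 of Appendix A.5 by giving each model a fixed per-class quota instead of alternating between models until a unique-example count is reached; the names and interface are illustrative only.

```python
import numpy as np

def cotrain_pool(probs_by_model, per_class):
    """Pool confident pseudo-labels from several models for one co-training era.

    probs_by_model: list of (num_unlabeled, num_classes) softmax arrays.
    Returns a list of (example_index, pseudo_label) pairs; the same example may
    appear several times, possibly with conflicting labels, as in the text above.
    Simplification: fixed per-model quota per class, no alternation bookkeeping.
    """
    pool = []
    num_classes = probs_by_model[0].shape[1]
    for probs in probs_by_model:
        preds = probs.argmax(axis=1)
        conf = probs.max(axis=1)
        for c in range(num_classes):
            members = np.where(preds == c)[0]
            top = members[np.argsort(-conf[members])][:per_class]
            pool.extend((int(i), c) for i in top)
    return pool
```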
Model alignment over co-training. To further explore the dynamics of co-training, we evaluate how the correlation between model predictions evolves as the eras progress in Figure 6 (using the prediction alignment measure of Table 2). We find that shape- and texture-biased models exhibit low correlation at the start of co-training, but this correlation increases as co-training progresses. This is in contrast to self-training each model on its own, where the correlation remains relatively low. It is also worth noting that the correlation appears to plateau at a lower value when co-training models with distinct feature priors as opposed to co-training two standard models. Finally, we find that a standard model trained on the pseudo-labels of other models correlates well with the models themselves (see Appendix B.7). Overall, these findings indicate that models trained on each other’s pseudo-labels end up behaving more similarly. 5 USING CO-TRAINING TO AVOID SPURIOUS CORRELATIONS A major challenge when training models for real-world deployment is avoiding spurious correlations: associations which are predictive on the training data but not valid for the actual task. Since models are typically trained to maximize training accuracy, they are quite likely to rely on such spurious correlations Gururangan et al. (2018); Beery et al. (2018); Geirhos et al. (2020); Xiao et al. (2020). In this section, our goal is to leverage diverse feature priors to control the sensitivity of the training process to such spurious correlations. Specifically, we will assume that the spurious correlation does not hold on the unlabeled data (which is likely since unlabeled data can often be collected at a larger scale). Without this assumption, the unlabeled data might contain no examples that could contradict the spurious correlation (we investigate the setting where the unlabeled data is also similarly skewed in Appendix B.10). As we will see, if the problematic correlation is not easily captured by one of the priors, the corresponding model generates pseudo-labels that are inconsistent with this correlation, thus steering other models away from this correlation during co-training. Setup. We study spurious correlations in two settings. First, we create a synthetic dataset by tinting each image of the STL-10 labeled dataset in a class-specific way. This encourages models to rely on the tint, as it is highly predictive on the training set. However, this prediction rule does not generalize to the test set where this correlation is absent. Second, similar to Sagawa et al. (2020), we consider a gender classification task based on CelebA (Liu et al., 2015) where hair color (“blond” vs. “non-blond”) is predictive on the labeled data but not on the unlabeled and test data. While gender and hair color are independent attributes on the unlabeled dataset, the labeled dataset consists only of blond females and non-blond males. Similarly to the synthetic case, the labeled data encourages a prediction rule based only on hair color. See Appendix A.1 for details. Performance on datasets with spurious features. We find that, when trained only on the labeled data (where the correlation is fully predictive), both the standard and BagNet models generalize poorly in comparison to the shape-biased models (see Table 7). This behavior is expected: the spurious attribute in both datasets is color-related and hence mostly suppressed by the edge detection algorithms used to train shape-based models.
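As an aside on the synthetic setup above, the class-specific tint could be applied as in the sketch below. The actual tint colors are hand-picked in the paper and are not specified in this excerpt, so the values here are placeholders.

```python
import numpy as np

# Placeholder per-class tints (RGB offsets); the paper hand-picks one color per class.
CLASS_TINTS = np.array([
    [40, 0, 0], [0, 40, 0], [0, 0, 40], [40, 40, 0], [40, 0, 40],
    [0, 40, 40], [60, 20, 0], [0, 20, 60], [20, 60, 0], [30, 30, 30],
], dtype=np.int16)

def tint_image(image, label):
    """Add a class-specific color to every pixel, clipping channels to [0, 255].

    image: (H, W, 3) uint8 array; label: integer class index in [0, 10).
    """
    tinted = image.astype(np.int16) + CLASS_TINTS[label]
    return np.clip(tinted, 0, 255).astype(np.uint8)
```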
Even after self-training on the unlabeled data (where the correlation is absent), the performance of the standard and BagNet models does not improve significantly. Finally, simply ensembling self-trained models post hoc does not improve their performance. Indeed, as the texture-biased and standard models are significantly less accurate than the shape-biased one, they end up lowering the overall accuracy of the ensemble (see Appendix B.8). In contrast, when we co-train a texture-biased model with a shape-biased one, the texture-biased model improves substantially. For instance, when co-trained with a Canny model, the BagNet model improves over self-training by 42% on the tinted STL-10 dataset and 27% on the CelebA dataset. This improvement can be attributed to the fact that the predictions of the shape-biased model are not consistent with the spurious correlation on the unlabeled data. Hence, by being trained on pseudo-labels from that model, the BagNet model is forced to rely on alternative, non-spurious features. Moreover, particularly on CelebA, the shape-biased model also improves when co-trained with a texture-biased model. This indicates that even though the texture-biased model relies on the spurious correlation, it also captures non-spurious features that, through pseudo-labeling, improve the performance of the shape-based model. In Appendix B.9, we find that these improvements are concentrated on inputs where the spurious correlation does not hold. 6 ADDITIONAL RELATED WORK In Section 2, we discussed the most relevant prior work on implicit or explicit feature priors. Here, we discuss additional related work and how it connects to our approach. Shape-biased models. Several other methods aim to bias models towards shape-based features: input stylization Geirhos et al. (2019); Somavarapu et al. (2020); Li et al. (2021), penalizing early layer predictiveness Wang et al. (2019), jigsaw puzzles Carlucci et al. (2019); Asadi et al. (2019), dropout Shi et al. (2020), or data augmentation Hermann et al. (2020). While, in our work, we choose to suppress texture information via edge detection algorithms, any of these methods can be substituted to generate the shape-based model for our analysis. Avoiding spurious correlations. Other methods that can prevent models from learning spurious correlations include: learning representations that are simultaneously optimal across domains (Arjovsky et al., 2019), enforcing robustness to group shifts (Sagawa et al., 2020), and utilizing multiple data points corresponding to a single physical entity (Heinze-Deml & Meinshausen, 2017). Similar in spirit to our work, these methods aim to learn prediction rules that are supported by multiple views of the data. However, we do not rely on annotations or multiple sources and instead impose feature priors through the model architecture and input preprocessing. Pseudo-labeling. Since the initial proposal of pseudo-labeling for neural networks Lee et al. (2013), there have been a number of more sophisticated pseudo-labeling schemes aimed at improving the accuracy and diversity of the labels Iscen et al. (2019); Augustin & Hein (2020); Xie et al. (2020); Rizve et al. (2021); Huang et al. (2021). In our work, we focus on the simplest scheme for self-labeling—i.e., confidence-based example selection. Nevertheless, most of these schemes can be directly incorporated into our framework to potentially improve its overall performance.
A recent line of work explores self-training by analyzing it under different assumptions on the data (Mobahi et al., 2020; Wei et al., 2021; Allen-Zhu & Li, 2020; Kumar et al., 2020). Closest to our work, Chen et al. (2020b) show that self-training on unlabeled data can reduce reliance on spurious correlations under certain assumptions. In contrast, we demonstrate that by leveraging diverse feature priors, we can avoid spurious correlations even if a model heavily relies on them. Consistency regularization. In parallel to pseudo-labeling, consistency regularization is another canonical technique for leveraging unlabeled data. Here, a model is trained to be invariant to a set of input transformations. These transformations might stem from data augmentations and architecture stochasticity Laine & Aila (2017); Berthelot et al. (2019); Chen et al. (2020a); Sohn et al. (2020); Prabhu et al. (2021) or from adversarial examples Miyato et al. (2018). Co-training. One line of work studies co-training from a theoretical perspective (Nigam & Ghani, 2000; Balcan et al., 2005; Goldman & Zhou, 2000). Other work aims to improve co-training by either expanding the settings where it can be applied (Chen et al., 2011) or by improving its stability (Ma et al., 2020; Zhang & Zhou, 2011). Finally, a third line of work applies co-training to images. Since images cannot be separated into disjoint feature sets, one would apply co-training by training multiple models Han et al. (2018), either regularized to be diverse through adversarial examples Qiao et al. (2018) or each trained using a different method Yang et al. (2020). Our method is complementary to these approaches as it relies on explicit feature priors to obtain different views. 7 CONCLUSION In this work, we explored the benefits of combining feature priors with non-overlapping failure modes. By capturing complementary perspectives on the data, models trained with diverse feature priors can offset each other’s mistakes when combined through methods such as ensembles. Moreover, in the presence of unlabeled data, we can leverage prior diversity by jointly bootstrapping models with different priors through co-training. This allows the models to correct each other during training, thus improving pseudo-labeling and controlling for correlations that do not generalize well. We believe that our work is only the first step in exploring the design space of creating, manipulating, and combining feature priors to improve generalization. In particular, our framework is quite flexible and allows for a number of different design choices, such as choosing other feature priors (cf. Sections 2 and 6), using other methods for pseudo-label selection (e.g., using uncertainty estimation (Lee et al., 2018; Rizve et al., 2021)), and combining pseudo-labels via different ensembling methods. More broadly, we believe that exploring the synthesis of explicit feature priors in new applications is an exciting avenue for further research. A EXPERIMENTAL DETAILS A.1 DATASETS For our first set of experiments (Section 4), we focus on a canonical setting where a small portion of the training set is labeled and we have access to a pool of unlabeled data. STL-10. The STL-10 Coates et al. (2011) dataset contains 5,000 training and 8,000 test images of size 96×96 from 10 classes. We designate 1,000 of the 5,000 (20%) training examples to be the labeled training set, 500 (10%) to be the validation set, and the rest are used as unlabeled data. CIFAR-10.
The CIFAR-10 Krizhevsky (2009) dataset contains 50,000 training and 10,000 test images of size 32×32 from 10 classes. We designate 1,000 of the 50,000 (2%) training examples to be the labeled training set, 5,000 (10%) to be the validation set, and the rest as unlabeled data. In both cases, we report the final performance on the standard test set of that dataset. We also create two datasets that each contain a different spurious correlation. Tinted STL-10. We reuse the STL-10 setup described above, but we add a class-specific tint to each image in the (labeled) training set. Specifically, we hand-pick a different color for each of the 10 classes and then add this color to each of the pixels (ensuring that each RGB channel remains within the valid range)—see Figure 8 for examples. This tint is only present in the labeled part of the training set; the unlabeled and test parts of the dataset are left unaltered. Biased CelebA. We consider the task of predicting gender in the CelebA Liu et al. (2015) dataset. In order to create a biased training set, we choose a random sample of 500 non-blond males and 500 blond females. We then use a balanced unlabeled dataset consisting of 1,000 random samples for each of: blond males, blond females, non-blond males, and non-blond females. We use the standard CelebA test set which consists of 12.41% blond females, 48.92% non-blond females, 0.90% blond males, and 37.77% non-blond males. (Note that a classifier predicting purely based on hair color will have an accuracy of 50.18% on that test set.) All of the datasets that we use are freely available for non-commercial research purposes. Moreover, to the best of our knowledge, they do not contain offensive content or identifiable information (other than publicly available celebrity photos). A.2 MODEL ARCHITECTURES AND INPUT PREPROCESSING For both the standard model and the models trained on images processed by an edge detection algorithm, we use a standard model architecture—namely, VGG16 Simonyan & Zisserman (2015) with the addition of batch normalization Ioffe & Szegedy (2015) (often referred to as VGG16-BN). We describe the exact edge detection process as well as the architecture of the BagNet model (texture prior) below. We visualize these priors in Figure 10. Canny edge detection. Given an image, we first smooth it with a 5-pixel bilateral filter Tomasi & Manduchi (1998), with filter σ in the coordinate and color space set to 75. After smoothing, the image is converted to gray-scale. Finally, a Canny filter Canny (1986) is applied to the image, with hysteresis thresholds 100 and 200, to extract the edges. Sobel edge detection. Given an image, we first upsample it to 128×128 pixels. Then we convert it to gray-scale and apply a Gaussian blur (kernel size=5, σ = 5). The image is then passed through a Sobel filter Sobel & Feldman (1968) with a kernel size of 3 in both the horizontal and the vertical direction to extract the image gradients.
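The two edge-detection pipelines above can be written down directly with OpenCV; the function names are ours, and the parameter values follow the text (5-pixel bilateral filter with both sigmas at 75, Canny thresholds 100/200, Gaussian blur with kernel size 5 and sigma 5, Sobel kernel size 3). This is a sketch of the preprocessing, not the paper's released code.

```python
import cv2
import numpy as np

def canny_view(image_rgb):
    """Canny prior: bilateral smoothing, gray-scale conversion, binary edge mask."""
    smoothed = cv2.bilateralFilter(image_rgb, 5, 75, 75)   # d=5, color/space sigma=75
    gray = cv2.cvtColor(smoothed, cv2.COLOR_RGB2GRAY)
    return cv2.Canny(gray, 100, 200)                       # hysteresis thresholds

def sobel_view(image_rgb):
    """Sobel prior: upsample, gray-scale, Gaussian blur, then image gradients."""
    up = cv2.resize(image_rgb, (128, 128))
    gray = cv2.cvtColor(up, cv2.COLOR_RGB2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 5)
    gx = cv2.Sobel(blurred, cv2.CV_64F, 1, 0, ksize=3)     # horizontal gradient
    gy = cv2.Sobel(blurred, cv2.CV_64F, 0, 1, ksize=3)     # vertical gradient
    return np.sqrt(gx ** 2 + gy ** 2)
```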
BagNet. For our texture-biased model, we use a slimmed-down version of the BagNet architecture from Brendel & Bethge (2019). The goal of this architecture is to limit the receptive field of the model, hence forcing it to make predictions based on local features. The exact architecture we used is shown in Figure 9. Intuitively, the top half of the network—i.e., the green and blue blocks—construct features on patches of size 20×20 for 96×96 images and 10×10 for 32×32 images. The rest of the network consists only of 1×1 convolutions and max-pooling, hence not utilizing the image’s spatial structure. (Figure 9: Custom BagNet20 and Custom BagNet10 architectures.) A.3 TRAINING SETUP A.3.1 BASIC TRAINING We train all our models using stochastic gradient descent (SGD) with momentum (a coefficient of 0.9) and a decaying learning rate. We add weight decay regularization with a coefficient of 10^-4. In terms of data augmentation, we apply random cropping with a padding of 4 pixels, random horizontal flips, and a random rotation of ±2 degrees. These transformations are applied after the edge detection processing. We train all models with a batch size of 64 for 96×96-sized images and 128 for 32×32-sized images for a total of 300 epochs. All our experiments are performed using our internal cluster, which mainly consists of NVIDIA GTX 1080 Ti GPUs. Hyperparameter tuning. To ensure a fair comparison across feature priors, we selected the hyperparameters for each dataset-prior pair separately, using the held-out validation set (separate from the final test set used for reporting performance). Specifically, we performed a grid search choosing the learning rate (LR) from [0.1, 0.05, 0.02, 0.01, 0.005], the number of epochs between each learning rate drop (K) from [50, 100, 300] and the factor with which the learning rate is multiplied (γ) from [0.5, 1]. The parameters chosen are shown in Table 11. We found that all models achieved near-optimal performance strictly within the range of each hyperparameter. Thus, we did not consider a wider grid. A.4 ENSEMBLES In order to leverage prior diversity, we ensemble models trained with (potentially) different priors. We use the following ensembles: 1. Take Max: Predict based on the model assigning the highest probability to this example. 2. Average: Average the (softmax) output probabilities of the models, and predict the class assigned the highest probability. 3. Rank: Each model ranks all test examples based on the probability assigned to their predicted labels. Then, for each example, we predict using the model which has a lower rank on this example. We then report the maximum of these ensemble methods in Table 3. A.5 SELF-TRAINING AND CO-TRAINING SCHEMES In the setting that we are focusing on, we are provided with a labeled dataset X and an unlabeled dataset U, where typically there is much more unlabeled data (|U| ≫ |X|). We then choose a set of (one or more) feature priors, each of which corresponds to a different way of training a model (e.g., using edge detection preprocessing). General methodology. We start by training each of these models on the labeled dataset. Then, we combine the predictions of these models to produce pseudo-labels for the unlabeled dataset. Finally, we choose a fraction of the unlabeled data and train the models on that set using the produced pseudo-labels (in addition to the original labeled set X). This process is repeated using increasing fractions of the unlabeled dataset until, eventually, models are trained on its entirety. We refer to each such phase as an era. We include an additional 5% of the unlabeled data per era, resulting in a total of 20 eras. During each era, we use the training process described in Appendix A.3.1 without re-initializing the models (warm start). After completing this process, we train a standard model from scratch using both the labeled set and resulting pseudo-labels. The methodology used for choosing and combining pseudo-labels is described below for each scheme.
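For concreteness, the three ensembling rules of A.4 above could be implemented on precomputed softmax outputs as follows. The reading of the Rank rule (each model ranks examples by the confidence of its own prediction, and an example is assigned to the model that ranks it higher) is our interpretation of the description; the function names are illustrative.

```python
import numpy as np

def ensemble_take_max(probs_list):
    """Predict with the model that assigns the highest probability to its label."""
    probs = np.stack(probs_list)                 # (num_models, N, num_classes)
    best_model = probs.max(axis=2).argmax(axis=0)
    return probs[best_model, np.arange(probs.shape[1])].argmax(axis=1)

def ensemble_average(probs_list):
    """Average the softmax outputs, then predict the most probable class."""
    return np.mean(probs_list, axis=0).argmax(axis=1)

def ensemble_rank(probs_list):
    """Each model ranks examples by the confidence of its own prediction;
    predict with the model that ranks the example higher (lower rank index)."""
    probs = np.stack(probs_list)
    conf = probs.max(axis=2)                     # (num_models, N)
    ranks = np.argsort(np.argsort(-conf, axis=1), axis=1)
    best_model = ranks.argmin(axis=0)
    return probs[best_model, np.arange(probs.shape[1])].argmax(axis=1)
```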
Self-training. Since we are only training one model, we only need to decide how to choose the pseudo-labels to use for each era. We do this in the simplest way: at era t, we pick the subset Ut ⊆ U of examples that are assigned the highest probability on their predicted label. We attempt to produce a class-balanced training set by applying this process separately on each class (as predicted by the model). The pseudocode for the method is provided in Algorithm 1.

Algorithm 1: Self-training
Parameters: Number of eras T. Fraction added per era k.
Input: Labeled data X with n classes, unlabeled data U, model trained on X.
for era t ∈ 1...T do
    forward-pass U through the model to create pseudo-labels
    Ut = []
    for each class c do
        Select the kt|U|/n most confident examples from U predicted by the model as class c
        Add those examples to Ut with class c
    Re-train (warm start) the model on X ∪ Ut until convergence
Train a standard model from scratch on X ∪ UT.

Standard co-training. Here, we train multiple models (in our experiments two) based on a common pool of pseudo-labeled examples in each era. In each era t, each model labels the unlabeled dataset U. Then, for each class, we alternate between models, adding the next most confident example predicted as that class for that model to Ut, until a fixed number of unique examples has been added for that class (5% of the size of the unlabeled dataset per era). Note that this process allows both conflicts and duplicates: if multiple models are confident about a specific example, that example may be added more than once (potentially with a different label each time). Finally, we train each model (without re-initializing) on X ∪ Ut. The pseudocode for this method can be found in Algorithm 2.

Algorithm 2: Standard Co-Training
Parameters: Number of eras T. Fraction added per era k.
Input: Labeled data X with n classes, unlabeled data U, models trained on X.
for era t ∈ 1...T do
    forward-pass U through each model to create pseudo-labels
    Ut = []
    for each class c do
        Ut(c) = []
        while the number of unique examples in Ut(c) < kt|U|/n do
            for each model m do
                Add the next most confident example predicted by m as class c to Ut(c)
        Add Ut(c) to Ut
    Re-train (warm start) each model on X ∪ Ut until convergence
Train a standard model from scratch on X ∪ UT.

B ADDITIONAL EXPERIMENTS B.1 EXPERIMENT ORGANIZATION We now provide the full experimental results used to create the plots in the main body as well as additional analysis. Specifically, in Appendix B.2 and B.3 we present the performance of individual ensemble schemes for pre-trained and self-trained models respectively. Then, in Appendix B.5 we present the performance of co-training for each combination of feature priors. In Appendix B.7 we analyse the effect that co-training has on model similarity after training. Finally, in Appendix B.8 we evaluate model ensembles on datasets with spurious correlations and in Appendix B.9 we break down the performance of co-training on the skewed CelebA dataset according to different input attributes. B.2 FULL PRE-TRAINED ENSEMBLE RESULTS In Table 3, we reported the best ensemble method for each pair of models trained with different priors on the labeled data. In Table 12, we report the full results over the individual ensembles. B.3 ENSEMBLING SELF-TRAINED MODELS In Table 13, we report the best ensemble method for pairs of self-trained models with different priors. In Table 14, we report the full results over the individual ensembles.
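Putting the pieces together, the outer loop of Algorithm 1 might look roughly as follows in Python. The helpers train, predict_probs, new_standard_model, and select_confident (sketched in Section 4.1 above) are assumptions, not part of the paper's code; the 5%-per-era schedule follows the general methodology in A.5.

```python
import numpy as np

def self_train(model, X, y, U, num_classes, eras=20, frac_per_era=0.05):
    """Sketch of Algorithm 1: grow the pseudo-labeled pool each era and retrain
    with warm start, then fit a standard model on the final pseudo-labels.
    X, y, U are assumed to be NumPy arrays of labeled inputs, labels, and
    unlabeled inputs; train/predict_probs/new_standard_model are assumed helpers.
    """
    for t in range(1, eras + 1):
        probs = predict_probs(model, U)                       # (|U|, num_classes)
        per_class = int(frac_per_era * t * len(U) / num_classes)
        idx, pseudo = select_confident(probs, per_class)      # class-balanced top-k
        # Warm-start retraining on the labeled set plus the current pseudo-labels.
        model = train(model, np.concatenate([X, U[idx]]),
                      np.concatenate([y, pseudo]))
    # Finally, train a standard model from scratch on X together with U_T.
    return train(new_standard_model(), np.concatenate([X, U[idx]]),
                 np.concatenate([y, pseudo]))
```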
We find that, similar to the ensembles of models trained on the labeled data, models with diverse priors gain more from ensembling. However, co-training models with diverse priors together still outperforms ensembling self-trained models. B.4 STACKED ENSEMBLING Here we consider an ensembling technique that leverages a validation set. We implement stacking (also called blending) Töscher et al. (2009); Sill et al. (2009), which takes in the outputs of the member models as input and then trains a second model that acts as the final prediction layer. Here, we take the logits of each model in the ensemble, and train the secondary model using logistic regression on the validation set for the dataset. We report accuracies of the ensemble on the test set below. We again find that prior diversity is important for the performance of the ensemble. B.5 SELF-TRAINING AND CO-TRAINING ON STL-10 AND CIFAR-10 B.6 CO-TRAINING WITH VARYING AMOUNTS OF LABELED DATA In Table 19, we study how the efficacy of combining diverse priors through co-training changes as the number of labeled examples increases for STL-10. As one might expect, when labeled data is sparse, the feature priors learned by the models alone are relatively brittle: thus, leveraging diverse priors against each other on unlabeled data improves generalization. As the number of labeled examples increases, the models with single feature priors learn more reliable prediction rules that can already generalize, so the additional benefit of combining feature priors diminishes. However, even in settings with plentiful data, combining diverse feature priors can aid generalization if there is a spurious correlation in the labeled data (see Section 5). B.7 CORRELATION BETWEEN THE INDIVIDUAL FEATURE-BIASED MODELS AND THE FINAL STANDARD MODEL B.8 ENSEMBLES FOR SPURIOUS DATASETS In Table 21 (full table in Table 22), we ensemble the self-trained priors for the Tinted STL-10 dataset and the CelebA dataset as in Section 5. Both of these datasets have a spurious correlation based on color, which results in a weak Standard and BagNet model. As a result, the ensembles with the Standard or BagNet models do not perform well on the test set. However, in Section 7, we find that co-training in this setting allows the BagNet model to improve when jointly trained with a shape model, thus boosting the final performance. B.9 BREAKDOWN OF TEST ACCURACY FOR CO-TRAINING ON CELEBA B.10 WHAT IF THE UNLABELED DATA ALSO CONTAINED THE SPURIOUS CORRELATION? In Section 5, we assume that the unlabeled data does not contain the spurious correlation present in the labeled data. This is often the case when unlabeled data can be collected through a more diverse process than labeled data (for example, by scraping the web at large scale or by passively collecting data during deployment). This assumption is important: in order to successfully steer models away from the spurious correlation during co-training, the process needs to surface examples which contradict the spurious correlation. However, if the unlabeled data is also heavily skewed, such examples might be rare or non-existent. What happens if the unlabeled data is as heavily skewed as the labeled data? We return to the setting of a spurious association between hair color and gender in CelebA. However, unlike in Section 5, we use an unlabeled dataset that also perfectly correlates hair color and gender: it contains 2000 non-blond males and 2000 blond females.
The unlabeled data thus has the same distribution as the labeled data, and contains no examples that reject the spurious correlation (blond males or non-blond females). Self-Training: Since the unlabeled data follows the spurious correlation between hair color and gender, the standard and BagNet models almost perfectly pseudo-label the unlabeled data. Thus, they are simply increasing the number of examples in the training dataset but maintaining the same overall distribution. Self-training thus does not change the accuracy for models with these priors significantly. In contrast, in the setting in Section 5, there were examples in the unlabeled data which did not align with the spurious correlation (blond males and non-blond females). Since they relied mostly on hair color, the standard and BagNet models actively mislabeled these examples (i.e., by labeling a blond male as female). Training on these erroneous pseudo-labels actively suppressed any features that were not hair color, causing the standard and BagNet models to perform worse after self-training. Co-Training: In contrast, when performing co-training with the Canny and BagNet priors, the Canny model (which cannot detect hair color) will make mistakes on the unlabeled dataset. These mistakes are inconsistent with a reliance on hair color: due to this regularization, the BagNet’s accuracy improves from 69.35% to 76.52%. Overall, though the gain is not as significant as in the setting with a balanced unlabeled dataset, the Canny + BagNet co-trained model can mitigate the pitfalls of the BagNet’s reliance on hair color and outperform even the Canny self-trained model.
1. What is the main contribution of the paper regarding feature extraction in deep visual processing models? 2. What are the strengths of the paper, particularly in its formulation, investigation, and experimentation? 3. What is the weakness of the paper regarding the choice of datasets for experiments? 4. How does the reviewer assess the clarity, organization, technical correctness, and readability of the paper's content?
Summary Of The Paper Review
Summary Of The Paper The paper proposes a formalized framework for imposing priors on the feature extraction in deep visual processing models. There has been earlier work on encouraging certain feature representations (e.g. suppressing the focus on texture in feature extraction) and also making feature representations robust to domain shift. The core contribution of this paper is the systematic formulation and investigation of how different, distinct feature priors leads to complementary feature representations that can be combined to provide more robust data representations - in other words, creating synthesized multi-view data representations. The paper ties back to early (1998) work on co-training (which essentially is multi-modal bootstrapping) and ties this to the more recent body of work on self-supervision and self-training. Experiments are performed with classical shape- and texture-biased models, and show that the hypothesis - that diverse feature priors are able to robustly create a set of complementary data views - holds. Review This paper has a number of strengths, that combined makes me recommend the paper for acceptance: The topic of this paper, creating and combining robust, generalizable and diverse feature representations, is of high relevance to a large portion of the ICLR audience. It provides an interesting and valuable formal framework for steering feature representations in different directions, creating multi-view representations of the data. It is well written, well organized, technically correct, and easy to read. The experimental design is sound and well done. One weakness can be pointed out, not however any cause for not accepting this paper in my opinion: The experiments are performed on old datasets, CIFAR-10 and STL-10, both with quite clear class structure and simplistic image setting (e.g. the object centered in the image). It would be interesting to see experiments on more difficult data with fine-grained and hierarchical class structure for example.
ICLR
Title Combining Diverse Feature Priors Abstract To improve model generalization, model designers often restrict the features that their models use, either implicitly or explicitly. In this work, we explore the design space of leveraging such feature priors by viewing them as distinct perspectives on the data. Specifically, we find that models trained with diverse sets of feature priors have less overlapping failure modes, and can thus be combined more effectively. Moreover, we demonstrate that jointly training such models on additional (unlabeled) data allows them to correct each other’s mistakes, which, in turn, leads to better generalization and resilience to spurious correlations. 1 INTRODUCTION The driving force behind deep learning’s success is its ability to automatically discover predictive features in complex high-dimensional datasets. In fact, these features can generalize beyond the specific task at hand, thus enabling models to transfer to other (yet similar) tasks (Donahue et al., 2014). At the same time, the set of features that the model learns has a large impact on how well it will perform on unseen inputs, especially in the presence of distribution shift (Ponce et al., 2006; Torralba & Efros, 2011; Sagawa et al., 2020) or spurious correlations (Heinze-Deml & Meinshausen, 2017; Beery et al., 2018; Meinshausen, 2018). Motivated by this, recent work focuses on encouraging specific modes of behavior by preventing the models from relying on certain features. Examples include suppressing texture features (Geirhos et al., 2019; Wang et al., 2019), avoiding ℓp-non-robust features (Tsipras et al., 2019; Engstrom et al., 2019), or utilizing different parts of the frequency spectrum (Yin et al., 2019). At a high level, these methods can be thought of as ways of imposing a feature prior on the learning process, so as to bias the model towards acquiring features that generalize better. This makes the choice of the feature prior to impose a key design decision. The goal of this work is thus to explore the underlying design space of feature priors and, specifically, to understand: How can we effectively harness the diversity of feature priors? OUR CONTRIBUTIONS In this paper, we cast diverse feature priors as different perspectives on the data and study how they can complement each other. In particular, we aim to understand whether training with distinct priors results in models with non-overlapping failure modes and how such models can be combined to improve generalization. This is particularly relevant in settings where the data is unreliable—e.g., when the training data contains a spurious correlation. From this perspective, we focus our study on two priors that arise naturally in the context of image classification, shape and texture, and investigate the following: Feature diversity. We demonstrate that training models with diverse feature priors results in them making mistakes on different parts of the data distribution, even if they perform similarly in terms of overall accuracy. Further, one can harness this diversity to build model ensembles that are more accurate than those based on combining models which have the same feature prior. Combining feature priors on unlabeled data. When learning from unlabeled data, the choice of feature prior can be especially important. For strategies such as self-training, sub-optimal prediction rules learned from sparse labeled data can be reinforced when pseudo-labeling the unlabeled data.
We show that, in such settings, we can leverage the diversity of feature priors to address these issues. By jointly training models with different feature priors on the unlabeled data through the framework of co-training Blum & Mitchell (1998), we find that the models can correct each other’s mistakes to learn prediction rules that generalize better. Learning in the presence of spurious correlations. Finally, we want to understand whether combining diverse priors during training, as described above, can prevent models from relying on correlations that are spurious, i.e., correlations that do not hold on the actual distribution of interest. To model such scenarios, we consider a setting where a spurious correlation is present in the training data but we also have access to (unlabeled) data where this correlation does not hold. In this setting, we find that co-training models with diverse feature priors can actually steer them away from such correlations and thus enable them to generalize to the underlying distribution. Overall, our findings highlight the potential of incorporating distinct feature priors into the training process. We believe that further work along this direction will lead us to models that generalize more reliably. 2 BACKGROUND: FEATURE PRIORS IN COMPUTER VISION When learning from structurally complex data, such as images, relying on raw input features alone (e.g., pixels) is not particularly useful. There has thus been a long line of work on extracting input patterns that can be more effective for prediction. While early approaches, such as SIFT (Lowe, 1999) and HOG (Dalal & Triggs, 2005), leveraged hand-crafted features, these have by now been largely replaced by features that are automatically learned in an end-to-end fashion (Krizhevsky, 2009; Ciregan et al., 2012; Krizhevsky et al., 2012). Nevertheless, even when features are learned, model designers still tune their models to better suit a particular task via changes in the architecture or training methodology. Such modifications can be thought of as imposing feature priors, i.e., priors that bias a model towards a particular set of features. One prominent example here is convolutional neural networks, which are biased towards learning a hierarchy of localized features Fukushima (1980); LeCun et al. (1989). Indeed, such a convolutional prior can be quite powerful: it is sufficient to enable many image synthesis tasks without any training Ulyanov et al. (2017). More recently, there has been work exploring the impact of explicitly restricting the set of features utilized by the model. For instance, Geirhos et al. (2019) demonstrate that training models on stylized inputs (and hence suppressing texture information) can improve model robustness to common corruptions. In a similar vein, Wang et al. (2019) penalize the predictive power of local features to learn shape-biased models that generalize better between image styles. A parallel line of work focuses on training models to be robust to small, worst-case input perturbations using, for example, adversarial training Goodfellow et al. (2015); Madry et al. (2018) or randomized smoothing (Lecuyer et al., 2019; Cohen et al., 2019).
Such training biases these models away from non-robust features (Tsipras et al., 2019; Ilyas et al., 2019; Engstrom et al., 2019), which tends to result in them being more aligned with human perception (Tsipras et al., 2019; Kaur et al., 2019), more resilient to certain input corruptions (Ford et al., 2019; Kireev et al., 2021), and better suited for transfer to downstream tasks Utrera et al. (2020); Salman et al. (2020). 3 FEATURE PRIORS AS DIFFERENT PERSPECTIVES As we discussed, the choice of feature prior can have a large effect on what features a model relies on and, by extension, on how well it generalizes to unseen inputs. In fact, one can view such priors as distinct perspectives on the data, capturing different information about the input. In this section, we provide evidence to support this view; specifically, we examine a case study on a pair of feature priors that arise naturally in the context of image classification: shape and texture. 3.1 TRAINING SHAPE- AND TEXTURE-BIASED MODELS In order to train shape- and texture-biased models, we either pre-process the model input or modify the model architecture as follows: Shape-biased models. To suppress texture information in the images, we pre-process our inputs by applying an edge detection algorithm. We consider two such canonical algorithms: the Canny algorithm Ding & Goshtasby (2001) which produces a binary edge mask, and the Sobel algorithm Sobel & Feldman (1968) which provides a softer edge detection, hence retaining some texture information (see Figures 1b and 1c). Texture-biased models. To prevent the model from relying on the global structure of the image, we utilize a variant of the BagNet architecture Brendel & Bethge (2019). This architecture deliberately limits the receptive field of the model, thus forcing it to rely on local features (see Figure 1d). We visualize all of these priors in Figure 1 and provide implementation details in Appendix A. 3.2 DIVERSITY OF FEATURE-BIASED MODELS After training models with shape and texture biases as outlined above, we evaluate whether these models indeed capture complementary information about the input. Specifically, we train models on a small subset (100 examples per class) of the CIFAR-10 (Krizhevsky, 2009) and STL-10 (Coates et al., 2011) datasets, and measure the correlation between which test examples they correctly classify. We find that pairs consisting of a shape-biased model and a texture-biased model (i.e., Canny and BagNet, or Sobel and BagNet) indeed have the least correlated predictions—cf. Table 2. In other words, the mistakes that these models make are more diverse than those made by identical models trained from different random initializations. At the same time, different shape-biased models (Sobel and Canny) are relatively well-correlated with each other, which corroborates the fact that models trained on similar features of the input are likely to make similar mistakes. Model ensembles. Having shown that training models with these feature priors results in diverse prediction rules, we examine whether we can now combine them to improve our generalization. The canonical approach for doing so is to incorporate these models into an ensemble. We find that the diversity of models trained with different feature priors indeed directly translates into an improved performance when combining them into an ensemble—cf. Table 3.
In fact, we find that the performance of the ensemble is tightly connected to the prediction similarity of its constituents (as measured in Table 2), i.e., more diverse ensembles tend to perform better. For instance, the best ensemble for the STL-10 dataset is the one combining a shape-biased (Canny) and a texture-biased model (BagNet), which were the models with the least aligned predictions. 4 COMBINING DIVERSE PRIORS ON UNLABELED DATA In the previous section, we saw that training models with different feature priors (e.g., shape- and texture-biased models) can lead to prediction rules with less overlapping failure modes—which, in turn, can lead to more effective model ensembles. However, ensembles only combine model predictions post hoc and thus cannot take advantage of diversity during the training process. In this section, we instead focus on utilizing diversity during training. Specifically, we will leverage the diversity introduced through feature priors in the context of self-training Lee et al. (2013)—a framework commonly used when the labeled data is insufficient to learn a well-generalizing model. This framework utilizes unlabeled data, which is then pseudo-labeled using an existing model and used for further training. While such methods can often improve the overall model performance, they suffer from a significant drawback: models tend to reinforce suboptimal prediction rules even when these rules do not generalize to the underlying distribution Arazo et al. (2020). Our goal here is thus to leverage diverse feature priors to address this exact shortcoming. Specifically, we will jointly train models with different priors on the unlabeled data through the framework of co-training Blum & Mitchell (1998). Since these models capture complementary information about the input (cf. Table 2), we expect them to correct each other’s mistakes and improve their prediction rules. As we will see in this section, this approach can indeed have a significant impact on the performance of the resulting model, outperforming ensembles that combine such models only at evaluation time—see summary in Figure 4. Setup. We base our analysis on the CIFAR-10 and STL-10 datasets. Specifically, we treat a small fraction of the training set as labeled examples (100 examples per class), another fraction as our validation set for tuning hyperparameters (10% of the total training examples), and the rest as unlabeled data. We report our results on the standard test set of each dataset. (See Appendix A for experimental details, and Appendix B.6 for experiments with varying levels of labeled data.) 4.1 SELF-TRAINING AND ENSEMBLES Before outlining our method for jointly training models with multiple priors, we first describe the standard approach to self-training a single model. At a high level, the predictions of the model on the unlabeled data are treated as correct labels and are then used to further train the same model Lee et al. (2013); Iscen et al. (2019); Zou et al. (2019); Xie et al. (2020). The underlying intuition is that the classifier will predict the correct labels for that data better than chance, and thus these pseudo-labels can be used to expand the training set. In practice, however, these pseudo-labels tend to be noisy. Thus, a common approach is to only use the labels to which the model assigns the highest probability Lee et al. (2013). This process is repeated, self-training on increasingly larger fractions of the unlabeled data until all of it is used. We refer to each such training phase as an era.
We refer to each such training phase as an era. Ensembles of diverse self-trained models. Similarly to our results in Table 3, we find that ensembles comprised of self-trained models with diverse feature priors outperform those that use the same prior from different random initializations (see Figure 4 for a summary and Appendix B.3 for the full results). This demonstrates that, after self-training, these models continue to capture complementary information about the input that can be leveraged to improve performance. 4.2 CO-TRAINING MODELS WITH DIFFERENT FEATURE PRIORS Moving beyond self-training with a single feature prior, our goal in this section is to leverage multiple feature priors by jointly training them on the unlabeled data. This idea naturally fits into the framework of co-training: a method used to learn from unlabeled data when inputs correspond to multiple independent sets of features Blum & Mitchell (1998). Concretely, we first train a model for each feature prior. Then, we collect the pseudo-labels on the unlabeled data that were assigned the highest probability for each model—including duplicates with potentially different labels—to form a new training set which we use for further training. Similarly to the self-training case, we repeat this process over several eras, increasing the fraction of the unlabeled dataset used at each era. Intuitively, this iterative process allows the models to bootstrap off of each other’s predictions, learning correlations that they might fail to learn from the labeled data alone. At the end of this process, we are left with two models, one for each prior, which we combine into a single classifier by training a standard model from scratch on the combined pseudo-labels. We provide a more detailed explanation of the methodology in Appendix A.5. Co-training performance. We find that co-training with shape- and texture-based priors can significantly improve the test accuracy of the final model compared to self-training with any of the priors alone (Table 5). This is despite the fact that, when using self-training alone, the standard model outperforms all other models (Column 4, Table 5). Moreover, co-training models with diverse priors improves upon simply combining them in an ensemble (Appendix B.3). In Appendix B.5, we report the performance of co-training with every pair of priors. We find that co-training with shape- and texture-based priors together (Canny + BagNet for STL-10 and Sobel + BagNet for CIFAR-10) outperform every other prior combination. Note that this is the case even though, when only ensembling models with different priors (c.f Table 3 and Appendix B.3), Standard + Sobel is consistently the best performing pair for CIFAR-10. Overall, these results indicate that the diversity of shape- and texture-biased models allows them to improve each other over training. Additionally, we find that, even when training a single model on the pseudo-labels of another model, prior diversity can still help. Specifically, we compare the performance of a standard model trained from scratch using pseudo-labels from various self-trained models (Column 5, Table 5). In this setting, using a self-trained shape- or texture-biased model for pseudo-labeling outperforms using a self-trained standard model. This is despite the fact that, in isolation, the standard model has higher accuracy than the shape- or texture-biased ones (Column 4, Table 5). Model alignment over co-training. 
To further explore the dynamics of co-training, we evaluate how the correlation between model predictions evolves as the eras progress in Figure 6 (using the prediction alignment measure of Table 2). We find that shape- and texture-biased models exhibit low correlation at the start of co-training, but this correlation increases as co-training progresses. This is in contrast to self-training each model on its own, where the correlation remains relatively low. It is also worth noting that the correlation appears to plateau at a lower value when co-training models with distinct feature priors as opposed to co-training two standard models. Finally, we find that a standard model trained on the pseudo-labels of other models correlates well with the models themselves (see Appendix B.7). Overall, these findings indicate that models trained on each other’s pseudo-labels end up behaving more similarly. 5 USING CO-TRAINING TO AVOID SPURIOUS CORRELATIONS A major challenge when training models for real-world deployment is avoiding spurious correlations: associations which are predictive on the training data but not valid for the actual task. Since models are typically trained to maximize train accuracy, they are quite likely to rely on such spurious correlations Gururangan et al. (2018); Beery et al. (2018); Geirhos et al. (2020); Xiao et al. (2020). In this section, our goal is to leverage diverse feature priors to control the sensitivity of the training process to such spurious correlations. Specifically, we will assume that the spurious correlation does not hold on the unlabeled data (which is likely since unlabeled data can often be collected at a larger scale). Without this assumption, the unlabeled contains no examples that could (potentially) contradict the spurious correlation (we investigate the setting where the unlabeled data is also similarly skewed in Appendix B.10). As we will see, if the problematic correlation is not easily captured by one of the priors, the corresponding model generates pseudo-labels that are inconsistent with this correlation, thus steering other models away from this correlation during co-training. Setup. We study spurious correlations in two settings. First, we create a synthetic dataset by tinting each image of the STL-10 labeled dataset in a class-specific way. This encourages models to rely on the tint, as it is highly predictive on the training set. However, this prediction rule does not generalize to the test set where this correlation is absent. Second, similar to Sagawa et al. (2020), we consider a gender classification task based on CelebA (Liu et al., 2015) where hair color (“blond” vs. “non-blond”) is predictive on the labeled data but not on the unlabeled and test data. While gender and hair color are independent attributes on the unlabeled dataset, the labeled dataset consists only of blond females and non-blond males. Similarly to the synthetic case, the labeled data encourages a prediction rule based only on hair color. See Appendix A.1 for details. Performance on datasets with spurious features. We find that, when trained only on the labeled data (where the correlation is fully predictive), both the standard and BagNet models generalize poorly in comparison to the shape-biased models (see Table 7). This behavior is expected: the spurious attribute in both datasets is color-related and hence mostly suppressed by the edge detection algorithms used to train shape-based models. 
Even after self-training on the unlabeled data (where the correlation is absent), the performance of the standard and BagNet models does not improve significantly. Finally, simply ensembling self-trained models post hoc does not improve their performance. Indeed as the texture-biased and standard models are significantly less accurate than the shape-biased one, they end up lowering the overall accuracy of the ensemble (see Appendix B.8). In contrast, when we co-train a texture-biased model with a shape-biased one, the texture-biased model improves substantially. For instance, when co-trained with a Canny model, the BagNet model improves over self-training by 42% on the tinted STL-10 dataset and 27% on the CelebA dataset. This improvement can be attributed to the fact that the predictions of the shape-biased model are not consistent with the spurious correlation on the unlabeled data. Hence, by being trained on pseudolabels from that model, the BagNet model is forced to rely on alternative, non-spurious features. Moreover, particularly on CelebA, the shape-biased model also improves when co-trained with a texture-biased model. This indicates that even though the texture-biased model relies on the spurious correlation, it also captures non-spurious features that, through pseudo-labeling, improve the performance of the shape-based model. In Appendix B.9, we find that these improvements are concentrated on inputs where the spurious correlation does not hold. 6 ADDITIONAL RELATED WORK In Section 2, we discussed the most relevant prior work on implicit or explicit feature priors. Here, we discuss additional related work and how it connects to our approach. Shape-biased models. Several other methods aim to bias models towards shape-based features: input stylization Geirhos et al. (2019); Somavarapu et al. (2020); Li et al. (2021), penalizing early layer predictiveness Wang et al. (2019), jigsaw puzzles Carlucci et al. (2019); Asadi et al. (2019), dropout Shi et al. (2020), or data augmentation Hermann et al. (2020). While, in our work, we choose to suppress texture information via edge detection algorithms, any of these methods can be substituted to generate the shape-based model for our analysis. Avoiding spurious correlations. Other methods that can prevent models from learning spurious correlations include: learning representations that are simultaneously optimal across domains (Arjovsky et al., 2019), enforcing robustness to group shifts (Sagawa et al., 2020), and utilizing multiple data points corresponding to a single physical entity (Heinze-Deml & Meinshausen, 2017). Similar in spirit to our work, these methods aim to learn prediction rules that are supported by multiple views of the data. However, we do not rely on annotations or multiple sources and instead impose feature priors through the model architecture and input preprocessing. Pseudo-labeling. Since the initial proposal of pseudo-labeling for neural networks Lee et al. (2013), there has been a number of more sophisticated pseudo-labeling schemes aimed at improving the accuracy and diversity of the labels Iscen et al. (2019); Augustin & Hein (2020); Xie et al. (2020); Rizve et al. (2021); Huang et al. (2021). In our work, we focus on the simplest scheme for self-labeling—i.e., confidence based example selection. Nevertheless, most of these schemes can be directly incorporated into our framework to potentially improve its overall performance. 
A recent line of work explores self-training by analyzing it under different assumptions on the data (Mobahi et al., 2020; Wei et al., 2021; Allen-Zhu & Li, 2020; Kumar et al., 2020). Closest to our work, Chen et al. (2020b) show that self-training on unlabeled data can reduce reliance on spurious correlations under certain assumptions. In contrast, we demonstrate that by leveraging diverse feature priors, we can avoid spurious correlations even if a model heavily relies on them.

Consistency regularization. In parallel to pseudo-labeling, consistency regularization is another canonical technique for leveraging unlabeled data. Here, a model is trained to be invariant to a set of input transformations. These transformations might stem from data augmentations and architecture stochasticity Laine & Aila (2017); Berthelot et al. (2019); Chen et al. (2020a); Sohn et al. (2020); Prabhu et al. (2021) or from adversarial examples Miyato et al. (2018).

Co-training. One line of work studies co-training from a theoretical perspective (Nigam & Ghani, 2000; Balcan et al., 2005; Goldman & Zhou, 2000). Other work aims to improve co-training by either expanding the settings where it can be applied (Chen et al., 2011) or by improving its stability (Ma et al., 2020; Zhang & Zhou, 2011). Finally, a third line of work applies co-training to images. Since images cannot be separated into disjoint feature sets, one would apply co-training by training multiple models Han et al. (2018), either regularized to be diverse through adversarial examples Qiao et al. (2018) or each trained using a different method Yang et al. (2020). Our method is complementary to these approaches as it relies on explicit feature priors to obtain different views.

7 CONCLUSION

In this work, we explored the benefits of combining feature priors with non-overlapping failure modes. By capturing complementary perspectives on the data, models trained with diverse feature priors can offset each other's mistakes when combined through methods such as ensembles. Moreover, in the presence of unlabeled data, we can leverage prior diversity by jointly bootstrapping models with different priors through co-training. This allows the models to correct each other during training, thus improving pseudo-labeling and controlling for correlations that do not generalize well. We believe that our work is only the first step in exploring the design space of creating, manipulating, and combining feature priors to improve generalization. In particular, our framework is quite flexible and allows for a number of different design choices, such as choosing other feature priors (cf. Sections 2 and 6), using other methods for pseudo-label selection (e.g., using uncertainty estimation (Lee et al., 2018; Rizve et al., 2021)), and combining pseudo-labels via different ensembling methods. More broadly, we believe that exploring the synthesis of explicit feature priors in new applications is an exciting avenue for further research.

A EXPERIMENTAL DETAILS

A.1 DATASETS

For our first set of experiments (Section 4), we focus on a canonical setting where a small portion of the training set is labeled and we have access to a pool of unlabeled data.

STL-10. The STL-10 Coates et al. (2011) dataset contains 5,000 training and 8,000 test images of size 96×96 from 10 classes. We designate 1,000 of the 5,000 (20%) training examples to be the labeled training set, 500 (10%) to be the validation set, and the rest are used as unlabeled data.

CIFAR-10.
The CIFAR-10 Krizhevsky (2009) dataset contains 50,000 training and 10,000 test images of size 32×32 from 10 classes. We designate 1,000 of the 50,000 (2%) training examples to be the labeled training set, 5,000 (10%) to be the validation set, and the rest as unlabeled data. In both cases, we report the final performance on the standard test set of that dataset. We also create two datasets that each contain a different spurious correlation.

Tinted STL-10. We reuse the STL-10 setup described above, but we add a class-specific tint to each image in the (labeled) training set. Specifically, we hand-pick a different color for each of the 10 classes and then add this color to each of the pixels (ensuring that each RGB channel remains within the valid range)—see Figure 8 for examples. This tint is only present in the labeled part of the training set; the unlabeled and test parts of the dataset are left unaltered.

Biased CelebA. We consider the task of predicting gender in the CelebA Liu et al. (2015) dataset. In order to create a biased training set, we choose a random sample of 500 non-blond males and 500 blond females. We then use a balanced unlabeled dataset consisting of 1,000 random samples for each of: blond males, blond females, non-blond males, and non-blond females. We use the standard CelebA test set, which consists of 12.41% blond females, 48.92% non-blond females, 0.90% blond males, and 37.77% non-blond males. (Note that a classifier predicting purely based on hair color will have an accuracy of 50.18% on that test set.)

All of the datasets that we use are freely available for non-commercial research purposes. Moreover, to the best of our knowledge, they do not contain offensive content or identifiable information (other than publicly available celebrity photos).

A.2 MODEL ARCHITECTURES AND INPUT PREPROCESSING

For both the standard model and the models trained on images processed by an edge detection algorithm, we use a standard model architecture—namely, VGG16 Simonyan & Zisserman (2015) with the addition of batch normalization Ioffe & Szegedy (2015) (often referred to as VGG16-BN). We describe the exact edge detection process as well as the architecture of the BagNet model (texture prior) below. We visualize these priors in Figure 10.

Canny edge detection. Given an image, we first smooth it with a 5 pixel bilateral filter Tomasi & Manduchi (1998), with the filter σ in the coordinate and color space set to 75. After smoothing, the image is converted to gray-scale. Finally, a Canny filter Canny (1986) is applied to the image, with hysteresis thresholds 100 and 200, to extract the edges.

Sobel edge detection. Given an image, we first upsample it to 128×128 pixels. Then we convert it to gray-scale and apply a Gaussian blur (kernel size = 5, σ = 5). The image is then passed through a Sobel filter Sobel & Feldman (1968) with a kernel size of 3 in both the horizontal and the vertical direction to extract the image gradients.

BagNet. For our texture-biased model, we use a slimmed-down version of the BagNet architecture from Brendel & Bethge (2019). The goal of this architecture is to limit the receptive field of the model, hence forcing it to make predictions based on local features. The exact architecture we used is shown in Figure 9. Intuitively, the top half of the network—i.e., the green and blue blocks—constructs features on patches of size 20×20 for 96×96 images and 10×10 for 32×32 images. The rest of the network consists only of 1×1 convolutions and max-pooling, hence not utilizing the image’s spatial structure.
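For concreteness, the two edge-detection pipelines described above can be sketched with OpenCV as follows. This is a minimal reconstruction that mirrors the stated parameters but is not the authors' exact implementation; in particular, how the two Sobel directions are combined (here, via the gradient magnitude) is an assumption.

```python
import cv2
import numpy as np

def canny_preprocess(img_rgb: np.ndarray) -> np.ndarray:
    """Bilateral smoothing (d=5, sigmas=75), grayscale, then Canny with thresholds 100/200."""
    smoothed = cv2.bilateralFilter(img_rgb, d=5, sigmaColor=75, sigmaSpace=75)
    gray = cv2.cvtColor(smoothed, cv2.COLOR_RGB2GRAY)
    return cv2.Canny(gray, threshold1=100, threshold2=200)  # binary edge mask

def sobel_preprocess(img_rgb: np.ndarray) -> np.ndarray:
    """Upsample to 128x128, grayscale, Gaussian blur (k=5, sigma=5), then Sobel gradients."""
    up = cv2.resize(img_rgb, (128, 128), interpolation=cv2.INTER_LINEAR)
    gray = cv2.cvtColor(up, cv2.COLOR_RGB2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), sigmaX=5)
    gx = cv2.Sobel(blurred, cv2.CV_32F, 1, 0, ksize=3)  # horizontal gradient
    gy = cv2.Sobel(blurred, cv2.CV_32F, 0, 1, ksize=3)  # vertical gradient
    # Combining the two directions via the gradient magnitude is an assumption.
    return cv2.magnitude(gx, gy)

if __name__ == "__main__":
    dummy = np.random.randint(0, 256, size=(96, 96, 3), dtype=np.uint8)
    print(canny_preprocess(dummy).shape, sobel_preprocess(dummy).shape)
```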
(Figure 9: the Custom BagNet20 and Custom BagNet10 architectures.)

A.3 TRAINING SETUP

A.3.1 BASIC TRAINING

We train all our models using stochastic gradient descent (SGD) with momentum (a coefficient of 0.9) and a decaying learning rate. We add weight decay regularization with a coefficient of 10−4. In terms of data augmentation, we apply random cropping with a padding of 4 pixels, random horizontal flips, and a random rotation of ±2 degrees. These transformations are applied after the edge detection processing. We train all models with a batch size of 64 for 96×96-sized images and 128 for 32×32-sized images, for a total of 300 epochs. All our experiments are performed on our internal cluster, which mainly consists of NVIDIA GTX 1080 Ti GPUs.

Hyperparameter tuning. To ensure a fair comparison across feature priors, we selected the hyperparameters for each dataset-prior pair separately, using the held-out validation set (separate from the final test set used for reporting performance). Specifically, we performed a grid search, choosing the learning rate (LR) from [0.1, 0.05, 0.02, 0.01, 0.005], the number of epochs between each learning rate drop (K) from [50, 100, 300], and the factor with which the learning rate is multiplied (γ) from [0.5, 1]. The parameters chosen are shown in Table 11. We found that all models achieved near-optimal performance strictly within the range of each hyperparameter. Thus, we did not consider a wider grid.

A.4 ENSEMBLES

In order to leverage prior diversity, we ensemble models trained with (potentially) different priors. We use the following ensembles:
1. Take Max: Predict based on the model assigning the highest probability on this example.
2. Average: Average the (softmax) output probabilities of the models, and predict the class assigned the highest probability.
3. Rank: Each model ranks all test examples based on the probability assigned to their predicted labels. Then, for each example, we predict using the model which has a lower rank on this example.
We then report the maximum of these ensemble methods in Table 3.

A.5 SELF-TRAINING AND CO-TRAINING SCHEMES

In the setting that we are focusing on, we are provided with a labeled dataset X and an unlabeled dataset U, where typically there is much more unlabeled data (|U| ≫ |X|). We then choose a set of (one or more) feature priors, each of which corresponds to a different way of training a model (e.g., using edge detection preprocessing).

General methodology. We start by training each of these models on the labeled dataset. Then, we combine the predictions of these models to produce pseudo-labels for the unlabeled dataset. Finally, we choose a fraction of the unlabeled data and train the models on that set using the produced pseudo-labels (in addition to the original labeled set X). This process is repeated using increasing fractions of the unlabeled dataset until, eventually, the models are trained on its entirety. We refer to each such phase as an era. We include an additional 5% of the unlabeled data per era, resulting in a total of 20 eras. During each era, we use the training process described in Appendix A.3.1 without re-initializing the models (warm start). After completing this process, we train a standard model from scratch using both the labeled set and the resulting pseudo-labels. The methodology used for choosing and combining pseudo-labels is described below for each scheme.
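Before turning to the individual schemes, the three ensembling rules of Appendix A.4 above can be made concrete with the following sketch; the array shapes and the reading of "lower rank" as "more confident" are assumptions rather than details taken from the paper.

```python
import numpy as np

def ensemble_predict(probs_list, method="average"):
    """Combine per-model softmax outputs (each of shape (num_examples, num_classes))
    using one of the three rules of Appendix A.4."""
    stacked = np.stack(probs_list)                     # (num_models, N, C)
    if method == "average":
        return stacked.mean(axis=0).argmax(axis=1)
    if method == "take_max":
        # For each example, use the model whose top probability is highest.
        winner = stacked.max(axis=2).argmax(axis=0)    # (N,)
        per_model_pred = stacked.argmax(axis=2)        # (num_models, N)
        return per_model_pred[winner, np.arange(stacked.shape[1])]
    if method == "rank":
        # Each model ranks examples by the probability of its predicted label;
        # each example is assigned to the model that ranks it higher (rank 0 = most confident).
        conf = stacked.max(axis=2)                     # (num_models, N)
        ranks = np.argsort(np.argsort(-conf, axis=1), axis=1)
        winner = ranks.argmin(axis=0)
        per_model_pred = stacked.argmax(axis=2)
        return per_model_pred[winner, np.arange(stacked.shape[1])]
    raise ValueError(method)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    p1 = rng.dirichlet(np.ones(10), size=100)          # stand-in softmax outputs, model 1
    p2 = rng.dirichlet(np.ones(10), size=100)          # stand-in softmax outputs, model 2
    for m in ["average", "take_max", "rank"]:
        print(m, ensemble_predict([p1, p2], m)[:5])
```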
Self-training. Since we are only training one model, we only need to decide how to choose the pseudo-labels to use for each era. We do this in the simplest way: at era t, we pick the subset Ut ⊆ U of examples that are assigned the highest probability on their predicted label. We attempt to produce a class-balanced training set by applying this process separately to each class (as predicted by the model). The pseudocode for the method is provided in Algorithm 1.

Algorithm 1: Self-training
Parameters: Number of eras T. Fraction added per era k.
Input: Labeled data X with n classes, unlabeled data U, model trained on X.
for era t ∈ 1...T do
    forward-pass U through the model to create pseudo-labels
    Ut = []
    for each class c do
        Select the kt|U|/n most confident examples from U predicted by the model as class c
        Add those examples to Ut with class c
    Re-train (warm start) the model on X ∪ Ut until convergence
Train a standard model from scratch on X ∪ UT.

Standard co-training. Here, we train multiple models (in our experiments, two) based on a common pool of pseudo-labeled examples in each era. In each era t, each model labels the unlabeled dataset U. Then, for each class, we alternate between models, adding the next most confident example predicted as that class by that model to Ut, until a fixed number of unique examples has been added for that class (5% of the size of the unlabeled dataset per era). Note that this process allows both conflicts and duplicates: if multiple models are confident about a specific example, that example may be added more than once (potentially with a different label each time). Finally, we train each model (without re-initializing) on X ∪ Ut. The pseudocode for this method can be found in Algorithm 2.

Algorithm 2: Standard Co-Training
Parameters: Number of eras T. Fraction added per era k.
Input: Labeled data X with n classes, unlabeled data U, models trained on X.
for era t ∈ 1...T do
    forward-pass U through each model to create pseudo-labels
    Ut = []
    for each class c do
        U(c)t = []
        while the number of unique examples in U(c)t < kt|U|/n do
            for each model m do
                Add the next most confident example predicted by m as class c to U(c)t
        Add U(c)t to Ut
    Re-train (warm start) each model on X ∪ Ut until convergence
Train a standard model from scratch on X ∪ UT.

B ADDITIONAL EXPERIMENTS

B.1 EXPERIMENT ORGANIZATION

We now provide the full experimental results used to create the plots in the main body, as well as additional analysis. Specifically, in Appendix B.2 and B.3 we present the performance of individual ensemble schemes for pre-trained and self-trained models respectively. Then, in Appendix B.5 we present the performance of co-training for each combination of feature priors. In Appendix B.7 we analyze the effect that co-training has on model similarity after training. Finally, in Appendix B.8 we evaluate model ensembles on datasets with spurious correlations, and in Appendix B.9 we break down the performance of co-training on the skewed CelebA dataset according to different input attributes.

B.2 FULL PRE-TRAINED ENSEMBLE RESULTS

In Table 3, we reported the best ensemble method for each pair of models trained with different priors on the labeled data. In Table 12, we report the full results over the individual ensembles.

B.3 ENSEMBLING SELF-TRAINED MODELS

In Table 13, we report the best ensemble method for pairs of self-trained models with different priors. In Table 14, we report the full results over the individual ensembles.
We find that, similar to the ensembles of models trained on the labeled data, models with diverse priors gain more from ensembling. However, co-training models with diverse priors together still outperforms ensembling self-trained models.

B.4 STACKED ENSEMBLING

Here we consider an ensembling technique that leverages a validation set. We implement stacking (also called blending) Töscher et al. (2009); Sill et al. (2009), which takes in the outputs of the member models as input and then trains a second model to produce the final prediction. Here, we take the logits of each model in the ensemble and train the secondary model using logistic regression on the validation set for the dataset. We report accuracies of the ensemble on the test set below. We again find that prior diversity is important for the performance of the ensemble.

B.5 SELF-TRAINING AND CO-TRAINING ON STL-10 AND CIFAR-10

B.6 CO-TRAINING WITH VARYING AMOUNTS OF LABELED DATA

In Table 19, we study how the efficacy of combining diverse priors through co-training changes as the number of labeled examples increases for STL-10. As one might expect, when labeled data is sparse, the feature priors learned by the models alone are relatively brittle: thus, leveraging diverse priors against each other on unlabeled data improves generalization. As the number of labeled examples increases, the models with single feature priors learn more reliable prediction rules that can already generalize, so the additional benefit of combining feature priors diminishes. However, even in settings with plentiful data, combining diverse feature priors can aid generalization if there is a spurious correlation in the labeled data (see Section 5).

B.7 CORRELATION BETWEEN THE INDIVIDUAL FEATURE-BIASED MODELS AND THE FINAL STANDARD MODEL

B.8 ENSEMBLES FOR SPURIOUS DATASETS

In Table 21 (full table in Table 22), we ensemble the self-trained priors for the Tinted STL-10 dataset and the CelebA dataset as in Section 5. Both of these datasets have a spurious correlation based on color, which results in a weak Standard and BagNet model. As a result, the ensembles with the Standard or BagNet models do not perform well on the test set. However, in Section 5, we find that co-training in this setting allows the BagNet model to improve when jointly trained with a shape model, thus boosting the final performance.

B.9 BREAKDOWN OF TEST ACCURACY FOR CO-TRAINING ON CELEBA

B.10 WHAT IF THE UNLABELED DATA ALSO CONTAINED THE SPURIOUS CORRELATION?

In Section 5, we assume that the unlabeled data does not contain the spurious correlation present in the labeled data. This is often the case when unlabeled data can be collected through a more diverse process than labeled data (for example, by scraping the web at large scale or by passively collecting data during deployment). This assumption is important: in order to successfully steer models away from the spurious correlation during co-training, the process needs to surface examples which contradict the spurious correlation. However, if the unlabeled data is also heavily skewed, such examples might be rare or non-existent. What happens if the unlabeled data is as heavily skewed as the labeled data? We return to the setting of a spurious association between hair color and gender in CelebA. However, unlike in Section 5, we use an unlabeled dataset that also perfectly correlates hair color and gender – it contains 2,000 non-blond males and 2,000 blond females.
The unlabeled data thus has the same distribution as the labeled data, and contains no examples that reject the spurious correlation (blond males or non-blond females).

Self-Training: Since the unlabeled data follows the spurious correlation between hair color and gender, the standard and BagNet models almost perfectly pseudo-label the unlabeled data. Thus, they are simply increasing the number of examples in the training dataset while maintaining the same overall distribution. Self-training thus does not change the accuracy of models with these priors significantly. In contrast, in the setting of Section 5, there were examples in the unlabeled data which did not align with the spurious correlation (blond males and non-blond females). Since they relied mostly on hair color, the standard and BagNet models actively mislabeled these examples (i.e., by labeling a blond male as female). Training on these erroneous pseudo-labels actively suppressed any features that were not hair color, causing the standard and BagNet models to perform worse after self-training.

Co-Training: In contrast, when performing co-training with the Canny and BagNet priors, the Canny model (which cannot detect hair color) will make mistakes on the unlabeled dataset. These mistakes are inconsistent with a reliance on hair color: due to this regularization, the BagNet’s accuracy improves from 69.35% to 76.52%. Overall, though the gain is not as significant as in the setting with a balanced unlabeled dataset, the Canny + BagNet co-trained model can mitigate the pitfalls of the BagNet’s reliance on hair color and outperform even the Canny self-trained model.
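Returning to the stacked ensembling of Appendix B.4, a minimal sketch with scikit-learn is shown below; the logit arrays are assumed to be precomputed on the validation and test sets, and this is an illustrative reconstruction rather than the authors' code.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_stacker(val_logits: list, val_labels: np.ndarray) -> LogisticRegression:
    """Train the second-stage model on the validation set: inputs are the
    concatenated logits of the ensemble members, targets are the true labels."""
    features = np.concatenate(val_logits, axis=1)   # (num_val, num_models * num_classes)
    stacker = LogisticRegression(max_iter=1000)
    return stacker.fit(features, val_labels)

def stacked_predict(stacker: LogisticRegression, test_logits: list) -> np.ndarray:
    return stacker.predict(np.concatenate(test_logits, axis=1))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    val = [rng.normal(size=(500, 10)) for _ in range(2)]    # two members, 10 classes (stand-ins)
    test = [rng.normal(size=(8000, 10)) for _ in range(2)]
    labels = rng.integers(0, 10, size=500)
    model = fit_stacker(val, labels)
    print(stacked_predict(model, test).shape)
```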
1. What is the main contribution of the paper regarding model generalization? 2. What are the strengths of the paper, particularly in the experimental section? 3. What are the weaknesses of the paper regarding its clarity and definitions? 4. How does the reviewer interpret the first contribution of the paper regarding feature priors? 5. Are there any questions or concerns regarding the limited exploration of feature priors in the paper?
Summary Of The Paper Review
Summary Of The Paper The goal of the paper is to improve model generalisation. The authors consider feature priors as distinct perspectives on the data. The results show that models trained with diverse sets of various feature priors have less overlapping modes and are more efficiently combined. Review Strengths. The experimental part is relatively clear. Weaknesses. The paper is not very clearly written. First, I would appreciate some (even informal) definition of a feature prior. Later in the text, the co-training is mentioned, and it seems that using different priors = co-training using different views. Is it the idea of the paper? I did not really understand the first contribution: "We demonstrate that training models with diverse feature priors results in them making mistakes on different parts of the data distribution, even if their overall accuracy is similar." What is meant? As far as I understand, there are two "priors" only explored in the paper: shape and texture.
ICLR
Title Efficiently Computing Nash Equilibria in Adversarial Team Markov Games Abstract Computing Nash equilibrium policies is a central problem in multi-agent reinforcement learning that has received extensive attention both in theory and in practice. However, in light of computational intractability barriers in general-sum games, provable guarantees have been thus far either limited to fully competitive or cooperative scenarios, or impose strong assumptions that are difficult to meet in most practical applications. In this work, we depart from those prior results by investigating infinite-horizon adversarial team Markov games, a natural and well-motivated class of games in which a team of identically-interested players— in the absence of any explicit coordination or communication—is competing against an adversarial player. This setting allows for a unifying treatment of zero-sum Markov games and Markov potential games, and serves as a step to model more realistic strategic interactions that feature both competing and cooperative interests. Our main contribution is the first algorithm for computing stationary ε-approximate Nash equilibria in adversarial team Markov games with computational complexity that is polynomial in all the natural parameters of the game, as well as 1/ε. The proposed algorithm is based on performing independent policy gradient steps for each player in the team, in tandem with best responses from the side of the adversary; in turn, the policy for the adversary is then obtained by solving a carefully constructed linear program. Our analysis leverages non-standard techniques to establish the KKT optimality conditions for a nonlinear program with nonconvex constraints, thereby leading to a natural interpretation of the induced Lagrange multipliers. 1 INTRODUCTION Multi-agent reinforcement learning (MARL) offers a principled framework for analyzing competitive interactions in dynamic and stateful environments in which agents’ actions affect both the state of the world and the rewards of the other players. Strategic reasoning in such complex multi-agent settings has been guided by game-theoretic principles, leading to many recent landmark results in benchmark domains in AI (Bowling et al., 2015; Silver et al., 2017; Vinyals et al., 2019; Moravčı́k et al., 2017; Brown & Sandholm, 2019; 2018; Brown et al., 2020; Perolat et al., 2022). Most of these remarkable advances rely on scalable and decentralized algorithms for computing Nash equilibria (Nash, 1951)—a standard game-theoretic notion of rationality—in two-player zero-sum games. Nevertheless, while single-agent RL has enjoyed rapid theoretical progress over the last few years (e.g., see (Jin et al., 2018; Agarwal et al., 2020; Li et al., 2021; Luo et al., 2019; Sidford et al., 2018), and references therein), a comprehensive understanding of the multi-agent landscape still remains elusive. Indeed, provable guarantees for efficiently computing Nash equilibria have been thus far limited to either fully competitive settings, such as two-player zero-sum games (Daskalakis et al., 2020; Wei et al., 2021; Sayin et al., 2021; Cen et al., 2021; Sayin et al., 2020; Condon, 1993), or environments in which agents are striving to coordinate towards a common global objective (Claus ∗Correspondence to [email protected]. & Boutilier, 1998; Wang & Sandholm, 2002; Leonardos et al., 2021; Ding et al., 2022; Zhang et al., 2021b; Chen et al., 2022; Maheshwari et al., 2022; Fox et al., 2022). 
However, many real-world applications feature both shared and competing interests between the agents. Efficient algorithms for computing Nash equilibria in such settings are much more scarce, and typically impose restrictive assumptions that are difficult to meet in most applications (Hu & Wellman, 2003; Bowling, 2000). In fact, even in stateless two-player (normal-form) games, computing approximate Nash equilibria is computationally intractable (Daskalakis et al., 2009; Rubinstein, 2017; Chen et al., 2009; Etessami & Yannakakis, 2010)—subject to well-believed complexity-theoretic assumptions. As a result, it is common to investigate equilibrium concepts that are more permissive than Nash equilibria, such as coarse correlated equilibria (CCE) (Aumann, 1974; Moulin & Vial, 1978). Unfortunately, recent work has established strong lower bounds for computing even approximate (stationary) CCEs in turn-based stochastic two-player games (Daskalakis et al., 2022; Jin et al., 2022). Those negative results raise a central question:

Are there natural multi-agent environments incorporating both competing and shared interests for which we can establish efficient algorithms for computing (stationary) Nash equilibria? (⋆)

Our work makes concrete progress in this fundamental direction. Specifically, we establish the first efficient algorithm leading to Nash equilibria in adversarial team Markov games, a well-motivated and natural multi-agent setting in which a team of agents with a common objective is facing a competing adversary.

1.1 OUR RESULTS

Before we state our main result, let us first briefly introduce the setting of adversarial team Markov games; a more precise description is deferred to Section 2.1. To address Question (⋆), we study an infinite-horizon Markov (stochastic) game with a finite state space S in which a team of agents NA := [n] with a common objective function is competing against a single adversary with opposing interests. Every agent k ∈ [n] has a (finite) set of available actions Ak, while B represents the adversary’s set of actions. We will also let γ ∈ [0, 1) be the discounting factor. Our goal will be to compute an (approximate) Nash equilibrium; that is, a strategy profile so that no player can improve via a unilateral deviation (see Definition 2.1). In this context, our main contribution is the first polynomial-time algorithm for computing Nash equilibria in adversarial team Markov games:

Theorem 1.1 (Informal). There is an algorithm (IPGMAX) that, for any ϵ > 0, computes an ϵ-approximate stationary Nash equilibrium in adversarial team Markov games, and runs in time poly(|S|, ∑_{k=1}^{n} |Ak| + |B|, 1/(1 − γ), 1/ϵ).

A few remarks are in order. First, our guarantee significantly extends and unifies prior results that only applied to either two-player zero-sum Markov games or to Markov potential games; both of those settings can be cast as special cases of adversarial team Markov games (see Section 2.3). Further, the complexity of our algorithm, specified in Theorem 1.1, scales only with ∑_{k∈NA} |Ak| instead of ∏_{k∈NA} |Ak|, bypassing what is often referred to as the curse of multi-agents (Jin et al., 2021). Indeed, viewing the team as a single “meta-player” would induce an action space of size ∏_{k∈NA} |Ak|, which is exponential in n even if each agent in the team has only two actions.
In fact, our algorithm operates without requiring any (explicit) form of coordination or communication between the members of the team (beyond the structure of the game), a feature that has been motivated in practical applications (von Stengel & Koller, 1997). Namely, scenarios in which communication or coordination between the members of the team is either overly expensive or even infeasible; for an in-depth discussion regarding this point we refer to (Schulman & Vazirani, 2017).

1.2 OVERVIEW OF TECHNIQUES

To establish Theorem 1.1, we propose a natural and decentralized algorithm we refer to as Independent Policy GradientMax (IPGMAX). IPGMAX works in turns. First, each player in the team performs one independent policy gradient step on their value function with an appropriately selected learning rate η > 0. In turn, the adversary best responds to the current policy of the team. This exchange is repeated for a sufficiently large number of iterations T. Finally, IPGMAX includes an auxiliary subroutine, namely AdvNashPolicy(), which computes the Nash policy of the adversary; this will be justified by Proposition 1.1, which we describe below.

Our analysis builds on the techniques of Lin et al. (2020)—developed for the saddle-point problem min_{x∈X} max_{y∈Y} f(x, y)—for characterizing GDMAX. Specifically, GDMAX consists of performing gradient descent steps on the function ϕ(x) := max_{y∈Y} f(x, y). Lin et al. (2020) showed that GDMAX converges to a point (x̂, y*(x̂)) such that x̂ is an approximate first-order stationary point of the Moreau envelope (see Definition 3.1) of ϕ(x), while y*(x̂) is a best response to x̂. Now if f(x, ·) is strongly concave, one can show (by Danskin’s theorem) that (x̂, y*(x̂)) is an approximate first-order stationary point of f. However, our setting introduces further challenges since the value function Vρ(πteam, πadv) is nonconvex-nonconcave. For this reason, we take a more refined approach. We first show in Proposition 3.1 that IPGMAX is guaranteed to converge to a policy profile (π̂team, ·) such that π̂team is an ϵ-nearly stationary point of max_{πadv} Vρ(πteam, πadv). Then, the next key step and the crux of the analysis is to show that π̂team can be extended to an O(ϵ)-approximate Nash equilibrium policy:

Proposition 1.1 (Informal). If π̂team is an ϵ-nearly stationary point of max_{πadv} Vρ(πteam, πadv), there exists a policy for the adversary π̂adv so that (π̂team, π̂adv) is an O(ϵ)-approximate Nash equilibrium.

In the special case of normal-form games, a similar extension theorem was recently obtained by Anagnostides et al. (2023). In particular, that result was derived by employing fairly standard linear programming techniques. In contrast, our more general setting introduces several new challenges, not least due to the nonconvexity-nonconcavity of the objective function. Indeed, our analysis leverages more refined techniques stemming from nonlinear programming. More precisely, while we make use of standard policy gradient properties, similar to the single-agent MDP setting (Agarwal et al., 2021; Xiao, 2022), our analysis does not rely on the so-called gradient-dominance property (Bhandari & Russo, 2019), as that property does not hold in a team-wise sense. Instead, inspired by an alternative proof of Shapley’s theorem (Shapley, 1953) for two-person zero-sum Markov games (Filar & Vrieze, 2012, Chapter 3), we employ mathematical programming. One of the central challenges is that the induced nonlinear program has a set of nonconvex constraints.
As such, even the existence of (nonnegative) Lagrange multipliers satisfying the KKT conditions is not guaranteed, thereby necessitating more refined analysis techniques. To this end, we employ the Arrow-Hurwiz-Uzawa constraint qualification (Theorem A.1) in order to establish that the local optima are contained in the set of KKT points (Corollary B.1). Then, we leverage the structure of adversarial team Markov games to characterize the induced Lagrange multipliers, showing that a subset of these can be used to establish Proposition 1.1; incidentally, this also leads to an efficient algorithm for computing a (near-)optimal policy of the adversary. Finally, we also remark that controlling the approximation error—an inherent barrier under policy gradient methods—in Proposition 1.1 turns out to be challenging. We bypass this issue by constructing “relaxed” programs that incorporate some imprecision in the constraints. A more detailed overview of our algorithm and the analysis is given in Section 3. 2 PRELIMINARIES In this section, we introduce the relevant background and our notation. Section 2.1 describes adversarial team Markov games. Section 2.2 then defines some key concepts from multi-agent MDPs, while Section 2.3 describes a generalization of adversarial team Markov games, beyond identicallyinterested team players, allowing for a richer structure in the utilities of the team—namely, adversarial Markov potential games. Notation. We let [n] := {1, . . . , n}. We use superscripts to denote the (discrete) time index, and subscripts to index the players. We use boldface for vectors and matrices; scalars will be denoted by lightface variables. We denote by ∥ ·∥ := ∥ ·∥2 the Euclidean norm. For simplicity in the exposition, we may sometimes use theO(·) notation to suppress dependencies that are polynomial in the natural parameters of the game; precise statements are given in the Appendix. For the convenience of the reader, a comprehensive overview of our notation is given in A.3. 2.1 ADVERSARIAL TEAM MARKOV GAMES An adversarial team Markov game (or an adversarial team stochastic game) is the Markov game extension of static, normal-form adversarial team games (Von Stengel & Koller, 1997). The game is assumed to take place in an infinite-horizon discounted setting in which a team of identicallyinterested agents gain what the adversary loses. Formally, the game G is represented by a tuple G = (S,N ,A,B, r,P, γ, ρ) whose components are defined as follows. • S is a finite and nonempty set of states, with cardinality S := |S|; • N is the set of players, partitioned into a set of n team agentsNA := [n] and a single adversary • Ak is the action space of each player in the team k ∈ [n], so that A :=×k∈[n]Ak, while B is the action space of the adversary. We also let Ak := |Ak| and B := |B|;1 • r : S ×A×B → (0, 1) is the (deterministic) instantaneous reward function2 representing the (normalized) payoff of the adversary, so that for any (s,a, b) ∈ S ×A× B, r(s,a, b) + n∑ k=1 rk(s,a, b) = 0, (1) and for any k ∈ [n], rk(s,a, b) = rteam(s,a, b). (2) • P : S ×A×B → ∆(S) is the transition probability function, so that P(s′|s,a, b) denotes the probability of transitioning to state s′ ∈ S when the current state is s ∈ S under the action profile (a, b) ∈ A× B; • γ ∈ [0, 1) is the discount factor; and • ρ ∈ ∆(S) is the initial state distribution over the state space. We will assume that ρ is full- support, meaning that ρ(s) > 0 for all s ∈ S. 
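As a toy illustration of this definition—and of the best-response step the adversary performs in IPGMAX (cf. Section 1.2)—the sketch below builds a small random adversarial team Markov game and computes the adversary's best response to a fixed team policy by value iteration on the induced single-agent MDP. It flattens the team's joint action space into a single axis for simplicity and is not the construction used in the paper.

```python
import numpy as np

def random_game(S=4, A=3, B=2, gamma=0.9, seed=0):
    """Toy adversarial team Markov game with the team's joint action space flattened to size A.
    r[s, a, b] is the adversary's reward in (0, 1); the team receives -r (zero-sum)."""
    rng = np.random.default_rng(seed)
    r = rng.uniform(0.01, 0.99, size=(S, A, B))
    P = rng.uniform(size=(S, A, B, S))
    P /= P.sum(axis=-1, keepdims=True)           # valid transition kernels
    rho = np.full(S, 1.0 / S)                     # full-support initial distribution
    return r, P, gamma, rho

def adversary_best_response(r, P, gamma, x, tol=1e-8):
    """Value iteration on the single-agent MDP the adversary faces when the team
    plays the fixed stationary policy x[s, a]."""
    S, A, B, _ = P.shape
    r_adv = np.einsum("sa,sab->sb", x, r)         # E_{a ~ x_s}[ r(s, a, b) ]
    P_adv = np.einsum("sa,sabt->sbt", x, P)       # E_{a ~ x_s}[ P(s' | s, a, b) ]
    V = np.zeros(S)
    while True:
        Q = r_adv + gamma * P_adv @ V             # Q[s, b]
        V_new = Q.max(axis=1)
        if np.max(np.abs(V_new - V)) < tol:
            break
        V = V_new
    greedy = Q.argmax(axis=1)                     # deterministic best response
    y = np.eye(B)[greedy]
    return y, V

if __name__ == "__main__":
    r, P, gamma, rho = random_game()
    x = np.full((4, 3), 1.0 / 3)                  # uniform team policy
    y, V = adversary_best_response(r, P, gamma, x)
    print("best-response policy:\n", y, "\nadversary value from rho:", rho @ V)
```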
In other words, an adversarial team Markov game is a subclass of general-sum infinite-horizon multi-agent discounted MDPs under the restriction that all but a single (adversarial) player have identical interests (see (2)), and the game is globally zero-sum—in the sense of (1). As we point out in Section 2.3, (2) can be relaxed in order to capture (adversarial) Markov potential games (Definition 2.2), without qualitatively altering our results. 2.2 POLICIES, VALUE FUNCTION, AND NASH EQUILIBRIA Policies. A stationary—that is, time-invariant—policy πk for an agent k is a function mapping a given state to a distribution over available actions, πk : S ∋ s 7→ πk(·|s) ∈ ∆(Ak). We will say that πk is deterministic if for every state there is some action that is selected with probability 1 under policy πk. For convenience, we will let Πteam : S → ∆(A) and Πadv : S → ∆(B) denote the policy space for the team and the adversary respectively. We may also write Π : S → ∆(A)×∆(B) to denote the joint policy space of all agents. Direct Parametrization. Throughout this paper we will assume that players employ direct policy parametrization. That is, for each player k ∈ [n], we let Xk := ∆(Ak)S and πk = xk so that xk,s,a = πk(a|s). Similarly, for the adversary, we let Y := ∆(B)S and πadv = y so that ys,a = πadv(a|s). (Extending our results to other policy parameterizations, such as soft-max (Agarwal et al., 2021), is left for future work.) Value Function. The value function Vs : Π ∋ (π1, . . . ,πn,πadv) 7→ R is defined as the expected cumulative discounted reward received by the adversary under the joint policy (πteam,πadv) ∈ Π and the initial state s ∈ S, where πteam := (π1, . . . ,πn). In symbols, Vs(πteam,πadv) := E(πteam,πadv) [ ∞∑ t=0 γtr(s(t),a(t), b(t)) ∣∣s0 = s] , (3) 1To ease the notation, and without any essential loss of generality, we will assume throughout that the action space does not depend on the state. 2Assuming that the reward is positive is without any loss of generality (see Claim D.6). where the expectation is taken over the trajectory distribution induced by πteam and πadv. When the initial state is drawn from a distribution ρ, the value function takes the form Vρ(πteam,πadv) := Es∼ρ [ Vs(πteam,πadv) ] . Nash Equilibrium. Our main goal is to compute a joint policy profile that is an (approximate) Nash equilibrium, a standard equilibrium concept in game theory formalized below. Definition 2.1 (Nash equilibrium). A joint policy profile ( π⋆team,π ⋆ adv ) ∈ Π is an ε-approximate Nash equilibrium, for ϵ ≥ 0, if{ Vρ(π ⋆ team,π ⋆ adv) ≤ Vρ((π′k,π⋆−k),π⋆adv ) + ε, ∀k ∈ [n],∀π′k ∈ Πk, Vρ(π ⋆ team,π ⋆ adv) ≥ Vρ(π⋆team,π′adv)− ε, ∀π′adv ∈ Πadv. That is, a joint policy profile is an (approximate) Nash equilibrium if no unilateral deviation from a player can result in a non-negligible—more than additive ϵ—improvement for that player. Nash equilibria always exist in multi-agent stochastic games (Fink, 1964); our main result implies an (efficient) constructive proof of that fact for the special case of adversarial team Markov games. 2.3 ADVERSARIAL MARKOV POTENTIAL GAMES A recent line of work has extended the fundamental class of potential normal-form games (Monderer & Shapley, 1996) to Markov potential games (Marden, 2012; Macua et al., 2018; Leonardos et al., 2021; Ding et al., 2022; Zhang et al., 2021b; Chen et al., 2022; Maheshwari et al., 2022; Fox et al., 2022). 
Importantly, our results readily carry over even if players in the team are not necessarily identically interested, but instead, there is some underlying potential function for the team; we will refer to such games as adversarial Markov potential games, formally introduced below. Definition 2.2. An adversarial Markov potential game G = (S,N ,A,B, {rk}k∈[n],P, γ, ρ) is a multi-agent discounted MDP that shares all the properties of adversarial team Markov games (Section 2.1), with the exception that (2) is relaxed in that there exists a potential function Φs, ∀s ∈ S , such that for any πadv ∈ Πadv, Φs(πk,π−k;πadv)− Φs(π′k,π−k;πadv) = Vk,s(πk,π−k;πadv)− Vk,s(π′k,π−k;πadv), for every agent k ∈ [n], every state s ∈ S, and all policies πk,πk′ ∈ Πk and π−k ∈ Π−k. 3 MAIN RESULT In this section, we sketch the main pieces required in the proof of our main result, Theorem 1.1. We begin by describing our algorithm in Section 3.1. Next, in Section 3.2, we characterize the strategy x̂ ∈ X for the team returned by IPGMAX, while Section 3.3 completes the proof by establishing that x̂ can be efficiently extended to an approximate Nash equilibrium. The formal proof of Theorem 1.1 is deferred to the Appendix. 3.1 OUR ALGORITHM In this subsection, we describe in detail our algorithm for computing ϵ-approximate Nash equilibria, IPGMAX, in adversarial team Markov games (Algorithm 1). IPGMAX takes as input a precision parameter ϵ > 0 (Line 1) and an initial strategy for the team (x(0)1 , . . . ,x (0) n ) = x(0) ∈ X := ×nk=1 Xk (Line 2). The algorithm then proceeds in two phases: • In the first phase the team players are performing independent policy gradient steps (Line 7) with learning rate η, as defined in Line 3, while the adversary is then best responding to their joint strategy (Line 6). Both of these steps can be performed in polynomial time under oracle access to the game (see Remark 2). This process is repeated for T iterations, with T as defined in Line 4. We note that Proj (·) in Line 7 stands for the Euclidean projection, ensuring that each player selects a valid strategy. The first phase is completed in Line 9, where we set x̂ according to the iterate at time t⋆, for some 0 ≤ t⋆ ≤ T − 1. As we explain in Section 3.2, selecting uniformly at random is a practical and theoretically sound way of setting t⋆. • In the second phase we are fixing the strategy of the team x̂ ∈ X , and the main goal is to determine a strategy ŷ ∈ Y so that (x̂, ŷ) is an O(ϵ)-approximate Nash equilibrium. This is accomplished in the subroutine AdvNashPolicy(x̂), which consists of solving a linear program—from the perspective of the adversary—that has polynomial size. Our analysis of the second phase of IPGMAX can be found in Section 3.3. It is worth stressing that under gradient feedback, IPGMAX requires no communication or coordination between the players in the team. Algorithm 1 Independent Policy GradientMax (IPGMAX) 1: Precision ϵ > 0 2: Initial Strategy x(0) ∈ X 3: Learning rate η := ϵ 2(1−γ)9 32S4D2( ∑n k=1 Ak+B) 3 4: Number of iterations T := 512S8D4( ∑n k=1 Ak+B) 4 ϵ4(1−γ)12 5: for t← 1, 2, . . . 
, T do 6: y(t) ← argmaxy∈Y Vρ ( x(t−1),y ) 7: x(t)k ← ProjXk ( x (t−1) k − η∇xkVρ ( x(t−1),y(t) )) ▷ for all agents i ∈ [n] 8: end for 9: x̂← x(t⋆) 10: ŷ← AdvNashPolicy(x̂) ▷ defined in Algorithm 2 11: return (x̂, ŷ) 3.2 ANALYZING INDEPENDENT POLICY GRADIENTMAX In this subsection, we establish that IPGMAX finds an ϵ-nearly stationary point x̂ of ϕ(x) := maxy∈Y Vρ(x,y) in a number of iterations T that is polynomial in the natural parameters of the game, as well as 1/ϵ; this is formalized in Proposition 3.1. First, we note the by-now standard property that the value function Vρ is L-Lipschitz continuous and ℓ-smooth, where L := √∑n k=1 Ak+B (1−γ)2 and ℓ := 2( ∑n k=1 Ak+B) (1−γ)3 (Lemma C.1). An important observation for the analysis is that IPGMAX is essentially performing gradient descent steps on ϕ(x). However, the challenge is that ϕ(x) is not necessarily differentiable; thus, our analysis relies on the Moreau envelope of ϕ, defined as follows. Definition 3.1 (Moreau Envelope). Let ϕ(x) := maxy∈Y Vρ(x,y). For any 0 < λ < 1ℓ the Moreau envelope ϕλ of ϕ is defined as ϕλ(x) := min x′∈X { ϕ(x′) + 1 2λ ∥x− x′∥2 } . (4) We will let λ := 12ℓ . Crucially, the Moreau envelope ϕλ, as introduced in (4), is ℓ-strongly convex; this follows immediately from the fact that ϕ(x) is ℓ-weakly convex, in the sense that ϕ(x) + ℓ2∥x∥ 2 is convex (see Lemma A.1). A related notion that will be useful to measure the progress of IPGMAX is the proximal mapping of a function f , defined as proxf : X ∋ x 7→ argminx′∈X { f(x′) + 12∥x ′ − x∥2 } ; the proximal point of ϕ/(2ℓ) is well-defined since ϕ is ℓ-weakly convex (Proposition A.1). We are now ready to state the convergence guarantee of IPGMAX. Proposition 3.1. Consider any ϵ > 0. If η = 2ϵ2(1− γ) and T = (1−γ) 4 8ϵ4( ∑n k=1 Ak+B) 2 , there exists an iterate t⋆, with 0 ≤ t⋆ ≤ T − 1, such that ∥∥x(t⋆) − x̃(t⋆)∥∥ 2 ≤ ϵ, where x̃(t⋆) := proxϕ/(2ℓ)(x(t ⋆)). The proof relies on the techniques of Lin et al. (2020), and it is deferred to Appendix C. The main takeaway is that O(1/ϵ4) iterations suffice in order to reach an ϵ-nearly stationary point of ϕ— in the sense that it is ϵ-far in ℓ2 distance from its proximal point. A delicate issue here is that Proposition 3.1 only gives a best-iterate guarantee, and identifying that iterate might introduce a substantial computational overhead. To address this, we also show in Corollary C.1 that by randomly selecting ⌈log(1/δ)⌉ iterates over the T repetitions of IPGMAX, we are guaranteed to recover an ϵnearly stationary point with probability at least 1− δ, for any δ > 0. 3.3 EFFICIENT EXTENSION TO NASH EQUILIBRIA In this subsection, we establish that any ϵ-nearly stationary point x̂ of ϕ, can be extended to an O(ϵ)-approximate Nash equilibrium (x̂, ŷ) for any adversarial team Markov game, where ŷ ∈ Y is the strategy for the adversary. Further, we show that ŷ can be computed in polynomial time through a carefully constructed linear program. This “extendibility” argument significantly extends a seminal characterization of Von Stengel & Koller (1997), and it is the crux in the analysis towards establishing our main result, Theorem 1.1. To this end, the techniques we leverage are more involved compared to (Von Stengel & Koller, 1997), and revolve around nonlinear programming. Specifically, in the spirit of (Filar & Vrieze, 2012, Chapter 3), the starting point of our argument is the following nonlinear program with variables (x,v) ∈ X × RS : (Q-NLP) min ∑ s∈S ρ(s)v(s) + ℓ∥x− x̂∥2 s.t. 
r(s,x, b) + γ ∑ s′∈S P(s′|s,x, b)v(s′) ≤ v(s), ∀(s, b) ∈ S × B; (Q1) x⊤k,s1 = 1, ∀(k, s) ∈ [n]× S; and (Q2) xk,s,a ≥ 0, ∀k ∈ [n], (s, a) ∈ S ×Ak. (Q3) Here, we have overloaded notation so that r(s,x, b) := Ea∼xs [r(s,a, b] and P(s′|s,x, b)) := Ea∼xs [P(s′|s,a, b)]. For a fixed strategy x ∈ X for the team, this program describes the (discounted) MDP faced by the adversary. A central challenge in this formulation lies in the nonconvexity-nonconcavity of the constraint functions, witnessed by the multilinear constraint (Q1). Importantly, unlike standard MDP formulations, we have incorporated a quadratic regularizer in the objective function; this term ensures the following property. Proposition 3.2. For any fixed x ∈ X , there is a unique optimal solution v⋆ to (Q-NLP). Further, if x̃ := proxϕ/(2ℓ)(x̂) and ṽ ∈ RS is the corresponding optimal, then (x̃, ṽ) is the global optimum of (Q-NLP). The uniqueness of the associated value vector is a consequence of Bellman’s optimality equation, while the optimality of the proximal point follows by realizing that (Q-NLP) is an equivalent formulation of the proximal mapping. These steps are formalized in Appendix B.2. Having established the optimality of (x̃, ṽ), the next step is to show the existence of nonnegative Lagrange multipliers satisfying the KKT conditions (recall Definition A.2); this is non-trivial due to the nonconvexity of the feasibility set of (Q-NLP). To do so, we leverage the so-called Arrow-Hurwicz-Uzawa constraint qualification (Theorem A.1)—a form of “regularity condition” for a nonconvex program. Indeed, in Lemma B.3 we show that any feasible point of (Q-NLP) satisfies that constraint qualification, thereby implying the existence of nonnegative Lagrange multipliers satisfying the KKT conditions for any local optimum (Corollary B.1), and in particular for (x̃, ṽ): Proposition 3.3. There exist nonnegative Lagrange multipliers satisfying the KKT conditions at (x̃, ṽ). Now the upshot is that a subset of those Lagrange multipliers λ̃ ∈ RS×B can be used to establish the extendibility of x̂ to a Nash equilibrium. Indeed, our next step makes this explicit: We construct a linear program whose sole goal is to identify such multipliers, which in turn will allow us to efficiently compute an admissible strategy for the adversary ŷ. However, determining λ̃ exactly seems too ambitious. For one, IPGMAX only granted us access to x̂, but not to x̃. On the other hand, the Lagrange multipliers λ̃ are induced by (x̃, ṽ). To address this, the constraints of our linear program are phrased in terms of (x̂, v̂), instead of (x̃, ṽ), while to guarantee feasibility we appropriately relax all the constraints of the linear program; this relaxation does not introduce a large error since ∥x̂ − x̃∥ ≤ ϵ (Proposition 3.1), and the underlying constraint functions are Lipschitz continuous—with constants that depend favorably on the game G; we formalize that in Lemma B.4. This leads to our main theorem, summarized below (see Theorem B.1 for a precise statement). Theorem 3.1. Let x̂ be an ϵ-nearly stationary point of ϕ. There exist a linear program, (LPadv), such that: (i) It has size that is polynomial in G, and all the coefficients depend on the (single-agent) MDP faced by the adversary when the team is playing a fixed strategy x̂; and (ii) It is always feasible, and any solution induces a strategy ŷ such that (x̂, ŷ) is an O(ϵ)approximate Nash equilibrium. 
The proof of this theorem carefully leverages the structure of adversarial team Markov games, along with the KKT conditions we previously established in Proposition 3.3. The algorithm for computing the policy for the adversary is summarized in Algorithm 2 of Appendix B. A delicate issue with Theorem 3.1, and in particular with the solution of (LPadv), is whether one can indeed efficiently simulate the environment faced by the adversary. Indeed, in the absence of any structure, determining the coefficients of the linear program could scale exponentially with the number of players; this is related to a well-known issue in computational game theory, revolving around the exponential blow-up of the input space as the number of players increases (Papadimitriou & Roughgarden, 2008). As is standard, we bypass this by assuming access to natural oracles that ensure we can efficiently simulate the environment faced by the adversary (Remark 2).
4 FURTHER RELATED WORK
In this section, we highlight certain key lines of work that relate to our results in the context of adversarial team Markov games. We stress that the related literature on multi-agent reinforcement learning (MARL) is too vast to even attempt to faithfully cover here. For some excellent recent overviews of the area, we refer the interested reader to (Yang & Wang, 2020; Zhang et al., 2021a) and the extensive lists of references therein.
Team Games. The study of team games has been a prolific topic of research in economic theory and group decision theory for many decades; see, e.g., (Marschak, 1955; Groves, 1973; Radner, 1962; Ho & Chu, 1972). A more modern key reference point to our work is the seminal paper of Von Stengel & Koller (1997) that introduced the notion of team-maxmin equilibrium (TME) in the context of normal-form games. A TME profile is a mixed strategy for each team member so that the minimal expected team payoff over all possible responses of the adversary—who potentially knows the play of the team—is the maximum possible. While TMEs enjoy a number of compelling properties, being the optimal equilibria for the team given the lack of coordination, they suffer from computational intractability even in 3-player team games (Hansen et al., 2008; Borgs et al., 2010).³ Nevertheless, practical algorithms have been recently proposed and studied for computing them in multiplayer games (Zhang & An, 2020a;b; Basilico et al., 2017). It is worth pointing out that team equilibria are also useful for extensive-form two-player zero-sum games where one of the players has imperfect recall (Piccione & Rubinstein, 1997). The intractability of TME has motivated the study of a relaxed equilibrium concept that incorporates a correlation device (Farina et al., 2018; Celli & Gatti, 2018; Basilico et al., 2017; Zhang & An, 2020b; Zhang & Sandholm, 2021; Zhang et al., 2022b; Carminati et al., 2022; Zhang et al., 2022a); namely, TMECor. In TMECor players are allowed to select correlated strategies. Despite the many compelling aspects of TMECor as a solution concept in team games, even ex ante coordination or correlated randomization—beyond the structure of the game itself—can be overly expensive or even infeasible in many applications (Von Stengel & Koller, 1997). Further, even TMECor is NP-hard to compute (in the worst case) for imperfect-information extensive-form games (EFGs) (Chu & Halpern, 2001), although fixed-parameter-tractable (FPT) algorithms have recently emerged for natural classes of EFGs (Zhang & Sandholm, 2021; Zhang et al., 2022b).
³Hansen et al. (2008); Borgs et al. (2010) establish FNP-hardness and inapproximability for general 3-player games, but their argument readily applies to 3-player team games as well.
On the other hand, the computational aspects of the standard Nash equilibrium (NE) in adversarial team games are not well understood, even in normal-form games. In fact, it is worth pointing out that Von Neumann’s celebrated minimax theorem (von Neumann & Morgenstern, 2007) does not apply in team games, rendering traditional techniques employed in two-player zero-sum games of little use. Indeed, Schulman & Vazirani (2017) provided a precise characterization of the duality gap between the two teams based on the natural parameters of the problem, while Kalogiannis et al. (2021) showed that standard no-regret learning dynamics such as gradient descent and optimistic Hedge could fail to stabilize to mixed NE even in binary-action adversarial team games. Finally, we should also point out that although from a complexity-theoretic standpoint our main result (Theorem 1.1) establishes a fully polynomial-time approximation scheme (FPTAS), since the dependence on the approximation error ϵ is poly(1/ϵ), an improvement to poly(log(1/ϵ)) is precluded even in normal-form games unless CLS ⊆ P (an unlikely event); this follows as adversarial team games capture potential games (Kalogiannis et al., 2021), wherein computing mixed Nash equilibria is known to be complete for the class CLS = PPAD ∩ PLS (Babichenko & Rubinstein, 2021).
Multi-agent RL. Computing Nash equilibria has been a central endeavor in multi-agent RL. While some algorithms have been proposed, perhaps most notably the Nash-Q algorithm (Hu & Wellman, 1998; 2003), convergence to Nash equilibria is only guaranteed under severe restrictions on the game. More broadly, the long-term behavior of independent policy gradient methods (Schulman et al., 2015) is still not well understood. Before all else, from the impossibility result of Hart & Mas-Colell, universal convergence to Nash equilibria is precluded even for normal-form games; this is aligned with the computational intractability (PPAD-completeness) of Nash equilibria even in two-player general-sum games (Daskalakis et al., 2009; Chen et al., 2009). Surprisingly, recent work has also established hardness results in turn-based stochastic games, rendering even the weaker notion of (stationary) CCEs intractable (Daskalakis et al., 2022; Jin et al., 2022). As a result, the existing literature has inevitably focused on specific classes of games, such as Markov potential games (Leonardos et al., 2021; Ding et al., 2022; Zhang et al., 2021b; Chen et al., 2022; Maheshwari et al., 2022; Fox et al., 2022) or two-player zero-sum Markov games (Daskalakis et al., 2020; Wei et al., 2021; Sayin et al., 2021; Cen et al., 2021; Sayin et al., 2020). As we pointed out earlier, adversarial team Markov games can unify and extend those settings (Section 2.3). More broadly, identifying multi-agent settings for which Nash equilibria are provably efficiently computable is recognized as an important open problem in the literature (see, e.g., (Daskalakis et al., 2020)), boiling down to one of the main research questions of this paper (Question (⋆)). We also remark that certain guarantees for convergence to Nash equilibria have been recently obtained in a class of symmetric games (Emmons et al., 2022)—including symmetric team games.
Finally, weaker solution concepts relaxing either the Markovian or the stationarity properties have also recently attracted attention (Daskalakis et al., 2022; Jin et al., 2021).
5 CONCLUSIONS
Our main contribution in this paper is the first polynomial-time algorithm for computing (stationary) Nash equilibria in adversarial team Markov games, an important class of games in which a team of uncoordinated but identically-interested players is competing against an adversarial player. We argued that this setting serves as a step towards modeling more realistic multi-agent applications that feature both competing and cooperative interests. There are many interesting directions for future research. One caveat of our main algorithm (IPGMAX) is that it requires a separate subroutine for computing the optimal policy of the adversary. It is plausible that a carefully designed two-timescale policy gradient method can efficiently reach a Nash equilibrium, which would yield fully model-free algorithms for adversarial team Markov games by obviating the need to solve a linear program. Techniques from the literature on constrained MDPs (Ying et al., 2022) could also be useful for computing the policy of the adversary in a more scalable way. Furthermore, exploring different solution concepts—beyond Nash equilibria—could also be a fruitful avenue for the future. Indeed, allowing some limited form of correlation between the players in the team could lead to more efficient algorithms; whether that form of coordination is justified (arguably) depends to a large extent on the application at hand. Finally, returning to Question (⋆), a more ambitious agenda revolves around understanding the fundamental structure of games for which computing Nash equilibria is provably computationally tractable.
ACKNOWLEDGMENTS
We are grateful to the anonymous ICLR reviewers for their valuable feedback. Ioannis Anagnostides thanks Gabriele Farina and Brian H. Zhang for helpful discussions. Ioannis Panageas would like to acknowledge a start-up grant. Part of this project was done while he was a visiting research scientist at the Simons Institute for the Theory of Computing for the program “Learning and Games”. Vaggos Chatziafratis was supported by a start-up grant of UC Santa Cruz, the Foundations of Data Science Institute (FODSI) fellowship at MIT and Northeastern, and part of this work was carried out at the Simons Institute for the Theory of Computing. Emmanouil V. Vlatakis-Gkaragkounis is grateful for financial support by the Google-Simons Fellowship, Pancretan Association of America and Simons Collaboration on Algorithms and Geometry. This project was completed while he was a visiting research fellow at the Simons Institute for the Theory of Computing. Additionally, he would like to acknowledge the following series of NSF-CCF grants under the numbers 1763970/2107187/1563155/1814873.
1. What is the focus and contribution of the paper regarding zero-sum team Markov games?
2. What are the strengths of the proposed approach, particularly in computing a stationary epsilon-Nash equilibrium?
3. What are the weaknesses of the paper, especially regarding the methodology and comparison with other works?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper
The paper studies a zero-sum team Markov game. In the game, a team of agents competes with an adversary. Agents in the team have the same reward function, and the sum of the team's and the adversary's rewards is zero. The paper in particular looks at a class of potential games. The main contribution of the paper is to propose a set of algorithms to compute a stationary epsilon-Nash equilibrium of the game.
Strengths And Weaknesses
Strengths: The problem is well-motivated and the paper is written well overall. The proposed approach to computing an approximate Nash equilibrium looks non-trivial and brings in many interesting concepts. The authors also provide a good summary of the related work.
Weaknesses: Section 3.3 could have been improved to make the main idea clearer. It is unclear to me what the main advantage of the proposed methods is. In particular, why not just solve Q-NLP without the regularizer in the objective function, which gives a Nash equilibrium directly and seems much more manageable than the current formulation? The approach relies on an oracle to tackle a computational obstacle, which may be crucial. This further deepens the question of how meaningful the proposed methods are compared with solving Q-NLP without the regularizer: now that there is an oracle to use, it presumably also simplifies the problem of solving Q-NLP without the regularizer.
Clarity, Quality, Novelty And Reproducibility
Clarity could be improved in some parts of the paper. The results look novel and very technical.
ICLR
Title
Efficiently Computing Nash Equilibria in Adversarial Team Markov Games
Abstract
Computing Nash equilibrium policies is a central problem in multi-agent reinforcement learning that has received extensive attention both in theory and in practice. However, in light of computational intractability barriers in general-sum games, provable guarantees have been thus far either limited to fully competitive or cooperative scenarios, or impose strong assumptions that are difficult to meet in most practical applications. In this work, we depart from those prior results by investigating infinite-horizon adversarial team Markov games, a natural and well-motivated class of games in which a team of identically-interested players—in the absence of any explicit coordination or communication—is competing against an adversarial player. This setting allows for a unifying treatment of zero-sum Markov games and Markov potential games, and serves as a step to model more realistic strategic interactions that feature both competing and cooperative interests. Our main contribution is the first algorithm for computing stationary ε-approximate Nash equilibria in adversarial team Markov games with computational complexity that is polynomial in all the natural parameters of the game, as well as 1/ε. The proposed algorithm is based on performing independent policy gradient steps for each player in the team, in tandem with best responses from the side of the adversary; in turn, the policy for the adversary is then obtained by solving a carefully constructed linear program. Our analysis leverages non-standard techniques to establish the KKT optimality conditions for a nonlinear program with nonconvex constraints, thereby leading to a natural interpretation of the induced Lagrange multipliers.
∗Correspondence to [email protected].
1 INTRODUCTION
Multi-agent reinforcement learning (MARL) offers a principled framework for analyzing competitive interactions in dynamic and stateful environments in which agents’ actions affect both the state of the world and the rewards of the other players. Strategic reasoning in such complex multi-agent settings has been guided by game-theoretic principles, leading to many recent landmark results in benchmark domains in AI (Bowling et al., 2015; Silver et al., 2017; Vinyals et al., 2019; Moravčík et al., 2017; Brown & Sandholm, 2019; 2018; Brown et al., 2020; Perolat et al., 2022). Most of these remarkable advances rely on scalable and decentralized algorithms for computing Nash equilibria (Nash, 1951)—a standard game-theoretic notion of rationality—in two-player zero-sum games. Nevertheless, while single-agent RL has enjoyed rapid theoretical progress over the last few years (e.g., see (Jin et al., 2018; Agarwal et al., 2020; Li et al., 2021; Luo et al., 2019; Sidford et al., 2018), and references therein), a comprehensive understanding of the multi-agent landscape still remains elusive. Indeed, provable guarantees for efficiently computing Nash equilibria have been thus far limited to either fully competitive settings, such as two-player zero-sum games (Daskalakis et al., 2020; Wei et al., 2021; Sayin et al., 2021; Cen et al., 2021; Sayin et al., 2020; Condon, 1993), or environments in which agents are striving to coordinate towards a common global objective (Claus
However, many real-world applications feature both shared and competing interests between the agents. Efficient algorithms for computing Nash equilibria in such settings are much more scarce, and typically impose restrictive assumptions that are difficult to meet in most applications (Hu & Wellman, 2003; Bowling, 2000). In fact, even in stateless two-player (normal-form) games, computing approximate Nash equilibria is computationally intractable (Daskalakis et al., 2009; Rubinstein, 2017; Chen et al., 2009; Etessami & Yannakakis, 2010)—subject to well-believed complexity-theoretic assumptions. As a result, it is common to investigate equilibrium concepts that are more permissive than Nash equilibria, such as coarse correlated equilibria (CCE) (Aumann, 1974; Moulin & Vial, 1978). Unfortunately, recent work has established strong lower bounds for computing even approximate (stationary) CCEs in turn-based stochastic two-player games (Daskalakis et al., 2022; Jin et al., 2022). Those negative results raise a central question: Are there natural multi-agent environments incorporating both competing and shared interests for which we can establish efficient algorithms for computing (stationary) Nash equilibria? (⋆) Our work makes concrete progress in this fundamental direction. Specifically, we establish the first efficient algorithm leading to Nash equilibria in adversarial team Markov games, a well-motivated and natural multi-agent setting in which a team of agents with a common objective is facing a competing adversary. 1.1 OUR RESULTS Before we state our main result, let us first briefly introduce the setting of adversarial team Markov games; a more precise description is deferred to Section 2.1. To address Question (⋆), we study an infinite-horizon Markov (stochastic) game with a finite state space S in which a team of agents NA := [n] with a common objective function is competing against a single adversary with opposing interests. Every agent k ∈ [n] has a (finite) set of available actions Ak, while B represents the adversary’s set of actions. We will also let γ ∈ [0, 1) be the discounting factor. Our goal will be to compute an (approximate) Nash equilibrium; that is, a strategy profile so that no player can improve via a unilateral deviation (see Definition 2.1). In this context, our main contribution is the first polynomial time algorithm for computing Nash equilibria in adversarial team Markov games: Theorem 1.1 (Informal). There is an algorithm (IPGMAX) that, for any ϵ > 0, computes an ϵapproximate stationary Nash equilibrium in adversarial team Markov games, and runs in time poly ( |S|, n∑ k=1 |Ak|+ |B|, 1 1− γ , 1 ϵ ) . A few remarks are in order. First, our guarantee significantly extends and unifies prior results that only applied to either two-player zero-sum Markov games or to Markov potential games; both of those settings can be cast as special cases of adversarial team Markov games (see Section 2.3). Further, the complexity of our algorithm, specified in Theorem 1.1, scales only with ∑ k∈NA |Ak| instead of ∏ k∈NA |Ak|, bypassing what is often referred to as the curse of multi-agents (Jin et al., 2021). Indeed, viewing the team as a single “meta-player” would induce an action space of size∏ k∈NA |Ak|, which is exponential in n even if each agent in the team has only two actions. 
In fact, our algorithm operates without requiring any (explicit) form of coordination or communication between the members of the team (beyond the structure of the game), a feature that has been motivated in practical applications (von Stengel & Koller, 1997). Namely, scenarios in which communication or coordination between the members of the team is either overly expensive, or even infeasible; for an in-depth discussion regarding this point we refer to (Schulman & Vazirani, 2017).
1.2 OVERVIEW OF TECHNIQUES
To establish Theorem 1.1, we propose a natural and decentralized algorithm we refer to as Independent Policy GradientMax (IPGMAX). IPGMAX works in turns. First, each player in the team performs one independent policy gradient step on their value function with an appropriately selected learning rate η > 0. In turn, the adversary best responds to the current policy of the team. This exchange is repeated for a sufficiently large number of iterations T. Finally, IPGMAX includes an auxiliary subroutine, namely AdvNashPolicy(), which computes the Nash policy of the adversary; this will be justified by Proposition 1.1 we describe below.
Our analysis builds on the techniques of Lin et al. (2020)—developed for the saddle-point problem $\min_{\mathbf{x} \in \mathcal{X}} \max_{\mathbf{y} \in \mathcal{Y}} f(\mathbf{x},\mathbf{y})$—for characterizing GDMAX. Specifically, GDMAX consists of performing gradient descent steps on the function $\phi(\mathbf{x}) := \max_{\mathbf{y} \in \mathcal{Y}} f(\mathbf{x},\mathbf{y})$. Lin et al. (2020) showed that GDMAX converges to a point (x̂, y∗(x̂)) such that x̂ is an approximate first-order stationary point of the Moreau envelope (see Definition 3.1) of ϕ(x), while y∗(x̂) is a best response to x̂. Now if f(x, ·) is strongly concave, one can show (by Danskin’s theorem) that (x̂, y∗(x̂)) is an approximate first-order stationary point of f. However, our setting introduces further challenges since the value function Vρ(πteam, πadv) is nonconvex-nonconcave. For this reason, we take a more refined approach. We first show in Proposition 3.1 that IPGMAX is guaranteed to converge to a policy profile (π̂team, ·) such that π̂team is an ϵ-nearly stationary point of maxπadv Vρ(πteam, πadv). Then, the next key step and the crux of the analysis is to show that π̂team can be extended to an O(ϵ)-approximate Nash equilibrium policy:
Proposition 1.1 (Informal). If π̂team is an ϵ-nearly stationary point of maxπadv Vρ(πteam, πadv), there exists a policy for the adversary π̂adv so that (π̂team, π̂adv) is an O(ϵ)-approximate Nash equilibrium.
In the special case of normal-form games, a similar extension theorem was recently obtained by Anagnostides et al. (2023). In particular, that result was derived by employing fairly standard linear programming techniques. In contrast, our more general setting introduces several new challenges, not least due to the nonconvexity-nonconcavity of the objective function. Indeed, our analysis leverages more refined techniques stemming from nonlinear programming. More precisely, while we make use of standard policy gradient properties, similar to the single-agent MDP setting (Agarwal et al., 2021; Xiao, 2022), our analysis does not rely on the so-called gradient-dominance property (Bhandari & Russo, 2019), as that property does not hold in a team-wise sense. Instead, inspired by an alternative proof of Shapley’s theorem (Shapley, 1953) for two-person zero-sum Markov games (Filar & Vrieze, 2012, Chapter 3), we employ mathematical programming. One of the central challenges is that the induced nonlinear program has a set of nonconvex constraints.
As such, even the existence of (nonnegative) Lagrange multipliers satisfying the KKT conditions is not guaranteed, thereby necessitating more refined analysis techniques. To this end, we employ the Arrow-Hurwicz-Uzawa constraint qualification (Theorem A.1) in order to establish that the local optima are contained in the set of KKT points (Corollary B.1). Then, we leverage the structure of adversarial team Markov games to characterize the induced Lagrange multipliers, showing that a subset of these can be used to establish Proposition 1.1; incidentally, this also leads to an efficient algorithm for computing a (near-)optimal policy of the adversary. Finally, we also remark that controlling the approximation error—an inherent barrier under policy gradient methods—in Proposition 1.1 turns out to be challenging. We bypass this issue by constructing “relaxed” programs that incorporate some imprecision in the constraints. A more detailed overview of our algorithm and the analysis is given in Section 3.
2 PRELIMINARIES
In this section, we introduce the relevant background and our notation. Section 2.1 describes adversarial team Markov games. Section 2.2 then defines some key concepts from multi-agent MDPs, while Section 2.3 describes a generalization of adversarial team Markov games, beyond identically-interested team players, allowing for a richer structure in the utilities of the team—namely, adversarial Markov potential games.
Notation. We let [n] := {1, . . . , n}. We use superscripts to denote the (discrete) time index, and subscripts to index the players. We use boldface for vectors and matrices; scalars will be denoted by lightface variables. We denote by ∥·∥ := ∥·∥₂ the Euclidean norm. For simplicity in the exposition, we may sometimes use the O(·) notation to suppress dependencies that are polynomial in the natural parameters of the game; precise statements are given in the Appendix. For the convenience of the reader, a comprehensive overview of our notation is given in Appendix A.3.
2.1 ADVERSARIAL TEAM MARKOV GAMES
An adversarial team Markov game (or an adversarial team stochastic game) is the Markov game extension of static, normal-form adversarial team games (Von Stengel & Koller, 1997). The game is assumed to take place in an infinite-horizon discounted setting in which a team of identically-interested agents gain what the adversary loses. Formally, the game G is represented by a tuple G = (S, N, A, B, r, P, γ, ρ) whose components are defined as follows.
• S is a finite and nonempty set of states, with cardinality S := |S|;
• N is the set of players, partitioned into a set of n team agents NA := [n] and a single adversary;
• Ak is the action space of each player in the team k ∈ [n], so that A := ×_{k∈[n]} Ak, while B is the action space of the adversary. We also let Ak := |Ak| and B := |B|;¹
• r : S × A × B → (0, 1) is the (deterministic) instantaneous reward function² representing the (normalized) payoff of the adversary, so that for any (s, a, b) ∈ S × A × B,
$$r(s,\mathbf{a},b) + \sum_{k=1}^{n} r_k(s,\mathbf{a},b) = 0, \tag{1}$$
and for any k ∈ [n],
$$r_k(s,\mathbf{a},b) = r_{\mathrm{team}}(s,\mathbf{a},b). \tag{2}$$
• P : S × A × B → ∆(S) is the transition probability function, so that P(s′|s, a, b) denotes the probability of transitioning to state s′ ∈ S when the current state is s ∈ S under the action profile (a, b) ∈ A × B;
• γ ∈ [0, 1) is the discount factor; and
• ρ ∈ ∆(S) is the initial state distribution over the state space. We will assume that ρ is full-support, meaning that ρ(s) > 0 for all s ∈ S.
¹To ease the notation, and without any essential loss of generality, we will assume throughout that the action space does not depend on the state.
²Assuming that the reward is positive is without any loss of generality (see Claim D.6).
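For concreteness, a tabular instance of the tuple G = (S, N, A, B, r, P, γ, ρ) can be represented as in the sketch below. This is a minimal, hypothetical layout (none of the names are taken from the paper); the only structural properties it encodes are the ones stated above: team rewards identical, rewards globally zero-sum, and full-support ρ.

```python
# A minimal sketch of an adversarial team Markov game instance, assuming a tabular
# representation; names and layout are illustrative, not taken from the paper.
import numpy as np
from dataclasses import dataclass

@dataclass
class AdversarialTeamMarkovGame:
    S: int                      # number of states
    team_actions: list          # |A_k| for each team member k
    B: int                      # number of adversary actions
    gamma: float                # discount factor in [0, 1)
    rho: np.ndarray             # full-support initial state distribution
    r_adv: np.ndarray           # adversary reward, shape (S, *team_actions, B), in (0, 1)
    P: np.ndarray               # transitions, shape (S, *team_actions, B, S)

    def team_reward(self, s, joint_a, b):
        # Every team member receives the same reward, and the game is zero-sum:
        # r_adv + n * r_team = 0  =>  r_team = -r_adv / n.
        return -self.r_adv[(s, *joint_a, b)] / len(self.team_actions)

def random_game(S=3, team_actions=(2, 2), B=2, gamma=0.9, seed=0):
    rng = np.random.default_rng(seed)
    shape = (S, *team_actions, B)
    r_adv = rng.uniform(0.01, 0.99, size=shape)          # rewards in (0, 1)
    P = rng.dirichlet(np.ones(S), size=shape)            # valid transition kernels
    rho = np.full(S, 1.0 / S)                            # full support
    return AdversarialTeamMarkovGame(S, list(team_actions), B, gamma, rho, r_adv, P)

game = random_game()
print(game.team_reward(0, (1, 0), 1))
```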
In other words, an adversarial team Markov game is a subclass of general-sum infinite-horizon multi-agent discounted MDPs under the restriction that all but a single (adversarial) player have identical interests (see (2)), and the game is globally zero-sum—in the sense of (1). As we point out in Section 2.3, (2) can be relaxed in order to capture (adversarial) Markov potential games (Definition 2.2), without qualitatively altering our results.
2.2 POLICIES, VALUE FUNCTION, AND NASH EQUILIBRIA
Policies. A stationary—that is, time-invariant—policy πk for an agent k is a function mapping a given state to a distribution over available actions, πk : S ∋ s 7→ πk(·|s) ∈ ∆(Ak). We will say that πk is deterministic if for every state there is some action that is selected with probability 1 under policy πk. For convenience, we will let Πteam : S → ∆(A) and Πadv : S → ∆(B) denote the policy space for the team and the adversary, respectively. We may also write Π : S → ∆(A) × ∆(B) to denote the joint policy space of all agents.
Direct Parametrization. Throughout this paper we will assume that players employ direct policy parametrization. That is, for each player k ∈ [n], we let Xk := ∆(Ak)^S and πk = xk so that x_{k,s,a} = πk(a|s). Similarly, for the adversary, we let Y := ∆(B)^S and πadv = y so that y_{s,a} = πadv(a|s). (Extending our results to other policy parameterizations, such as soft-max (Agarwal et al., 2021), is left for future work.)
Value Function. The value function Vs : Π ∋ (π1, . . . , πn, πadv) 7→ R is defined as the expected cumulative discounted reward received by the adversary under the joint policy (πteam, πadv) ∈ Π and the initial state s ∈ S, where πteam := (π1, . . . , πn). In symbols,
$$V_s(\pi_{\mathrm{team}}, \pi_{\mathrm{adv}}) := \mathbb{E}_{(\pi_{\mathrm{team}}, \pi_{\mathrm{adv}})}\left[ \sum_{t=0}^{\infty} \gamma^t\, r\big(s^{(t)}, \mathbf{a}^{(t)}, b^{(t)}\big) \;\Big|\; s^{(0)} = s \right], \tag{3}$$
where the expectation is taken over the trajectory distribution induced by πteam and πadv. When the initial state is drawn from a distribution ρ, the value function takes the form Vρ(πteam, πadv) := E_{s∼ρ}[Vs(πteam, πadv)].
Nash Equilibrium. Our main goal is to compute a joint policy profile that is an (approximate) Nash equilibrium, a standard equilibrium concept in game theory formalized below.
Definition 2.1 (Nash equilibrium). A joint policy profile (π⋆team, π⋆adv) ∈ Π is an ϵ-approximate Nash equilibrium, for ϵ ≥ 0, if
$$\begin{cases} V_\rho(\pi^\star_{\mathrm{team}}, \pi^\star_{\mathrm{adv}}) \le V_\rho\big((\pi'_k, \pi^\star_{-k}), \pi^\star_{\mathrm{adv}}\big) + \epsilon, & \forall k \in [n],\ \forall \pi'_k \in \Pi_k,\\ V_\rho(\pi^\star_{\mathrm{team}}, \pi^\star_{\mathrm{adv}}) \ge V_\rho(\pi^\star_{\mathrm{team}}, \pi'_{\mathrm{adv}}) - \epsilon, & \forall \pi'_{\mathrm{adv}} \in \Pi_{\mathrm{adv}}. \end{cases}$$
That is, a joint policy profile is an (approximate) Nash equilibrium if no unilateral deviation from a player can result in a non-negligible—more than additive ϵ—improvement for that player. Nash equilibria always exist in multi-agent stochastic games (Fink, 1964); our main result implies an (efficient) constructive proof of that fact for the special case of adversarial team Markov games.
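As a sanity check on the value function in (3), the value of a fixed stationary joint policy can be computed exactly in the tabular case by solving a linear system. The sketch below does so for the adversary's value, reusing the hypothetical game container from the earlier sketch; it is illustrative only and not part of the paper's method.

```python
# Illustrative evaluation of V_s(pi_team, pi_adv) via (I - gamma * P_pi) v = r_pi,
# assuming the AdversarialTeamMarkovGame / random_game() helpers sketched earlier.
import numpy as np
from itertools import product

def evaluate_value(game, team_policies, adv_policy):
    """team_policies[k][s, a] = pi_k(a|s); adv_policy[s, b] = pi_adv(b|s).
    Returns the vector (V_s)_{s in S} of the adversary's discounted value."""
    S = game.S
    r_pi = np.zeros(S)
    P_pi = np.zeros((S, S))
    for s in range(S):
        for joint_a in product(*[range(A) for A in game.team_actions]):
            for b in range(game.B):
                # probability of (joint_a, b) at state s under the product policy
                p = adv_policy[s, b]
                for k, a in enumerate(joint_a):
                    p *= team_policies[k][s, a]
                r_pi[s] += p * game.r_adv[(s, *joint_a, b)]
                P_pi[s] += p * game.P[(s, *joint_a, b)]
    v = np.linalg.solve(np.eye(S) - game.gamma * P_pi, r_pi)
    return v  # V_rho is then rho @ v

# Uniform policies on the random instance from the previous sketch (`game`).
team_policies = [np.full((game.S, A), 1.0 / A) for A in game.team_actions]
adv_policy = np.full((game.S, game.B), 1.0 / game.B)
v = evaluate_value(game, team_policies, adv_policy)
print(game.rho @ v)
```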
2.3 ADVERSARIAL MARKOV POTENTIAL GAMES
A recent line of work has extended the fundamental class of potential normal-form games (Monderer & Shapley, 1996) to Markov potential games (Marden, 2012; Macua et al., 2018; Leonardos et al., 2021; Ding et al., 2022; Zhang et al., 2021b; Chen et al., 2022; Maheshwari et al., 2022; Fox et al., 2022). Importantly, our results readily carry over even if players in the team are not necessarily identically interested, but instead, there is some underlying potential function for the team; we will refer to such games as adversarial Markov potential games, formally introduced below.
Definition 2.2. An adversarial Markov potential game G = (S, N, A, B, {rk}k∈[n], P, γ, ρ) is a multi-agent discounted MDP that shares all the properties of adversarial team Markov games (Section 2.1), with the exception that (2) is relaxed in that there exists a potential function Φs, ∀s ∈ S, such that for any πadv ∈ Πadv,
$$\Phi_s(\pi_k, \pi_{-k}; \pi_{\mathrm{adv}}) - \Phi_s(\pi'_k, \pi_{-k}; \pi_{\mathrm{adv}}) = V_{k,s}(\pi_k, \pi_{-k}; \pi_{\mathrm{adv}}) - V_{k,s}(\pi'_k, \pi_{-k}; \pi_{\mathrm{adv}}),$$
for every agent k ∈ [n], every state s ∈ S, and all policies πk, π′k ∈ Πk and π−k ∈ Π−k.
3 MAIN RESULT
In this section, we sketch the main pieces required in the proof of our main result, Theorem 1.1. We begin by describing our algorithm in Section 3.1. Next, in Section 3.2, we characterize the strategy x̂ ∈ X for the team returned by IPGMAX, while Section 3.3 completes the proof by establishing that x̂ can be efficiently extended to an approximate Nash equilibrium. The formal proof of Theorem 1.1 is deferred to the Appendix.
3.1 OUR ALGORITHM
In this subsection, we describe in detail our algorithm for computing ϵ-approximate Nash equilibria, IPGMAX, in adversarial team Markov games (Algorithm 1). IPGMAX takes as input a precision parameter ϵ > 0 (Line 1) and an initial strategy for the team $(\mathbf{x}^{(0)}_1, \ldots, \mathbf{x}^{(0)}_n) = \mathbf{x}^{(0)} \in \mathcal{X} := \bigtimes_{k=1}^{n} \mathcal{X}_k$ (Line 2). The algorithm then proceeds in two phases:
• In the first phase the team players are performing independent policy gradient steps (Line 7) with learning rate η, as defined in Line 3, while the adversary is then best responding to their joint strategy (Line 6). Both of these steps can be performed in polynomial time under oracle access to the game (see Remark 2). This process is repeated for T iterations, with T as defined in Line 4. We note that Proj(·) in Line 7 stands for the Euclidean projection, ensuring that each player selects a valid strategy. The first phase is completed in Line 9, where we set x̂ according to the iterate at time t⋆, for some 0 ≤ t⋆ ≤ T − 1. As we explain in Section 3.2, selecting t⋆ uniformly at random is a practical and theoretically sound way of setting it.
• In the second phase we are fixing the strategy of the team x̂ ∈ X, and the main goal is to determine a strategy ŷ ∈ Y so that (x̂, ŷ) is an O(ϵ)-approximate Nash equilibrium. This is accomplished in the subroutine AdvNashPolicy(x̂), which consists of solving a linear program—from the perspective of the adversary—that has polynomial size.
Our analysis of the second phase of IPGMAX can be found in Section 3.3. It is worth stressing that under gradient feedback, IPGMAX requires no communication or coordination between the players in the team.
Algorithm 1 Independent Policy GradientMax (IPGMAX)
1: Precision ϵ > 0
2: Initial Strategy x(0) ∈ X
3: Learning rate $\eta := \frac{\epsilon^2 (1-\gamma)^9}{32\, S^4 D^2 \left(\sum_{k=1}^{n} A_k + B\right)^3}$
4: Number of iterations $T := \frac{512\, S^8 D^4 \left(\sum_{k=1}^{n} A_k + B\right)^4}{\epsilon^4 (1-\gamma)^{12}}$
5: for t ← 1, 2, . . . , T do
6:     y(t) ← argmax_{y∈Y} Vρ(x(t−1), y)
7:     x(t)_k ← Proj_{X_k}(x(t−1)_k − η ∇_{x_k} Vρ(x(t−1), y(t)))    ▷ for all agents k ∈ [n]
8: end for
9: x̂ ← x(t⋆)
10: ŷ ← AdvNashPolicy(x̂)    ▷ defined in Algorithm 2
11: return (x̂, ŷ)
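To complement the pseudocode, the following is a minimal sketch of the first phase of IPGMAX in the tabular, direct-parametrization setting. The best-response and gradient oracles are placeholders (in practice they would be instantiated as discussed in Remark 2), the step size and iteration count should be taken from Lines 3-4 of Algorithm 1, and the second phase, AdvNashPolicy, is not reproduced here.

```python
# A minimal sketch of IPGMAX's first phase: independent projected policy-gradient
# steps for the team against a best-responding adversary.  The oracles
# `best_response` and `policy_gradient` are assumed to be given; they are
# placeholders, not part of the paper's implementation.
import numpy as np

def project_simplex(z):
    """Euclidean projection of each row of z onto the probability simplex."""
    u = np.sort(z, axis=-1)[..., ::-1]
    css = np.cumsum(u, axis=-1) - 1.0
    idx = np.arange(1, z.shape[-1] + 1)
    cond = u - css / idx > 0
    rho = cond.sum(axis=-1, keepdims=True)
    theta = np.take_along_axis(css, rho - 1, axis=-1) / rho
    return np.maximum(z - theta, 0.0)

def ipgmax_phase_one(x0, best_response, policy_gradient, eta, T, rng):
    """x0: list of per-agent policies, x0[k] has shape (S, A_k)."""
    x = [xk.copy() for xk in x0]
    iterates = []
    for _ in range(T):
        y = best_response(x)                      # adversary best responds (Line 6)
        grads = policy_gradient(x, y)             # per-agent gradients of V_rho
        # independent projected gradient step for every team member (Line 7)
        x = [project_simplex(x[k] - eta * grads[k]) for k in range(len(x))]
        iterates.append([xk.copy() for xk in x])
    # Proposition 3.1 is a best-iterate guarantee; sampling an iterate uniformly at
    # random (a few times) recovers a near-stationary point w.h.p. (Corollary C.1).
    t_star = rng.integers(len(iterates))
    return iterates[t_star]
```

Under oracle access to exact gradients and best responses, this mirrors Lines 5-9 of Algorithm 1; the returned team strategy would then be handed to the (omitted) second phase.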
1. What is the focus of the paper regarding multi-agent Markov games?
2. What are the strengths of the proposed algorithm, particularly in terms of scaling and technique?
3. What are the limitations of the approach, especially regarding its applicability and practicality?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper This paper considers multi-agent Markov games; specifically, the problem of computing a Nash equilibrium. Even in normal-form games, it is computationally intractable to compute a Nash equilibrium in two-player general-sum games, much less in games with more players. In this paper, the game is restricted to coalition of players with a common adversary, so-called "adversarial team Markov games". In addition to the case when the coalition shares a common objective, the results extend to the case in which the objectives of the coalition share a common potential function (Markov potential games). Strengths And Weaknesses Strengths The paper introduces the first poly-time algorithm to compute a Nash equilibrium in the setting of adversarial team Markov games. Computing a Nash equilibrium is an important practical problem, but unfortunately the general-case is difficult. Therefore, it is useful and relevant to consider special cases such as the coalition considered here. The algorithm has good scaling with the action sets of each agent; that is, it scales with their sum rather than the product. The techniques used to obtain this result are well described at an overview level and compared to the most relevant related work which is to compute a Nash equilibrium in the analogous normal-form setting. The challenges in generalizing the normal-form setting to Markov game include: nonlinear program with a set of nonconvex constraints, which requires the Arrow-Hurwiz-Uzawa constraint qualification technique. Weaknesses The type of game considered is very special. It is only a modest generalization of two-player zero-sum games. More practical would be algorithms that have exponential worst-case but still run on practical examples of n-player Markov games. Still, any generalizations from fully cooperative or competitive two-player games are welcome. The practical scalability of the algorithm is not evaluated. Although it is polynomial, the number of iterations looks very large from inspection of the pseudocode. Still, this is a theoretical work, so this isn't too significant of a weakness. Clarity, Quality, Novelty And Reproducibility The paper is written very clearly and is well organized. The novelty is put into context well and is clear.
Title Efficiently Computing Nash Equilibria in Adversarial Team Markov Games Abstract Computing Nash equilibrium policies is a central problem in multi-agent reinforcement learning that has received extensive attention both in theory and in practice. However, in light of computational intractability barriers in general-sum games, provable guarantees have been thus far either limited to fully competitive or cooperative scenarios, or impose strong assumptions that are difficult to meet in most practical applications. In this work, we depart from those prior results by investigating infinite-horizon adversarial team Markov games, a natural and well-motivated class of games in which a team of identically-interested players— in the absence of any explicit coordination or communication—is competing against an adversarial player. This setting allows for a unifying treatment of zero-sum Markov games and Markov potential games, and serves as a step to model more realistic strategic interactions that feature both competing and cooperative interests. Our main contribution is the first algorithm for computing stationary ε-approximate Nash equilibria in adversarial team Markov games with computational complexity that is polynomial in all the natural parameters of the game, as well as 1/ε. The proposed algorithm is based on performing independent policy gradient steps for each player in the team, in tandem with best responses from the side of the adversary; in turn, the policy for the adversary is then obtained by solving a carefully constructed linear program. Our analysis leverages non-standard techniques to establish the KKT optimality conditions for a nonlinear program with nonconvex constraints, thereby leading to a natural interpretation of the induced Lagrange multipliers. 1 INTRODUCTION Multi-agent reinforcement learning (MARL) offers a principled framework for analyzing competitive interactions in dynamic and stateful environments in which agents’ actions affect both the state of the world and the rewards of the other players. Strategic reasoning in such complex multi-agent settings has been guided by game-theoretic principles, leading to many recent landmark results in benchmark domains in AI (Bowling et al., 2015; Silver et al., 2017; Vinyals et al., 2019; Moravčı́k et al., 2017; Brown & Sandholm, 2019; 2018; Brown et al., 2020; Perolat et al., 2022). Most of these remarkable advances rely on scalable and decentralized algorithms for computing Nash equilibria (Nash, 1951)—a standard game-theoretic notion of rationality—in two-player zero-sum games. Nevertheless, while single-agent RL has enjoyed rapid theoretical progress over the last few years (e.g., see (Jin et al., 2018; Agarwal et al., 2020; Li et al., 2021; Luo et al., 2019; Sidford et al., 2018), and references therein), a comprehensive understanding of the multi-agent landscape still remains elusive. Indeed, provable guarantees for efficiently computing Nash equilibria have been thus far limited to either fully competitive settings, such as two-player zero-sum games (Daskalakis et al., 2020; Wei et al., 2021; Sayin et al., 2021; Cen et al., 2021; Sayin et al., 2020; Condon, 1993), or environments in which agents are striving to coordinate towards a common global objective (Claus ∗Correspondence to [email protected]. & Boutilier, 1998; Wang & Sandholm, 2002; Leonardos et al., 2021; Ding et al., 2022; Zhang et al., 2021b; Chen et al., 2022; Maheshwari et al., 2022; Fox et al., 2022). 
However, many real-world applications feature both shared and competing interests between the agents. Efficient algorithms for computing Nash equilibria in such settings are much more scarce, and typically impose restrictive assumptions that are difficult to meet in most applications (Hu & Wellman, 2003; Bowling, 2000). In fact, even in stateless two-player (normal-form) games, computing approximate Nash equilibria is computationally intractable (Daskalakis et al., 2009; Rubinstein, 2017; Chen et al., 2009; Etessami & Yannakakis, 2010)—subject to well-believed complexity-theoretic assumptions. As a result, it is common to investigate equilibrium concepts that are more permissive than Nash equilibria, such as coarse correlated equilibria (CCE) (Aumann, 1974; Moulin & Vial, 1978). Unfortunately, recent work has established strong lower bounds for computing even approximate (stationary) CCEs in turn-based stochastic two-player games (Daskalakis et al., 2022; Jin et al., 2022). Those negative results raise a central question: Are there natural multi-agent environments incorporating both competing and shared interests for which we can establish efficient algorithms for computing (stationary) Nash equilibria? (⋆) Our work makes concrete progress in this fundamental direction. Specifically, we establish the first efficient algorithm leading to Nash equilibria in adversarial team Markov games, a well-motivated and natural multi-agent setting in which a team of agents with a common objective is facing a competing adversary. 1.1 OUR RESULTS Before we state our main result, let us first briefly introduce the setting of adversarial team Markov games; a more precise description is deferred to Section 2.1. To address Question (⋆), we study an infinite-horizon Markov (stochastic) game with a finite state space S in which a team of agents NA := [n] with a common objective function is competing against a single adversary with opposing interests. Every agent k ∈ [n] has a (finite) set of available actions Ak, while B represents the adversary’s set of actions. We will also let γ ∈ [0, 1) be the discounting factor. Our goal will be to compute an (approximate) Nash equilibrium; that is, a strategy profile so that no player can improve via a unilateral deviation (see Definition 2.1). In this context, our main contribution is the first polynomial time algorithm for computing Nash equilibria in adversarial team Markov games: Theorem 1.1 (Informal). There is an algorithm (IPGMAX) that, for any ϵ > 0, computes an ϵapproximate stationary Nash equilibrium in adversarial team Markov games, and runs in time poly ( |S|, n∑ k=1 |Ak|+ |B|, 1 1− γ , 1 ϵ ) . A few remarks are in order. First, our guarantee significantly extends and unifies prior results that only applied to either two-player zero-sum Markov games or to Markov potential games; both of those settings can be cast as special cases of adversarial team Markov games (see Section 2.3). Further, the complexity of our algorithm, specified in Theorem 1.1, scales only with ∑ k∈NA |Ak| instead of ∏ k∈NA |Ak|, bypassing what is often referred to as the curse of multi-agents (Jin et al., 2021). Indeed, viewing the team as a single “meta-player” would induce an action space of size∏ k∈NA |Ak|, which is exponential in n even if each agent in the team has only two actions. 
In fact, our algorithm operates without requiring any (explicit) form of coordination or communication between the members of the team (beyond the structure of the game), a feature that has been motivated in practical applications (von Stengel & Koller, 1997). Namely, it applies to scenarios in which communication or coordination between the members of the team is either overly expensive or even infeasible; for an in-depth discussion of this point we refer to (Schulman & Vazirani, 2017). 1.2 OVERVIEW OF TECHNIQUES To establish Theorem 1.1, we propose a natural and decentralized algorithm we refer to as Independent Policy GradientMax (IPGMAX). IPGMAX works in turns. First, each player in the team performs one independent policy gradient step on their value function with an appropriately selected learning rate η > 0. In turn, the adversary best responds to the current policy of the team. This exchange is repeated for a sufficiently large number of iterations T. Finally, IPGMAX includes an auxiliary subroutine, namely AdvNashPolicy(), which computes the Nash policy of the adversary; this will be justified by Proposition 1.1, which we describe below. Our analysis builds on the techniques of Lin et al. (2020)—developed for the saddle-point problem min_{x∈X} max_{y∈Y} f(x, y)—for characterizing GDMAX. Specifically, GDMAX consists of performing gradient descent steps on the function ϕ(x) := max_{y∈Y} f(x, y). Lin et al. (2020) showed that GDMAX converges to a point (x̂, y∗(x̂)) such that x̂ is an approximate first-order stationary point of the Moreau envelope (see Definition 3.1) of ϕ(x), while y∗(x̂) is a best response to x̂. Now if f(x, ·) is strongly-concave, one can show (by Danskin’s theorem) that (x̂, y∗(x̂)) is an approximate first-order stationary point of f. However, our setting introduces further challenges since the value function Vρ(πteam, πadv) is nonconvex-nonconcave. For this reason, we take a more refined approach. We first show in Proposition 3.1 that IPGMAX is guaranteed to converge to a policy profile (π̂team, ·) such that π̂team is an ϵ-nearly stationary point of max_{πadv} Vρ(πteam, πadv). Then, the next key step and the crux of the analysis is to show that π̂team can be extended to an O(ϵ)-approximate Nash equilibrium policy: Proposition 1.1 (Informal). If π̂team is an ϵ-nearly stationary point of max_{πadv} Vρ(πteam, πadv), there exists a policy for the adversary π̂adv so that (π̂team, π̂adv) is an O(ϵ)-approximate Nash equilibrium. In the special case of normal-form games, a similar extension theorem was recently obtained by Anagnostides et al. (2023). In particular, that result was derived by employing fairly standard linear programming techniques. In contrast, our more general setting introduces several new challenges, not least due to the nonconvexity-nonconcavity of the objective function. Indeed, our analysis leverages more refined techniques stemming from nonlinear programming. More precisely, while we make use of standard policy gradient properties, similar to the single-agent MDP setting (Agarwal et al., 2021; Xiao, 2022), our analysis does not rely on the so-called gradient-dominance property (Bhandari & Russo, 2019), as that property does not hold in a team-wise sense. Instead, inspired by an alternative proof of Shapley’s theorem (Shapley, 1953) for two-person zero-sum Markov games (Filar & Vrieze, 2012, Chapter 3), we employ mathematical programming. One of the central challenges is that the induced nonlinear program has a set of nonconvex constraints. 
As such, even the existence of (nonnegative) Lagrange multipliers satisfying the KKT conditions is not guaranteed, thereby necessitating more refined analysis techniques. To this end, we employ the Arrow-Hurwiz-Uzawa constraint qualification (Theorem A.1) in order to establish that the local optima are contained in the set of KKT points (Corollary B.1). Then, we leverage the structure of adversarial team Markov games to characterize the induced Lagrange multipliers, showing that a subset of these can be used to establish Proposition 1.1; incidentally, this also leads to an efficient algorithm for computing a (near-)optimal policy of the adversary. Finally, we also remark that controlling the approximation error—an inherent barrier under policy gradient methods—in Proposition 1.1 turns out to be challenging. We bypass this issue by constructing “relaxed” programs that incorporate some imprecision in the constraints. A more detailed overview of our algorithm and the analysis is given in Section 3. 2 PRELIMINARIES In this section, we introduce the relevant background and our notation. Section 2.1 describes adversarial team Markov games. Section 2.2 then defines some key concepts from multi-agent MDPs, while Section 2.3 describes a generalization of adversarial team Markov games, beyond identicallyinterested team players, allowing for a richer structure in the utilities of the team—namely, adversarial Markov potential games. Notation. We let [n] := {1, . . . , n}. We use superscripts to denote the (discrete) time index, and subscripts to index the players. We use boldface for vectors and matrices; scalars will be denoted by lightface variables. We denote by ∥ ·∥ := ∥ ·∥2 the Euclidean norm. For simplicity in the exposition, we may sometimes use theO(·) notation to suppress dependencies that are polynomial in the natural parameters of the game; precise statements are given in the Appendix. For the convenience of the reader, a comprehensive overview of our notation is given in A.3. 2.1 ADVERSARIAL TEAM MARKOV GAMES An adversarial team Markov game (or an adversarial team stochastic game) is the Markov game extension of static, normal-form adversarial team games (Von Stengel & Koller, 1997). The game is assumed to take place in an infinite-horizon discounted setting in which a team of identicallyinterested agents gain what the adversary loses. Formally, the game G is represented by a tuple G = (S,N ,A,B, r,P, γ, ρ) whose components are defined as follows. • S is a finite and nonempty set of states, with cardinality S := |S|; • N is the set of players, partitioned into a set of n team agentsNA := [n] and a single adversary • Ak is the action space of each player in the team k ∈ [n], so that A :=×k∈[n]Ak, while B is the action space of the adversary. We also let Ak := |Ak| and B := |B|;1 • r : S ×A×B → (0, 1) is the (deterministic) instantaneous reward function2 representing the (normalized) payoff of the adversary, so that for any (s,a, b) ∈ S ×A× B, r(s,a, b) + n∑ k=1 rk(s,a, b) = 0, (1) and for any k ∈ [n], rk(s,a, b) = rteam(s,a, b). (2) • P : S ×A×B → ∆(S) is the transition probability function, so that P(s′|s,a, b) denotes the probability of transitioning to state s′ ∈ S when the current state is s ∈ S under the action profile (a, b) ∈ A× B; • γ ∈ [0, 1) is the discount factor; and • ρ ∈ ∆(S) is the initial state distribution over the state space. We will assume that ρ is full- support, meaning that ρ(s) > 0 for all s ∈ S. 
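To fix ideas, here is a minimal sketch that instantiates a small random game with the components of G = (S, N, A, B, r, P, γ, ρ) and checks the zero-sum condition (1) together with the basic properties of the transition kernel and the full-support initial distribution. All names, sizes, and the NumPy representation are our own illustrative choices, not the paper's code.

```python
# Minimal sketch of an adversarial team Markov game G = (S, N, A, B, r, P, gamma, rho),
# with joint team actions enumerated explicitly purely for illustration.
import itertools
import numpy as np

n_states, team_action_sizes, n_adv_actions, gamma = 3, [2, 2], 2, 0.99
joint_team_actions = list(itertools.product(*[range(m) for m in team_action_sizes]))

rng = np.random.default_rng(0)
# Adversary reward r(s, a, b) in (0, 1); by (1) and (2) every team member receives -r / n.
r_adv = rng.uniform(0.1, 0.9, size=(n_states, len(joint_team_actions), n_adv_actions))
r_team_member = -r_adv / len(team_action_sizes)

# Transition kernel P(s' | s, a, b): each conditional distribution sums to one.
P = rng.random(size=(n_states, len(joint_team_actions), n_adv_actions, n_states))
P /= P.sum(axis=-1, keepdims=True)

rho = np.full(n_states, 1.0 / n_states)   # full-support initial state distribution

# Sanity checks: globally zero-sum as in (1), valid kernel, full-support rho.
assert np.allclose(r_adv + len(team_action_sizes) * r_team_member, 0.0)
assert np.allclose(P.sum(axis=-1), 1.0) and np.all(rho > 0)
print("built a game with", n_states, "states and", len(joint_team_actions), "joint team actions")
```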
In other words, an adversarial team Markov game is a subclass of general-sum infinite-horizon multi-agent discounted MDPs under the restriction that all but a single (adversarial) player have identical interests (see (2)), and the game is globally zero-sum—in the sense of (1). As we point out in Section 2.3, (2) can be relaxed in order to capture (adversarial) Markov potential games (Definition 2.2), without qualitatively altering our results. 2.2 POLICIES, VALUE FUNCTION, AND NASH EQUILIBRIA Policies. A stationary—that is, time-invariant—policy πk for an agent k is a function mapping a given state to a distribution over available actions, πk : S ∋ s 7→ πk(·|s) ∈ ∆(Ak). We will say that πk is deterministic if for every state there is some action that is selected with probability 1 under policy πk. For convenience, we will let Πteam : S → ∆(A) and Πadv : S → ∆(B) denote the policy space for the team and the adversary respectively. We may also write Π : S → ∆(A)×∆(B) to denote the joint policy space of all agents. Direct Parametrization. Throughout this paper we will assume that players employ direct policy parametrization. That is, for each player k ∈ [n], we let Xk := ∆(Ak)S and πk = xk so that xk,s,a = πk(a|s). Similarly, for the adversary, we let Y := ∆(B)S and πadv = y so that ys,a = πadv(a|s). (Extending our results to other policy parameterizations, such as soft-max (Agarwal et al., 2021), is left for future work.) Value Function. The value function Vs : Π ∋ (π1, . . . ,πn,πadv) 7→ R is defined as the expected cumulative discounted reward received by the adversary under the joint policy (πteam,πadv) ∈ Π and the initial state s ∈ S, where πteam := (π1, . . . ,πn). In symbols, Vs(πteam,πadv) := E(πteam,πadv) [ ∞∑ t=0 γtr(s(t),a(t), b(t)) ∣∣s0 = s] , (3) 1To ease the notation, and without any essential loss of generality, we will assume throughout that the action space does not depend on the state. 2Assuming that the reward is positive is without any loss of generality (see Claim D.6). where the expectation is taken over the trajectory distribution induced by πteam and πadv. When the initial state is drawn from a distribution ρ, the value function takes the form Vρ(πteam,πadv) := Es∼ρ [ Vs(πteam,πadv) ] . Nash Equilibrium. Our main goal is to compute a joint policy profile that is an (approximate) Nash equilibrium, a standard equilibrium concept in game theory formalized below. Definition 2.1 (Nash equilibrium). A joint policy profile ( π⋆team,π ⋆ adv ) ∈ Π is an ε-approximate Nash equilibrium, for ϵ ≥ 0, if{ Vρ(π ⋆ team,π ⋆ adv) ≤ Vρ((π′k,π⋆−k),π⋆adv ) + ε, ∀k ∈ [n],∀π′k ∈ Πk, Vρ(π ⋆ team,π ⋆ adv) ≥ Vρ(π⋆team,π′adv)− ε, ∀π′adv ∈ Πadv. That is, a joint policy profile is an (approximate) Nash equilibrium if no unilateral deviation from a player can result in a non-negligible—more than additive ϵ—improvement for that player. Nash equilibria always exist in multi-agent stochastic games (Fink, 1964); our main result implies an (efficient) constructive proof of that fact for the special case of adversarial team Markov games. 2.3 ADVERSARIAL MARKOV POTENTIAL GAMES A recent line of work has extended the fundamental class of potential normal-form games (Monderer & Shapley, 1996) to Markov potential games (Marden, 2012; Macua et al., 2018; Leonardos et al., 2021; Ding et al., 2022; Zhang et al., 2021b; Chen et al., 2022; Maheshwari et al., 2022; Fox et al., 2022). 
Importantly, our results readily carry over even if players in the team are not necessarily identically interested, but instead, there is some underlying potential function for the team; we will refer to such games as adversarial Markov potential games, formally introduced below. Definition 2.2. An adversarial Markov potential game G = (S,N ,A,B, {rk}k∈[n],P, γ, ρ) is a multi-agent discounted MDP that shares all the properties of adversarial team Markov games (Section 2.1), with the exception that (2) is relaxed in that there exists a potential function Φs, ∀s ∈ S , such that for any πadv ∈ Πadv, Φs(πk,π−k;πadv)− Φs(π′k,π−k;πadv) = Vk,s(πk,π−k;πadv)− Vk,s(π′k,π−k;πadv), for every agent k ∈ [n], every state s ∈ S, and all policies πk,πk′ ∈ Πk and π−k ∈ Π−k. 3 MAIN RESULT In this section, we sketch the main pieces required in the proof of our main result, Theorem 1.1. We begin by describing our algorithm in Section 3.1. Next, in Section 3.2, we characterize the strategy x̂ ∈ X for the team returned by IPGMAX, while Section 3.3 completes the proof by establishing that x̂ can be efficiently extended to an approximate Nash equilibrium. The formal proof of Theorem 1.1 is deferred to the Appendix. 3.1 OUR ALGORITHM In this subsection, we describe in detail our algorithm for computing ϵ-approximate Nash equilibria, IPGMAX, in adversarial team Markov games (Algorithm 1). IPGMAX takes as input a precision parameter ϵ > 0 (Line 1) and an initial strategy for the team (x(0)1 , . . . ,x (0) n ) = x(0) ∈ X := ×nk=1 Xk (Line 2). The algorithm then proceeds in two phases: • In the first phase the team players are performing independent policy gradient steps (Line 7) with learning rate η, as defined in Line 3, while the adversary is then best responding to their joint strategy (Line 6). Both of these steps can be performed in polynomial time under oracle access to the game (see Remark 2). This process is repeated for T iterations, with T as defined in Line 4. We note that Proj (·) in Line 7 stands for the Euclidean projection, ensuring that each player selects a valid strategy. The first phase is completed in Line 9, where we set x̂ according to the iterate at time t⋆, for some 0 ≤ t⋆ ≤ T − 1. As we explain in Section 3.2, selecting uniformly at random is a practical and theoretically sound way of setting t⋆. • In the second phase we are fixing the strategy of the team x̂ ∈ X , and the main goal is to determine a strategy ŷ ∈ Y so that (x̂, ŷ) is an O(ϵ)-approximate Nash equilibrium. This is accomplished in the subroutine AdvNashPolicy(x̂), which consists of solving a linear program—from the perspective of the adversary—that has polynomial size. Our analysis of the second phase of IPGMAX can be found in Section 3.3. It is worth stressing that under gradient feedback, IPGMAX requires no communication or coordination between the players in the team. Algorithm 1 Independent Policy GradientMax (IPGMAX) 1: Precision ϵ > 0 2: Initial Strategy x(0) ∈ X 3: Learning rate η := ϵ 2(1−γ)9 32S4D2( ∑n k=1 Ak+B) 3 4: Number of iterations T := 512S8D4( ∑n k=1 Ak+B) 4 ϵ4(1−γ)12 5: for t← 1, 2, . . . 
, T do 6: y(t) ← argmaxy∈Y Vρ ( x(t−1),y ) 7: x(t)k ← ProjXk ( x (t−1) k − η∇xkVρ ( x(t−1),y(t) )) ▷ for all agents i ∈ [n] 8: end for 9: x̂← x(t⋆) 10: ŷ← AdvNashPolicy(x̂) ▷ defined in Algorithm 2 11: return (x̂, ŷ) 3.2 ANALYZING INDEPENDENT POLICY GRADIENTMAX In this subsection, we establish that IPGMAX finds an ϵ-nearly stationary point x̂ of ϕ(x) := maxy∈Y Vρ(x,y) in a number of iterations T that is polynomial in the natural parameters of the game, as well as 1/ϵ; this is formalized in Proposition 3.1. First, we note the by-now standard property that the value function Vρ is L-Lipschitz continuous and ℓ-smooth, where L := √∑n k=1 Ak+B (1−γ)2 and ℓ := 2( ∑n k=1 Ak+B) (1−γ)3 (Lemma C.1). An important observation for the analysis is that IPGMAX is essentially performing gradient descent steps on ϕ(x). However, the challenge is that ϕ(x) is not necessarily differentiable; thus, our analysis relies on the Moreau envelope of ϕ, defined as follows. Definition 3.1 (Moreau Envelope). Let ϕ(x) := maxy∈Y Vρ(x,y). For any 0 < λ < 1ℓ the Moreau envelope ϕλ of ϕ is defined as ϕλ(x) := min x′∈X { ϕ(x′) + 1 2λ ∥x− x′∥2 } . (4) We will let λ := 12ℓ . Crucially, the Moreau envelope ϕλ, as introduced in (4), is ℓ-strongly convex; this follows immediately from the fact that ϕ(x) is ℓ-weakly convex, in the sense that ϕ(x) + ℓ2∥x∥ 2 is convex (see Lemma A.1). A related notion that will be useful to measure the progress of IPGMAX is the proximal mapping of a function f , defined as proxf : X ∋ x 7→ argminx′∈X { f(x′) + 12∥x ′ − x∥2 } ; the proximal point of ϕ/(2ℓ) is well-defined since ϕ is ℓ-weakly convex (Proposition A.1). We are now ready to state the convergence guarantee of IPGMAX. Proposition 3.1. Consider any ϵ > 0. If η = 2ϵ2(1− γ) and T = (1−γ) 4 8ϵ4( ∑n k=1 Ak+B) 2 , there exists an iterate t⋆, with 0 ≤ t⋆ ≤ T − 1, such that ∥∥x(t⋆) − x̃(t⋆)∥∥ 2 ≤ ϵ, where x̃(t⋆) := proxϕ/(2ℓ)(x(t ⋆)). The proof relies on the techniques of Lin et al. (2020), and it is deferred to Appendix C. The main takeaway is that O(1/ϵ4) iterations suffice in order to reach an ϵ-nearly stationary point of ϕ— in the sense that it is ϵ-far in ℓ2 distance from its proximal point. A delicate issue here is that Proposition 3.1 only gives a best-iterate guarantee, and identifying that iterate might introduce a substantial computational overhead. To address this, we also show in Corollary C.1 that by randomly selecting ⌈log(1/δ)⌉ iterates over the T repetitions of IPGMAX, we are guaranteed to recover an ϵnearly stationary point with probability at least 1− δ, for any δ > 0. 3.3 EFFICIENT EXTENSION TO NASH EQUILIBRIA In this subsection, we establish that any ϵ-nearly stationary point x̂ of ϕ, can be extended to an O(ϵ)-approximate Nash equilibrium (x̂, ŷ) for any adversarial team Markov game, where ŷ ∈ Y is the strategy for the adversary. Further, we show that ŷ can be computed in polynomial time through a carefully constructed linear program. This “extendibility” argument significantly extends a seminal characterization of Von Stengel & Koller (1997), and it is the crux in the analysis towards establishing our main result, Theorem 1.1. To this end, the techniques we leverage are more involved compared to (Von Stengel & Koller, 1997), and revolve around nonlinear programming. Specifically, in the spirit of (Filar & Vrieze, 2012, Chapter 3), the starting point of our argument is the following nonlinear program with variables (x,v) ∈ X × RS : (Q-NLP) min ∑ s∈S ρ(s)v(s) + ℓ∥x− x̂∥2 s.t. 
r(s,x, b) + γ ∑ s′∈S P(s′|s,x, b)v(s′) ≤ v(s), ∀(s, b) ∈ S × B; (Q1) x⊤k,s1 = 1, ∀(k, s) ∈ [n]× S; and (Q2) xk,s,a ≥ 0, ∀k ∈ [n], (s, a) ∈ S ×Ak. (Q3) Here, we have overloaded notation so that r(s,x, b) := Ea∼xs [r(s,a, b] and P(s′|s,x, b)) := Ea∼xs [P(s′|s,a, b)]. For a fixed strategy x ∈ X for the team, this program describes the (discounted) MDP faced by the adversary. A central challenge in this formulation lies in the nonconvexity-nonconcavity of the constraint functions, witnessed by the multilinear constraint (Q1). Importantly, unlike standard MDP formulations, we have incorporated a quadratic regularizer in the objective function; this term ensures the following property. Proposition 3.2. For any fixed x ∈ X , there is a unique optimal solution v⋆ to (Q-NLP). Further, if x̃ := proxϕ/(2ℓ)(x̂) and ṽ ∈ RS is the corresponding optimal, then (x̃, ṽ) is the global optimum of (Q-NLP). The uniqueness of the associated value vector is a consequence of Bellman’s optimality equation, while the optimality of the proximal point follows by realizing that (Q-NLP) is an equivalent formulation of the proximal mapping. These steps are formalized in Appendix B.2. Having established the optimality of (x̃, ṽ), the next step is to show the existence of nonnegative Lagrange multipliers satisfying the KKT conditions (recall Definition A.2); this is non-trivial due to the nonconvexity of the feasibility set of (Q-NLP). To do so, we leverage the so-called Arrow-Hurwicz-Uzawa constraint qualification (Theorem A.1)—a form of “regularity condition” for a nonconvex program. Indeed, in Lemma B.3 we show that any feasible point of (Q-NLP) satisfies that constraint qualification, thereby implying the existence of nonnegative Lagrange multipliers satisfying the KKT conditions for any local optimum (Corollary B.1), and in particular for (x̃, ṽ): Proposition 3.3. There exist nonnegative Lagrange multipliers satisfying the KKT conditions at (x̃, ṽ). Now the upshot is that a subset of those Lagrange multipliers λ̃ ∈ RS×B can be used to establish the extendibility of x̂ to a Nash equilibrium. Indeed, our next step makes this explicit: We construct a linear program whose sole goal is to identify such multipliers, which in turn will allow us to efficiently compute an admissible strategy for the adversary ŷ. However, determining λ̃ exactly seems too ambitious. For one, IPGMAX only granted us access to x̂, but not to x̃. On the other hand, the Lagrange multipliers λ̃ are induced by (x̃, ṽ). To address this, the constraints of our linear program are phrased in terms of (x̂, v̂), instead of (x̃, ṽ), while to guarantee feasibility we appropriately relax all the constraints of the linear program; this relaxation does not introduce a large error since ∥x̂ − x̃∥ ≤ ϵ (Proposition 3.1), and the underlying constraint functions are Lipschitz continuous—with constants that depend favorably on the game G; we formalize that in Lemma B.4. This leads to our main theorem, summarized below (see Theorem B.1 for a precise statement). Theorem 3.1. Let x̂ be an ϵ-nearly stationary point of ϕ. There exist a linear program, (LPadv), such that: (i) It has size that is polynomial in G, and all the coefficients depend on the (single-agent) MDP faced by the adversary when the team is playing a fixed strategy x̂; and (ii) It is always feasible, and any solution induces a strategy ŷ such that (x̂, ŷ) is an O(ϵ)approximate Nash equilibrium. 
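Before turning to the proof, the following self-contained sketch conveys the two phases of IPGMAX on a tiny randomly generated instance. It is not the paper's implementation: the team gradient is taken by finite differences rather than a policy-gradient estimator, the adversary's best response is computed by value iteration, and the second phase simply pairs x̂ with a plain best response instead of solving (LPadv); all function names and constants are ours.

```python
# Sketch of the structure of Algorithm 1 (IPGMAX) on a tiny random instance.
import itertools
import numpy as np

rng = np.random.default_rng(1)
S, team_sizes, B, gamma = 2, [2, 2], 2, 0.9
rho = np.array([0.5, 0.5])
joint_A = list(itertools.product(*[range(m) for m in team_sizes]))
r = rng.uniform(0.1, 0.9, size=(S, len(joint_A), B))        # adversary's reward r(s, a, b)
P = rng.random((S, len(joint_A), B, S))
P /= P.sum(axis=-1, keepdims=True)

def team_mix(x):
    """Per-state distribution over joint team actions under the product policy x."""
    probs = np.ones((S, len(joint_A)))
    for j, a in enumerate(joint_A):
        for k, ak in enumerate(a):
            probs[:, j] *= x[k][:, ak]
    return probs

def value(x, y):
    """Exact V_rho(x, y): solve the Bellman linear system for the joint policy."""
    mix = team_mix(x)
    r_pi = np.einsum('sj,sb,sjb->s', mix, y, r)
    P_pi = np.einsum('sj,sb,sjbt->st', mix, y, P)
    return rho @ np.linalg.solve(np.eye(S) - gamma * P_pi, r_pi)

def best_response(x, iters=500):
    """Adversary's optimal deterministic policy in the MDP induced by fixing x."""
    mix = team_mix(x)
    r_x = np.einsum('sj,sjb->sb', mix, r)          # r(s, x, b)
    P_x = np.einsum('sj,sjbt->sbt', mix, P)        # P(s' | s, x, b)
    V = np.zeros(S)
    for _ in range(iters):
        V = (r_x + gamma * P_x @ V).max(axis=1)
    y = np.zeros((S, B))
    y[np.arange(S), (r_x + gamma * P_x @ V).argmax(axis=1)] = 1.0
    return y

def proj_simplex(v):
    """Euclidean projection onto the probability simplex (Proj in line 7 of Algorithm 1)."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u) - 1.0
    idx = np.nonzero(u - css / (np.arange(len(v)) + 1) > 0)[0][-1]
    return np.maximum(v - css[idx] / (idx + 1), 0.0)

def grad_team(x, y, eps=1e-5):
    """Finite-difference surrogate for the gradient of V_rho w.r.t. each team player."""
    base, grads = value(x, y), []
    for k, xk in enumerate(x):
        g = np.zeros_like(xk)
        for pos in np.ndindex(*xk.shape):
            xp = [xi.copy() for xi in x]
            xp[k][pos] += eps
            g[pos] = (value(xp, y) - base) / eps
        grads.append(g)
    return grads

# Phase 1: independent projected gradient steps for the team, best responses for the adversary.
x = [np.full((S, m), 1.0 / m) for m in team_sizes]
eta = 0.05
for _ in range(200):
    y = best_response(x)
    g = grad_team(x, y)
    for k in range(len(x)):
        for s in range(S):
            x[k][s] = proj_simplex(x[k][s] - eta * g[k][s])   # team descends on V_rho

# Phase 2 (placeholder for AdvNashPolicy): pair x_hat with a plain best response.
y_hat = best_response(x)
print("V_rho at the returned profile:", value(x, y_hat))
```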
The proof of this theorem carefully leverages the structure of adversarial team Markov games, along with the KKT conditions we previously established in Proposition 3.3. The algorithm for computing the policy for the adversary is summarized in Algorithm 2 of Appendix B. A delicate issue with Theorem 3.1, and in particular with the solution of (LPadv), is whether one can indeed efficiently simulate the environment faced by the adversary. Indeed, in the absence of any structure, determining the coefficients of the linear program could scale exponentially with the number of players; this is related to a well-known issue in computational game theory, revolving around the exponential blow-up of the input space as the number of players increases (Papadimitriou & Roughgarden, 2008). As is standard, we bypass this by assuming access to natural oracles that ensure we can efficiently simulate the environment faced by the adversary (Remark 2). 4 FURTHER RELATED WORK In this section, we highlight certain key lines of work that relate to our results in the context of adversarial team Markov games. We stress that the related literature on multi-agent reinforcement learning (MARL) is too vast to even attempt to faithfully cover here. For some excellent recent overviews of the area, we refer the interested reader to (Yang & Wang, 2020; Zhang et al., 2021a) and the extensive lists of references therein. Team Games. The study of team games has been a prolific topic of research in economic theory and group decision theory for many decades; see, e.g., (Marschak, 1955; Groves, 1973; Radner, 1962; Ho & Chu, 1972). A more modern key reference point to our work is the seminal paper of Von Stengel & Koller (1997) that introduced the notion of team-maxmin equilibrium (TME) in the context of normal-form games. A TME profile is a mixed strategy for each team member so that the minimal expected team payoff over all possible responses of the adversary—who potentially knows the play of the team—is the maximum possible. While TME’s enjoy a number of compelling properties, being the optimal equilibria for the team given the lack of coordination, they suffer from computational intractability even in 3-player team games (Hansen et al., 2008; Borgs et al., 2010).3 Nevertheless, practical algorithms have been recently proposed and studied for computing them in multiplayer games (Zhang & An, 2020a;b; Basilico et al., 2017). It is worth pointing out that team equilibria are also useful for extensive-form two-player zero-sum games where one of the players has imperfect recall (Piccione & Rubinstein, 1997). The intractability of TME has motivated the study of a relaxed equilibrium concept that incorporates a correlation device (Farina et al., 2018; Celli & Gatti, 2018; Basilico et al., 2017; Zhang & An, 2020b; Zhang & Sandholm, 2021; Zhang et al., 2022b; Carminati et al., 2022; Zhang et al., 2022a); namely, TMECor. In TMECor players are allowed to select correlated strategies. Despite the many compelling aspects of TMECor as a solution concept in team games, even ex ante coordination or correlated randomization—beyond the structure of the game itself—can be overly expensive or even infeasible in many applications (Von Stengel & Koller, 1997). Further, even TMECor is NPhard to compute (in the worst-case) for imperfect-information extensive-form games (EFGs) (Chu & Halpern, 2001), although fixed-parameter-tractable (FPT) algorithms have recently emerged for natural classes of EFGs (Zhang & Sandholm, 2021; Zhang et al., 2022b). 
3Hansen et al. (2008); Borgs et al. (2010) establish FNP-hardness and inapproximability for general 3- player games, but their argument readily applies to 3-player team games as well. On the other hand, the computational aspects of the standard Nash equilibrium (NE) in adversarial team games is not well-understood, even in normal-form games. In fact, it is worth pointing out that Von Neumann’s celebrated minimax theorem (von Neumann & Morgenstern, 2007) does not apply in team games, rendering traditional techniques employed in two-player zero-sum games of little use. Indeed, Schulman & Vazirani (2017) provided a precise characterization of the duality gap between the two teams based on the natural parameters of the problem, while Kalogiannis et al. (2021) showed that standard no-regret learning dynamics such as gradient descent and optimistic Hedge could fail to stabilize to mixed NE even in binary-action adversarial team games. Finally, we should also point out that although from a complexity-theoretic standpoint our main result (Theorem 1.1) establishes a fully polynomial time approximate scheme (FPTAS), since the dependence on the approximation error ϵ is poly(1/ϵ), an improvement to poly(log(1/ϵ)) is precluded even in normal-form games unless CLS ⊆ P (an unlikely event); this follows as adversarial team games capture potential games (Kalogiannis et al., 2021), wherein computing mixed Nash equilibria is known to be complete for the class CLS = PPAD ∩ PLS (Babichenko & Rubinstein, 2021). Multi-agent RL. Computing Nash equilibria has been a central endeavor in multi-agent RL. While some algorithms have been proposed, perhaps most notably the Nash-Q algorithm (Hu & Wellman, 1998; 2003), convergence to Nash equilibria is only guaranteed under severe restrictions on the game. More broadly, the long-term behavior of independent policy gradient methods (Schulman et al., 2015) is still not well-understood. Before all else, from the impossibility result of Hart & Mas-Colell, universal convergence to Nash equilibria is precluded even for normal-form games; this is aligned with the computational intractability (PPAD-completeness) of Nash equilibria even in two-player general-sum games (Daskalakis et al., 2009; Chen et al., 2009). Surprisingly, recent work has also established hardness results in turn-based stochastic games, rendering even the weaker notion of (stationary) CCEs intractable (Daskalakis et al., 2022; Jin et al., 2022). As a result, the existing literature has inevitably focused on specific classes of games, such as Markov potential games (Leonardos et al., 2021; Ding et al., 2022; Zhang et al., 2021b; Chen et al., 2022; Maheshwari et al., 2022; Fox et al., 2022) or two-player zero-sum Markov games (Daskalakis et al., 2020; Wei et al., 2021; Sayin et al., 2021; Cen et al., 2021; Sayin et al., 2020). As we pointed out earlier, adversarial Markov team games can unify and extend those settings (Section 2.3). More broadly, identifying multi-agent settings for which Nash equilibria are provably efficiently computable is recognized as an important open problem in the literature (see, e.g., (Daskalakis et al., 2020)), boiling down to one of the main research question of this paper (Question (⋆)). We also remark that certain guarantees for convergence to Nash equilibria have been recently obtained in a class of symmetric games (Emmons et al., 2022)—including symmetric team games. 
Finally, weaker solution concepts relaxing either the Markovian or the stationarity properties have also recently attracted attention (Daskalakis et al., 2022; Jin et al., 2021). 5 CONCLUSIONS Our main contribution in this paper is the first polynomial algorithm for computing (stationary) Nash equilibria in adversarial team Markov games, an important class of games in which a team of uncoordinated but identically-interested players is competing against an adversarial player. We argued that this setting serves as a step towards modeling more realistic multi-agent applications that feature both competing and cooperative interests. There are many interesting directions for future research. One caveat of our main algorithm (IPGMAX) is that it requires a separate subroutine for computing the optimal policy of the adversary. It is plausible that a carefully designed two-timescale policy gradient method can efficiently reach a Nash equilibrium, which would yield fully model-free algorithms for adversarial team Markov games by obviating the need to solve a linear program. Techniques from the literature on constrained MDPs (Ying et al., 2022) could also be useful for computing the policy of the adversary in a more scalable way. Furthermore, exploring different solution concepts—beyond Nash equilibria—could also be a fruitful avenue for the future. Indeed, allowing some limited form of correlation between the players in the team could lead to more efficient algorithms; whether that form of coordination is justified (arguably) depends to a large extent on the application at hand. Finally, returning to Question (⋆), a more ambitious agenda revolves around understanding the fundamental structure of games for which computing Nash equilibria is provably computationally tractable. ACKNOWLEDGMENTS We are grateful to the anonymous ICLR reviewers for their valuable feedback. Ioannis Anagnostides thanks Gabriele Farina and Brian H. Zhang for helpful discussions. Ioannis Panageas would like to acknowledge a start-up grant. Part of this project was done while he was a visiting research scientist at the Simons Institute for the Theory of Computing for the program “Learning and Games”. Vaggos Chatziafratis was supported by a start-up grant of UC Santa Cruz, the Foundations of Data Science Institute (FODSI) fellowship at MIT and Northeastern, and part of this work was carried out at the Simons Institute for the Theory of Computing. Emmanouil V. Vlatakis-Gkaragkounis is grateful for financial support by the Google-Simons Fellowship, Pancretan Association of America and Simons Collaboration on Algorithms and Geometry. This project was completed while he was a visiting research fellow at the Simons Institute for the Theory of Computing. Additionally, he would like to acknowledge the following series of NSF-CCF grants under the numbers 1763970/2107187/1563155/1814873.
1. What is the focus and contribution of the paper on computing Nash Equilibria in an adversarial team Markov game? 2. What are the strengths of the proposed algorithm, particularly in reducing the time complexity? 3. What are the weaknesses of the paper regarding the clarity of certain parts of the algorithm? 4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? 5. Are there any concerns or questions regarding the computational cost of certain steps in the algorithm?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper This paper provides an efficient algorithm to compute Nash equilibria in an adversarial team Markov game. Specifically, in this zero-sum Markov game, a team of players tries to gain as much reward as possible, and an adversary tries to lose as little reward as possible. The authors propose an algorithm called IPGMAX. In this algorithm, all the team players first perform several policy gradient descent steps, and this procedure returns the team players' policies in the Nash equilibrium. Then the algorithm uses a procedure to find the adversary's response to the team players' policies. The authors show that the team players' policies, together with the response of the adversarial policy, form a Nash equilibrium, and the time cost of the algorithm is polynomial, which can be much better than the exponential ones in prior works. Strengths And Weaknesses Strength Computing Nash equilibria in Markov games is an important task, and reducing the time complexity from exponential to polynomial is also a significant contribution. Weaknesses Several parts of the IPGMAX algorithm are not very clear (for details please see the questions below). My Questions Proposition 3.1 shows that there must exist such a t∗, and Corollary C.1 shows that with high probability, there is one such t∗ in the randomly chosen set. However, it is still not very clear to me how you can find this t∗ in this set. Do you mean that it is easy (with low computation cost) to check whether x(t) is good or not? How much is the computation cost here? In line 6 of Algorithm 1, we need to look for the best response of the adversary. What is the computation cost here? In line 7 of Algorithm 1, we need to compute the gradient of Vρ. What is the computation cost here? It is mentioned that the IPGMAX algorithm is a decentralized one (which may avoid communication between players), but I am wondering whether line 6 and line 7 in Algorithm 1 could be done without any communication. Do you mean that these steps could be done even if all the players (e.g., the team players) do not know the others' policies from the last time step? Clarity, Quality, Novelty And Reproducibility I think there are some issues with clarity (see the Questions above).
ICLR
Title Variational Autoencoders for Opponent Modeling in Multi-Agent Systems Abstract Multi-agent systems exhibit complex behaviors that emanate from the interactions of multiple agents in a shared environment. In this work, we are interested in controlling one agent in a multi-agent system and successfully learn to interact with the other agents that have fixed policies. Modeling the behavior of other agents (opponents) is essential in understanding the interactions of the agents in the system. By taking advantage of recent advances in unsupervised learning, we propose modeling opponents using variational autoencoders. Additionally, many existing methods in the literature assume that the opponent models have access to opponent’s observations and actions during both training and execution. To eliminate this assumption, we propose a modification that attempts to identify the underlying opponent model, using only local information of our agent, such as its observations, actions, and rewards. The experiments indicate that our opponent modeling methods achieve equal or greater episodic returns in reinforcement learning tasks against another modeling method. 1 INTRODUCTION In recent years, several promising works (Mnih et al., 2015; Schulman et al., 2015a; Mnih et al., 2016) have arisen in deep reinforcement learning (RL), leading to fruitful results in single-agent scenarios. In this work, we are interested in using single-agent RL in multi-agent systems, where we control one agent and the other agents (opponents) in the environment have fixed policies. The agent should be able to successfully interact with a diverse set of opponents as well as generalize to new unseen opponents. One effective way to address this problem is opponent modeling. The opponent models output specific characteristics of the opponents based on their trajectories. By successfully modeling the opponents, the agent can reason about opponents’ behaviors and goals and adjust its policy to achieve the optimal outcome. There is a rich literature of modeling opponents in the multiagent systems (Albrecht & Stone, 2018). Several recent works have proposed learning opponent models using deep learning architectures (He et al., 2016; Raileanu et al., 2018; Grover et al., 2018a; Rabinowitz et al., 2018). In this work, we focus on learning opponent models using Variational Autoencoders (VAEs) (Kingma & Welling, 2014). This work is, to the best of our knowledge, the first attempt to use VAEs in multi-agent scenarios. VAE are generative models that are commonly used for learning representations of the data, and various works use them in RL for learning representations of the environment (Igl et al., 2018; Ha & Schmidhuber, 2018; Zintgraf et al., 2019). We first propose a VAE for learning opponent representations in multi-agent systems based on the opponent trajectories. A shortcoming of this approach and most opponent modeling methods, as will be presented in Section 2, is that they require access to opponent’s information, such as observations and actions, during training as well as execution. This assumption is too limiting in the majority of scenarios. For example, consider Poker, where each agent never has access to the opponent’s observations. Nevertheless, during Poker, humans can reason about the opponent’s behaviors and goals using only their local observations. For example, an increase in the table’s pot could mean that the opponent either holds strong cards or is bluffing. 
Based on the idea that an agent can reason about an opponent’s model using its observations, actions, and rewards in a recurrent fashion, we propose a second VAE-based architecture. The encoder of the VAE learns to represent opponents’ models conditioned only on local information, removing the requirement to access the opponents’ information during execution. To summarize our contribution, in this work, we explore VAEs for opponent modeling in multi-agent systems. We are not interested in VAEs as generative models but as methods for learning representations. We evaluate our proposed methodology using a toy example and the commonly used Multi-agent Particle Environment (Mordatch & Abbeel, 2017). We evaluate the quality of the learned representations, and the episodic returns that RL algorithms can achieve. The experiments indicate that opponent modeling without opponents’ information can perform the same or even better in RL compared to models that access the opponent’s information. 2 RELATED WORK Learning Opponent Models. In this work, we are interested in opponent modeling methods that use neural networks to learn representations of the opponents. He et al. (2016) proposed an opponent modeling method that learns a modeling network to reconstruct the opponent’s actions given the opponent’s observations. Raileanu et al. (2018) developed an algorithm for learning to infer opponents’ goals using the policy of the controlled agent. Grover et al. (2018a) proposed an encoder-decoder method for modeling the opponent’s policy. The encoder learns a point-based representation of different opponents’ trajectories, and the decoder learns to reconstruct the opponent’s policy given samples from the embedding space. Additionally, Grover et al. (2018a) introduce an objective to separate embeddings of different agents into different clusters: d(z_+, z_-, z) = 1 / (1 + e^{|z - z_-|_2 - |z - z_+|_2})^2 (1) where z_+ and z are embeddings of the same agent from two different episodes and the embedding z_- is generated from the episode of a different agent. Rabinowitz et al. (2018) proposed the Theory of Mind network (ToMnet), which learns embedding-based representations of opponents for meta-learning. Tacchetti et al. (2018) proposed RFM to model opponents using graph neural networks. A common assumption among these methods, which this work aims to eliminate, is that access to opponents’ trajectories is available during execution. Representation Learning in Reinforcement Learning. Another topic that has recently received significant attention is representation learning in RL. Using unsupervised learning techniques to learn low-dimensional representations of the MDP has led to significant improvements in RL. Ha & Schmidhuber (2018) proposed a VAE-based model and a forward model to learn state representations of the environment. Hausman et al. (2018) learned task embeddings and interpolated them to solve harder tasks. Igl et al. (2018) used a VAE for learning representations in partially-observable environments. Gupta et al. (2018) proposed MAESN, which learns Gaussian embeddings to represent different tasks during meta-training and manages to quickly adapt to new tasks during meta-testing. The work of Zintgraf et al. (2019) is closely related, where Zintgraf et al. proposed a recurrent VAE model, which receives as input the observation, action, and reward of the agent, and learns a variational distribution of tasks. Rakelly et al. (2019) used representations from an encoder for off-policy meta-RL. 
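For concreteness, a small sketch of the discrimination objective in equation (1) is given below; only the formula follows Grover et al. (2018a), while the NumPy implementation and the example embeddings are ours.

```python
# Sketch of the discrimination objective d(z+, z-, z) from equation (1).
import numpy as np

def discrimination_objective(z_plus, z_minus, z):
    """Small when z is much closer to z_plus (same agent) than to z_minus (other agent)."""
    gap = np.linalg.norm(z - z_minus) - np.linalg.norm(z - z_plus)
    return 1.0 / (1.0 + np.exp(gap)) ** 2

same_a, same_b = np.array([1.0, 0.2]), np.array([0.9, 0.1])   # same agent, two episodes
other = np.array([-1.0, 1.5])                                  # embedding of a different agent
print(discrimination_objective(same_a, other, same_b))         # close to 0 when well separated
```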
Note that, all these works have been applied for learning representations of tasks or properties of the environments. On the contrary, our approach is focused on learning representations of the opponents. 3 BACKGROUND 3.1 REINFORCEMENT LEARNING Markov Decision Processes (MDPs) are commonly used to model decision making problems. An MDP consists of the set of states S, the set of actions A, the transition function, P (s′|s,a), which is the probability of the next state, s′, given the current state, s, and the action, a, and the reward function, r(s′,a, s), that returns a scalar value conditioned on two consecutive states and the intermediate action. A policy function is used to choose an action given a state, which can be stochastic a ∼ π(a|s) or deterministic a = µ(s). Given a policy π, the state value function is defined as V (st) = Eπ[ ∑H i=t γ i−trt|s = st] and the state-action value (Q-value) Q(st,at) = Eπ[ ∑H i=t γ i−trt|s = st, a = at], where 0 ≤ γ ≤ 1 is the discount factor and H is the finite horizon of the episode. The goal of RL is to compute the policy that maximizes state value function V , when the transition and the reward functions are unknown. There is a large number of RL algorithms; however, in this work, we focus on two actor-critic algorithms; the synchronous Advantage Actor-Critic (A2C) (Mnih et al., 2016; Dhariwal et al., 2017) and the Deep Deterministic Policy Gradient (DDPG) (Silver et al., 2014; Lillicrap et al., 2015). DDPG is an off-policy algorithm, using an experience replay for breaking the correlation between consecutive samples and target networks for stabilizing the training (Mnih et al., 2015). Given an actor network with parameters θ and a critic network with parameter φ, the gradient updates are performed using the following equations. min φ 1 2 EB [(r + γ ·Qtarget,φ′(s′, µtarget,θ′(s′))−Qφ(s,a))2] min θ −EB [Qφ(s, µθ(s))] (2) On the other hand, A2C is an on-policy actor-critic algorithm, using parallel environments to break the correlation between consecutive samples. The actor-critic parameters are optimized by: min θ,φ EB [−Â log πθ(a|s) + 1 2 (r + γVφ(s ′)− Vφ(s))2] (3) where the advantage term, Â, can be computed using the Generalized Advantage Estimation (GAE) (Schulman et al., 2015b). 3.2 VARIATIONAL AUTOENCODERS Consider samples from a dataset x ∈ X that are generated from some hidden (latent) random variable z based on a generative distribution pu(x|z) with unknown parameter u and a prior distribution on the latent variables, which we assume is a Gaussian with 0 mean and unit variance p(z) = N (z;0, I). We are interested in approximating the true posterior p(z|x) with a variational parametric distribution qw(z|x) = N (z;µ,Σ,w). Kingma & Welling (2014) proposed the Variational Autoencoders (VAE) to learn this distribution. Starting from the Kullback-Leibler (KL) divergence from the approximate to the true posterior DKL(qw(z|x)‖p(z|x)), the lower bound on the evidence log p(x) is derived as: log p(x) ≥ Ez∼qw(z|x)[log pu(x|z)]−DKL(qw(z|x)‖p(z)) (4) The architecture consists of an encoder which receives a sample x and generates the Gaussian variational distribution p(z|x;w). The decoder receives a sample from the Gaussian variational distribution and reconstructs the initial input x. The architecture is trained using the reparameterization trick Kingma & Welling (2014). Higgins et al. (2017) proposed β-VAE, where a parameter β ≥ 0 is used to control the trade-off between the reconstruction loss and the KL-divergence. 
L(x; w, v) = E_{z∼q_w(z|x)}[log p_u(x|z)] − β D_KL(q_w(z|x) ‖ p(z)) (5) 4 APPROACH 4.1 PROBLEM FORMULATION We consider a modified Markov Game (Littman, 1994), which consists of N agents I = {1, 2, ..., N}, the set of states S, the set of actions A = A_1 × A_{−1}, the transition function P : S × A × S → R and the reward function r : S × A × S → R^N. We consider partially observable settings, where each agent i has access only to its local observation o_i and reward r_i. Additionally, two sets of pretrained opponents are provided, T = {I_{−1,m}}_{m=1}^{M} and G = {I_{−1,m}}_{m=1}^{M}, which are responsible for providing the joint action A_{−1}. Note that by opponent we refer to I_{−1,m}, which consists of one or more agents, independently of the type of the interactions (cooperative, mixed, or competitive). At the beginning of each episode, we sample a pretrained opponent from the set T during training or from G during testing. Our goal is to train agent 1 using RL to maximize the average return against opponents from the training set T and generalize to opponents sampled from the test set G. Note that when we refer to agent 1, we drop the subscript. max E_π[E_T[∑_t γ^t r_t]] (6) 4.2 VARIATIONAL AUTOENCODER WITH ACCESS TO OPPONENT’S INFORMATION We assume that K episode trajectories are provided for each pretrained opponent j ∈ T, E^{(j)} = {τ^{(j,k)}_{−1}}_{k=0}^{K−1}, where τ^{(j,k)}_{−1} = {o_{−1,t}, a_{−1,t}}_{t=0}^{H}, and o_{−1,t}, a_{−1,t} are the observations and actions of the opponent at time step t in the trajectory. These trajectories are generated from the opponents in set T, which are represented in the latent space by the variable z and for which we assume there exists an unknown model p_u(τ_{−1}|z). Our goal is to approximate the unknown posterior, p(z|τ_{−1}), using a variational Gaussian distribution N(µ, Σ; w) with parameters w. We consider using a β-VAE for the sequential task: L(τ_{−1}; w, u) = E_{z∼q_w(z|τ_{−1})}[log p_u(τ_{−1}|z)] − β D_KL(q_w(z|τ_{−1}) ‖ p(z)) (7) We can further subtract the discrimination objective (equation 1) that was proposed by Grover et al. (2018a). Since the discrimination objective is always non-negative, we derive and optimize a lower bound as: L(τ_{−1}; w, u) ≥ E_{z∼q_w(z|τ_{−1})}[log p_u(τ_{−1}|z)] − β D_KL(q_w(z|τ_{−1}) ‖ p(z)) − λ d(E(z_+), E(z_−), E(z)) (8) The discrimination objective receives as input the mean of the variational Gaussian distribution, produced by three different trajectories. Despite the less tight lower bound, the discrimination objective will separate the opponents in the embedding space, which could potentially lead to higher episodic returns. At each time step t, the recurrent encoder network generates a latent sample z_t, which is conditioned on the opponent’s trajectory τ_{−1,:t} until this time step. The KL divergence can be written as: D_KL(q_w(z|τ_{−1}) ‖ p(z)) = ∑_{t=1}^{H} D_KL(q_w(z_t|τ_{−1,:t}) ‖ p(z_t)) (9) The lower bound consists of the reconstruction loss of the trajectory, which involves the observations and actions of the opponent. The opponent’s action at each time step depends on its observation and the opponent’s policy, which is represented by the latent variable z. We use a decoder that consists of fully-connected layers; however, a recurrent network can be used if we instead assume that the opponent decides its actions based on the history of its observations. Additionally, the observation at each time step depends only on the dynamics of the environment and the actions of the agents and not on the identity of the opponent. 
Therefore, the reconstruction loss factorizes as: log p_u(τ_{−1}|z) = ∑_{t=1}^{H} log [p_u(a_{−1,t}|o_{−1,t}, z_t) p_u(o_{−1,t}|o_{t−1}, o_{−1,t−1}, a_{t−1}, a_{−1,t−1})] ∝ ∑_{t=1}^{H} log p_u(a_{−1,t}|o_{−1,t}, z_t) (10) From the equation above, we observe that the loss is the reconstruction of the opponent’s policy given the current observation and a sample from the latent variable. Overall, our proposed VAE takes the form of a Conditional VAE (Sohn et al., 2015). Figure 1 illustrates the diagram of the VAE. The full pseudocode of the method is provided in Appendix D. 4.3 VARIATIONAL AUTOENCODER WITHOUT ACCESS TO OPPONENT’S INFORMATION In Sections 1 and 2, it was noted that most agent modeling methods assume that access to the opponent’s observations and actions is available both during training and execution. To eliminate this assumption, we propose a VAE that uses a parametric variational distribution which is conditioned on the observation-action-reward triplet of the agent that we control and a variable d indicating whether the episode has terminated; q_w(z|τ = (o, a, r, d)). More precisely, our goal is to approximate the true posterior, which is conditioned on the opponent’s information, with a variational distribution that only depends on local information. Such local information has been successfully used in a recurrent fashion in meta-RL settings (Wang et al., 2016; Duan et al., 2016). We start by computing the KL divergence between the two distributions: D_KL(q_w(z|τ) ‖ p(z|τ_{−1})) = E_{z∼q_w(z|τ)}[log q_w(z|τ) − log p(z|τ_{−1})] (11) By following the works of Kingma & Welling (2014) and Higgins et al. (2017) and using Jensen’s inequality, the VAE objective can be written as: L(τ, τ_{−1}; w, v) = E_{z∼q_w(z|τ)}[log p_u(τ_{−1}|z)] − β D_KL(q_w(z|τ) ‖ p(z)) (12) The reconstruction loss factorizes exactly as in equation 10. From equation 12, it can be seen that the variational distribution only depends on locally available information. Since only the encoder is required to generate the opponent’s model during execution, this approach removes the assumption that access to the opponent’s observations and actions is available during execution. Figure 2 presents the diagram of the VAE. 4.4 REINFORCEMENT LEARNING TRAINING We use the latent variable z, augmented with the agent’s observation, to condition the policy of our agent, which is optimized using RL. Consider the augmented observation space O′ = O × Z, where O is the original observation space of our agent in the Markov game, and Z is the representation space of the opponent models. The advantage of learning the policy on O′ compared to O is that the policy can adapt to different z ∈ Z. After training the variational autoencoder that was described in Section 4.2, we use it to train our agent against the opponents in the set T. We use the DDPG (Lillicrap et al., 2015) algorithm for this task. We did not manage to optimize the representation jointly with the policy, with either DDPG or A2C. At the beginning of each episode, we sample an opponent from the set T. The agent’s input is the local observation and a sample from the variational distribution. We refer to this as OMDDPG (Opponent Modeling DDPG), and the full pseudocode is provided in Appendix D. We optimize the second proposed VAE method jointly with the policy of the controlled agent. We use the A2C algorithm, similarly to the meta-learning algorithm RL2 (Wang et al., 2016; Duan et al., 2016). In the rest of this paper, we refer to this as SMA2C (Self Modeling A2C); a sketch of its encoder is shown below. 
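The following is a minimal PyTorch sketch of the SMA2C encoder referenced above: a recurrent network over the controlled agent's observation, action, reward, and termination flag that outputs a diagonal Gaussian over z, sampled with the reparameterization trick. The layer sizes, the choice of a GRU, and the omission of the decoder and of the reconstruction term of equation 12 are our own simplifications.

```python
# Minimal sketch of a local-information encoder q_w(z | o, a, r, d) for SMA2C.
import torch
import torch.nn as nn

class LocalInfoEncoder(nn.Module):
    def __init__(self, obs_dim, act_dim, latent_dim=8, hidden_dim=64):
        super().__init__()
        # Input at each step: (o_t, a_t, r_t, d_t).
        self.rnn = nn.GRU(obs_dim + act_dim + 2, hidden_dim, batch_first=True)
        self.mu = nn.Linear(hidden_dim, latent_dim)
        self.log_std = nn.Linear(hidden_dim, latent_dim)

    def forward(self, obs, act, rew, done, hidden=None):
        # obs: (batch, T, obs_dim); act: (batch, T, act_dim); rew, done: (batch, T, 1)
        x = torch.cat([obs, act, rew, done], dim=-1)
        out, hidden = self.rnn(x, hidden)
        mu, std = self.mu(out), self.log_std(out).exp()
        z = mu + std * torch.randn_like(std)                               # reparameterization
        kl = 0.5 * (mu.pow(2) + std.pow(2) - 2 * std.log() - 1).sum(-1)    # KL to N(0, I)
        return z, kl, hidden

enc = LocalInfoEncoder(obs_dim=13, act_dim=5)                # illustrative dimensions
o, a = torch.randn(4, 25, 13), torch.randn(4, 25, 5)
r, d = torch.randn(4, 25, 1), torch.zeros(4, 25, 1)
z, kl, _ = enc(o, a, r, d)    # z augments the actor-critic input at every time step
print(z.shape, kl.shape)      # torch.Size([4, 25, 8]) torch.Size([4, 25])
```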
The actor’s and the critic’s input is the local observation and a sample from the latent space. We back-propagate the gradient from both the actor and the critic loss to the parameters of the encoder. Therefore, the encoder’s parameters are shaped to maximize both the VAE’s objective as well as the discounted sum of rewards. The full pseudocode is provided in Appendix D. 5 EXPERIMENTS 5.1 TOY EXAMPLE We will first provide a toy example to illustrate SMA2C. We consider the classic repeated game of prisoner’s dilemma with a constant episode length of 25 time steps. We control one agent, and the other agent is selected randomly between two possible opponent policies. The first opponent always defects, while the second opponent follows a tit-for-tat policy. At the beginning of the episode, one of the two opponents is randomly selected. We train SMA2C against the two possible opponents. The agent that we control has to identify the correct opponent, and the optimal policy, it can achieve, is to defect against opponent one and collaborate with opponent two. Figure 3 shows the payoff matrix, the embedding space at the last time step of the episode, and the episodic return that SMA2C and A2C achieve during training. Note that, based on the payoff matrix, the optimal average episodic return that can be achieved is 24.5. 5.2 EXPERIMENTAL FRAMEWORK To evaluate the proposed methods in more complex environments, we used the Multi-agent Particle Environment (MPE) (Mordatch & Abbeel, 2017), which provides several different multi-agent environments. The environments have continuous observation, discrete action space, and fixed-length episodes of 25 time steps. Four environments are used for evaluating the proposed methodology; the speaker-listener, the double-speaker listener, the predator-prey, and the spread. In Appendix A, descriptions of the different environments are provided. During the experiments, we evaluated the two proposed algorithms OMDDPG and SMA2C as well as the modeling method of Grover et al. (2018a) combined with DDPG (Lillicrap et al., 2015). In all the environments, we pretrain ten different opponents, where five are used for training and five for testing. In the speaker-listener environment, we control the listener, and we create ten different speakers using different communication messages for different colors. In the double speakerlistener, which consists of two agents that have to be both listener and speaker simultaneously, we control the first agent. We create a diverse set of opponents that have different communication messages similar to speaker-listener, while they learn to navigate using the MADDPG algorithm (Lowe et al., 2017), with different initial random seeds. In the predator-prey environment, we control the prey and pretrain the three other agents in the environment using MADDPG with different initial parameters. Similarly, in spread, we control one of the agents, while the opponents are pretrained using MADDPG. We use agent generalization graphs (Grover et al., 2018b) to evaluate the generalization of the proposed methods. We evaluate two types of generalizations in this work. First, we evaluate the episodic returns against the opponents that are used for training, T, which Grover et al. (2018b) call ”weak generalization”. Secondly, we evaluate against unknown opponents from set G, which is called ”strong generalization”. A figure of an agent generalization graph is provided in Appendix E. 
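To make the toy example of Section 5.1 concrete, here is a short sketch of the two opponents and of the behavior the controlled agent is expected to learn; the payoff values below are a generic prisoner's dilemma matrix chosen by us, since the paper's exact matrix (its Figure 3) is not reproduced here.

```python
# Sketch of the Section 5.1 toy setting: 25-step repeated prisoner's dilemma against an
# opponent that is either "always defect" or "tit-for-tat". Payoffs are illustrative.
import random

COOPERATE, DEFECT = 0, 1
PAYOFF = {  # (our action, opponent action) -> our reward
    (COOPERATE, COOPERATE): 3, (COOPERATE, DEFECT): 0,
    (DEFECT, COOPERATE): 4,    (DEFECT, DEFECT): 1,
}

def always_defect(_history):
    return DEFECT

def tit_for_tat(history):
    return COOPERATE if not history else history[-1]   # mirror our previous action

def rollout(policy, opponent, steps=25):
    history, total = [], 0
    for _ in range(steps):
        a, b = policy(history), opponent(history)
        total += PAYOFF[(a, b)]
        history.append(a)
    return total

# The behavior the agent should learn: defect against "always defect", cooperate with tit-for-tat.
opponent = random.choice([always_defect, tit_for_tat])
best_reply = (lambda h: DEFECT) if opponent is always_defect else (lambda h: COOPERATE)
print(opponent.__name__, "->", rollout(best_reply, opponent))
```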
5.3 EVALUATION OF THE REPRESENTATIONS To evaluate the representations created from our models we will estimate the Mutual Information (MI) between the variational distribution (q(z|τ ) or q(z|τ−1)) and the prior on the opponents’ identities, which is uniform. This is a common method to estimate the quality of the representation (Chen et al., 2018; Hjelm et al., 2019). To estimate the MI, we use the Mutual Information Neural Estimation (MINE) (Belghazi et al., 2018). Note that, the upper bound of the MI, the entropy of the uniform distribution, in our experiments is 1.61. We gather 200 trajectories against each opponent in T, where 80% of them are used for training and the remaining for testing. The visualization of the embedding space, for the predator-prey environment, is provided in Appendix C. From Table 1, we observe that the method of Grover et al. (2018a) achieves significantly higher values of MI. We believe that the main reason behind this is the discrimination objective that implicitly increases MI. This is apparent in the MI values of OMDDPG as well. SMA2C manages to create opponent representations, based only on the local information of our agent, that have information about the opponent identities. Additionally, based on Figure 4, we observe that the value of MI is not directly related to the episodic returns in RL tasks. In Appendix B, we demonstrate that when we detach the encoder’s parameters from the policy optimization, the MI decreases. 5.4 REINFORCEMENT LEARNING PERFORMANCE We evaluate the proposed opponent modeling methods in RL settings. In Figure 4, the episodic returns for the three methods in all four environments are presented. Every line corresponds to the average return over five runs with different initial seeds, and the shadowed part represents the 95% confidence interval. We evaluate the models every 1000 episodes for 100 episodes. During the evaluation, we sample an embedding from the variational distribution at each time step, and the agent follows the greedy policy. The hyperparameters for all the experiments in Figure 4 were optimized on weak generalization scenarios, against opponents from set T. Details about the implementation and hyperparameters that were used for generating the figures are presented in Appendix D. OMDDPG is an upper baseline for SMA2C achieving higher returns in all environments during weak generalization. However, OMDDPG, as well as Grover et al. (2018a), tend to overfit and perform poorly during strong generalization in the speaker-listener and double speaker-listener environment. SMA2C achieves higher returns that Grover et al. (2018a) in more than half of the scenarios. Below, in the Section 5.5, we perform an ablation study on different inputs in the encoder of SMA2C. In Appendix B, we evaluate whether back-propagating the RL loss to the parameters of the encoder, in SMA2C, affects the episodic returns. 5.5 ABLATION STUDY ON SMA2C INPUTS We perform an ablation study to assess the performance requirements of the SMA2C. Our proposed method utilizes the observation, action, reward, and termination sequence to generate the opponent’s model. We use different combinations of these elements in the encoder and compare the average episodic returns. In Figure 5, the average episode return is presented for three different cases; SMA2C with all inputs, SMA2C with only observation and action as inputs and SMA2C with only observation as input; for all four environments. 
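Returning to the mutual-information evaluation of Section 5.3, the estimator can be sketched as follows; only the Donsker-Varadhan lower bound comes from MINE (Belghazi et al., 2018), while the statistics network, the synthetic embeddings, and the training details are ours.

```python
# Sketch of a MINE-style estimate of I(z; opponent identity) on synthetic embeddings.
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

def mine_lower_bound(stat_net, z, ident_onehot):
    """Donsker-Varadhan bound: E_joint[T] - log E_marginal[exp(T)]."""
    joint = stat_net(torch.cat([z, ident_onehot], dim=-1)).squeeze(-1)
    shuffled = ident_onehot[torch.randperm(len(ident_onehot))]     # break the pairing
    marginal = stat_net(torch.cat([z, shuffled], dim=-1)).squeeze(-1)
    return joint.mean() - (torch.logsumexp(marginal, dim=0) - math.log(marginal.numel()))

n_opponents, latent_dim = 5, 8
stat_net = nn.Sequential(nn.Linear(latent_dim + n_opponents, 64), nn.ReLU(), nn.Linear(64, 1))
opt = torch.optim.Adam(stat_net.parameters(), lr=1e-3)
centers = 3.0 * torch.randn(n_opponents, latent_dim)   # synthetic per-opponent embedding means

for _ in range(300):                                    # maximize the bound over the statistics net
    ids = torch.randint(0, n_opponents, (256,))
    z = centers[ids] + 0.5 * torch.randn(256, latent_dim)
    mi = mine_lower_bound(stat_net, z, F.one_hot(ids, n_opponents).float())
    opt.zero_grad()
    (-mi).backward()
    opt.step()

print(f"estimated MI (nats): {mi.item():.3f}; entropy of the uniform prior is log 5 = 1.609")
```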
[Figure 5 (curves: SMA2C Full, SMA2C Observation-Action, SMA2C Observation): Ablation on the episodic returns for different inputs in the VAE of SMA2C for weak (top row) and strong (bottom row) generalization in all four environments.]

5.6 ABLATION STUDY ON THE DISCRIMINATION OBJECTIVE

Another element of this work is the utilization of the discrimination objective of Grover et al. (2018a) in the VAE loss. To better understand how the separation of opponents in the embedding space is related to RL performance, Figure 6 shows the episodic return during training for OMDDPG with and without the discrimination objective. Using the discrimination objective has a significant impact on the episodic returns in the speaker-listener and the double speaker-listener environments.

6 CONCLUSION

To conclude this work, we proposed two methods for opponent modeling in multi-agent systems using variational autoencoders. First, we proposed OMDDPG, a VAE-based method that uses the common assumption that access to opponents’ information is available during execution. The goal of this work is to motivate opponent modeling without access to opponents’ information. The core contribution of this work is SMA2C, which learns representations without requiring access to the opponents’ information during execution. We performed a thorough experimental evaluation of the proposed methodology. We evaluated the quality of the representations produced by our models as well as the episodic returns that can be achieved in RL tasks. The experiments conclusively indicate that access to the opponent’s information is not necessary during execution, eliminating a long-standing assumption of the prior work. Additionally, we provided evidence that the relationship between the MI and the RL performance is not apparent.

In the future, we would like to research how these models can be used against non-stationary opponents. In particular, there are two scenarios worth investigating. The first is multi-agent deep RL, where different agents learn concurrently, leading to non-stationarity in the environment, which prevents the agents from learning optimal policies. The second is whether the proposed models can deal with opponents that try to deceive and exploit the controlled agent (Ganzfried & Sandholm, 2011; 2015).

A DETAILS OF THE EXPERIMENTAL ENVIRONMENT

A.1 SPEAKER-LISTENER ENVIRONMENT

The speaker-listener environment consists of two agents, called the speaker and the listener, as well as three designated landmarks, each with a different color: red, green, or blue. At the beginning of the episode, the listener is assigned a color, which can be red, green, or blue. The task of the listener is to navigate to the landmark that has the same color. However, the color of the listener can only be observed by the speaker.
So the speaker has to learn to communicate the correct color to the listener. The listener should be able to understand the generated message of the speaker and to navigate to the correct color. The observation space of the listener has 13 dimensions, which consist of the position of the listener, the positions of the three landmarks in the 2D environment, and the 5-dimensional communication message of the speaker; its action space has five dimensions. The speaker has a 3-dimensional observation space, which is a vector assigned to the color of the listener, and a 5-dimensional action space. We use our method to train the listener to understand a set of different pretrained speakers that use different communication messages. Both the speaker and the listener share the same reward, which is the negative Euclidean distance between the listener and the correct landmark. In Figure 7a, an instance from the speaker-listener environment is presented.

A.2 DOUBLE SPEAKER-LISTENER ENVIRONMENT

The double speaker-listener environment consists of two agents and three designated landmarks, each with a different color (red, green, or blue), similarly to the speaker-listener environment. The only difference is that both agents are simultaneously speakers and listeners. Therefore, at the beginning of the episode, each agent has a color that can only be observed by the other agent. Each agent must learn both to communicate a message to the other agent and to navigate to the correct landmark. The observation space of each agent has 16 dimensions, where 13 of them are the same as the listener’s in the previous environment and the other three are the vector assigned to the color of the opponent, while the action space is 5-dimensional for the navigation actions and 5-dimensional for the communication message as well. The reward is the average of the negative Euclidean distances between each agent and the correct landmark. This environment is significantly more difficult compared to the speaker-listener because our agent has to both infer the color that the other agent observes and communicate the correct message that the opponent expects. In Figure 7b, an instance from the double speaker-listener environment is presented.

A.3 PREDATOR-PREY ENVIRONMENT

This environment consists of one prey agent and three predator agents. At the beginning of the episode, the prey and the predators are randomly placed on a 2D map. The goal of the prey is to avoid being caught by the predators. In the environment, there are additionally two large black obstacles that block the agents. The advantage of the prey is that it can move faster than the three adversaries. This environment, compared to the previous two, is competitive. We deliberately chose this environment to show that our method is agnostic to the environment setting. In our work, we apply the proposed algorithm to the prey agent to examine whether it can avoid a large number of different pretrained predator agents. The observation of each agent has 14 dimensions representing the agents’ positions as well as the obstacles in the 2D space, while their action space consists of 5 actions. In Figure 7c, an instance from the predator-prey environment is presented.

A.4 SPREAD ENVIRONMENT

The spread environment consists of three large agents and three landmarks.
At the beginning of the episode, the three agents and the three landmarks are spread randomly in the 2D space. The goal of the agents is to navigate to three different landmarks without colliding. The reward is the negative distance of each agent from the landmark. In the case of a collision, there is an additional negative reward. The reward is the same for all agents. All the agents have the same observation space, which consists of 18 dimensions, while their action space consists of 5 actions. In Figure 7d, an instance from the spread environment is presented.

B ABLATION STUDY ON RL BACK-PROPAGATION

We evaluate SMA2C without back-propagating the gradients of the RL loss to the parameters of the encoder. Therefore, the encoder is only trained based on the ELBO. Figure 8 verifies that not performing back-propagation does not significantly affect the episodic returns that SMA2C achieves during weak generalization. Additionally, we compute the MI between the embeddings and the opponents’ identities, similarly to Section 5.3, for the double speaker-listener environment. We observe that the MI decreases when we do not back-propagate the RL loss to the embeddings of the encoder.

C EMBEDDING VISUALIZATION

Figure 9 visualizes the embedding space for the three different evaluated algorithms in the predator-prey environment. Note that the embeddings were generated using interactions between the opponents from the set T and the trained agent that we control.

D IMPLEMENTATION DETAILS

The pseudocode for training the VAE from Section 4.2 is provided below. We consider that 1000 trajectories are provided for each one of the opponents, which are generated against trained agents. We train the VAE for 1000 epochs, using Adam (Kingma & Ba, 2014) with a 10−3 learning rate.

Algorithm 1 Pseudocode of the proposed VAE algorithm
for i = 1 : M do
    Sample an episode and compute the embedding z ← q(sample(E_i))
    Sample a different episode and compute the embedding z+ ← q(sample(E_i))
    for j = 1 : M do
        if i == j then continue
        Sample an episode and compute the embedding z− ← q(sample(E_j))
        Update the VAE parameters by maximizing equation 8

We train OMDDPG for 2 million steps in all the experiments. Since the encoder of the VAE has an LSTM layer and has to be trained on sequential data, we use a modified experience replay that enables sampling of whole episodes, which has also been used by Hausknecht & Stone (2015). The DDPG algorithm requires a continuous action space. However, since our experimental environments have discrete action spaces, we use the Gumbel-Softmax trick (Jang et al., 2016) to create differentiable samples from a discrete distribution. Additionally, we regularize the actor loss by adding the squared logits in order to prevent them from getting large values. The pseudocode of OMDDPG is presented in Algorithm 2 below.
Algorithm 2 Pseudocode of the OMDDPG algorithm
for e = 1 : K episodes do
    opp ← sample(opponents)
    while episode is not finished do
        Get the observation of our agent o and the observation of the opponent o−1
        Compute the action of the opponent a−1
        Get a sample from the encoder z ← q(z|o−1, a−1)
        Compute the action a of the agent using exploration
        Perform the actions in the environment and get new observations and rewards
        Store the sequences of both agents in the experience replay
        if t % update_frequency == 0 then
            Sample a batch of sequences from the experience replay
            Update the actor-critic parameters using equation 2, where s ← concat(o, z)
            Update the target networks

All neural networks have 2 hidden layers with the ReLU (Maas et al., 2013) activation function. We use the Adam optimizer (Kingma & Ba, 2014) for all experiments. The target networks are updated with τ = 0.01. The hidden dimension of all VAE layers is 100. The parameter λ in the VAE loss (equation 8) is always 1. We perform gradient updates every 50 time steps with a batch of 100 episodes. Table 3 summarizes the rest of the hyperparameters.

We train A2C for 2 million steps in the speaker-listener environment, 5 million in the double speaker-listener, 10 million in the predator-prey, and 15 million in the spread environment. A2C, as an on-policy algorithm, is significantly less sample-efficient compared to DDPG, and as a result, more training steps are required. The pseudocode of SMA2C is presented in Algorithm 3 below. We subtract the policy entropy from the actor loss (Mnih et al., 2016) to ensure sufficient exploration. The loss that SMA2C minimizes can be written as (a code sketch of this objective is given at the end of this appendix):

min_{φ,θ,w,u} (1/2) E_B[ (r + γ V_φ(s′) − V_φ(s))² − Â log π_θ(a|s) − δ log p_u(τ−1|z) + β D_KL(q_w(z|τ) ‖ p(z)) − b H(π) ]    (13)

Algorithm 3 Pseudocode of the SMA2C algorithm
Create D parallel environments
t ← 0
for e = 1 : K episodes do
    Sample D opponents: opp ← sample(opponents)
    while episode is not finished do
        for every environment in D do
            Get the observation of our agent o and the observation of the opponent o−1
            Get a sample from the encoder z ← q(z|o, a, r, d)
            Compute the action a of the agent using exploration
            t ← t + 1
        Perform the actions in the environments and get new observations, rewards and done
        if t % update_frequency == 0 then
            Gather the sequences from all environments into a single batch
            Update the actor-critic and VAE parameters using equation 13, where s ← concat(o, z)

For the advantage computation, we use GAE (Schulman et al., 2015b) with λ_GAE = 0.95. We create 10 parallel environments to break the correlation between consecutive samples. The actor and the critic share all hidden layers in all the environments except the double speaker-listener. We use the Adam optimizer (Kingma & Ba, 2014), and we clip the gradient norm to 0.5. Table 4 summarizes the rest of the hyperparameters.

E AGENT GENERALIZATION GRAPHS

Figure 10 presents the agent generalization graph that was used in all experiments in this paper.
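To make the joint SMA2C objective in equation 13 concrete, here is a hedged PyTorch sketch of the per-batch loss. It assumes the rollout workers and the recurrent encoder have already produced the tensors listed in the signature; the argument and coefficient names (delta, beta, b) follow the equation, while their default values are placeholders rather than the per-environment hyperparameters of Tables 3 and 4.

```python
# Minimal sketch of the SMA2C joint loss (equation 13), under the assumptions
# stated above; not the authors' exact implementation.
import torch

def sma2c_loss(values, next_values, rewards, log_probs, advantages,
               recon_log_prob, kl, entropy,
               gamma=0.99, delta=1.0, beta=1.0, b=0.01):
    # Critic: squared TD error against the bootstrapped target.
    critic_loss = 0.5 * (rewards + gamma * next_values.detach() - values).pow(2).mean()
    # Actor: policy-gradient term weighted by the (GAE) advantage.
    actor_loss = -(advantages.detach() * log_probs).mean()
    # VAE terms: opponent-policy reconstruction and KL to the prior.
    vae_loss = -delta * recon_log_prob.mean() + beta * kl.mean()
    # Entropy bonus subtracted from the loss to encourage exploration.
    return critic_loss + actor_loss + vae_loss - b * entropy.mean()
```

A single optimizer step over this scalar loss updates the actor, the critic, and the encoder-decoder parameters jointly, which is what lets the RL gradient shape the encoder's embeddings as described in Section 4.4.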
1. What is the main contribution of the paper regarding opponent modeling in multi-agent games?
2. What are the strengths and weaknesses of the proposed VAE framework for agent/opponent modeling?
3. How critical is reward information for opponent identification in the proposed encoder architecture?
4. How are episode trajectories obtained for each pretrained opponent, and how exactly are the opponents pretrained?
5. Can the triplet loss be reinterpreted or expressed as a specific prior on the opponent model?
6. What additional baselines could be included to better understand the effect of opponent modeling?
7. Why is mutual information between the approximate posterior q and the prior p used as the policy embedding quality metric, and how does the triplet loss degrade this metric?
8. How does SMA2C underperform compared to methods that use opponent trajectories, and what does this say about the importance of local observations versus opponent embeddings?
Review
The authors propose a variational autoencoding (VAE) framework for agent/opponent modeling in multi-agent games. Interestingly, as the authors show, it looks like it is possible to compute accurate embeddings of the opponent policies without having access to opponent observations and actions. The paper is well written, and the methods are simple yet still interesting/informative, but there are a few questions that I find necessary to be addressed.

Methods:
1. I find the idea of learning to infer embeddings of the opponent policies from the agent's own local observations quite interesting. Intuitively, it makes sense -- since the opponent's policy effectively specifies the dynamics of the environment (from the agent's perspective), the opponent's behavior must be reflected in the agent's observations. Comparing figures 1 and 2, the proposed encoder architecture also uses information about the reward (and episode termination). How critical is this information for opponent identification? Would it work without r_{t-1} and d_{t-1}?
2. Sec. 4.2: "We assume a number of K provided episode trajectories for each pretrained opponent" -- how exactly are these trajectories obtained? Similarly, how exactly are the opponents pretrained? (Self-play, hardcoded, or something else?)
3. As the authors mention, the triplet loss that discriminates between the opponents loosens the lower bound. Since the regularized objective is still a lower bound, I wonder if the triplet loss can be re-interpreted/expressed as a specific prior on the opponent model?

Experiments:
1. Sec. 5.1: to understand the effect of opponent modeling, it would be nice to see how baselines perform in this setup against a randomly picked opponent (otherwise, the curve in Fig. 3-c is not informative). I suggest the following baselines: tit-for-tat (hardcoded), a couple of classical learning algorithms for iterated games (e.g., policy hill-climbing, WoLF), and an agent that learns using policy gradients but without opponent embeddings. Without any baselines, Sec. 5.1 seems like a sanity check which just shows that the implementation works, unless I am missing something.
2. Sec. 5.3: (1) Why does the mutual information between the approximate posterior q and the prior p make sense as the policy embedding quality metric here? (2) Could you intuitively (or formally) justify the fact that the triplet loss degrades the MI metric? Right now, this is stated as a fact but not justified. (3) It looks like Grover et al. (2018a) used deterministic trajectory encoders; how exactly is MI measured in that case?
3. If I understand correctly from Fig. 4, SMA2C (which uses local information) underperforms as compared to the methods that use opponent trajectories in 6/8 cases. To me, this somewhat confirms the point opposite to what the authors claim -- local observations, while containing some information about the opponent, are still inferior. Also, having baselines that do not use opponent embeddings on the charts of Fig. 4 would help understand the contribution of opponent modeling.

----

I acknowledge reading the authors' response, which addressed some of my questions/concerns to some extent. However, I believe that while estimating accurate embeddings of the opponent behavior from the agent's observations only is interesting, the approach has limitations, and I feel those are not studied in-depth enough (e.g., as a reader, I would like to understand if and when I should use the proposed approach and expect it to work). My assessment of the paper stays the same.
ICLR
1. What is the purpose and significance of using VAEs to model fixed-policy opponents in a reinforcement learning setting?
2. Are there any novel or original aspects of the proposed approach? If so, what are they?
3. Can you provide more information or context regarding the experiments conducted in the paper? What are they intended to demonstrate?
4. How does the reviewer assess the benefits or advantages of the presented approach compared to other methods?
5. Is there anything else that could be added or improved upon in the paper to enhance its value or impact?
Review
The authors propose to use VAEs to model fixed-policy opponents in a reinforcement learning setting. They use these models to augment existing RL algorithms in situations where the environment can be factorized into opponents.

I really fail to see the point of this paper. All the techniques presented in the paper are standard, and the way they are put together is not particularly original. I found no specific claims about the benefits the presented approach offers over alternatives. The experiments are described from a technical perspective, but I did not understand what they are actually supposed to show.
ICLR
Title Variational Autoencoders for Opponent Modeling in Multi-Agent Systems Abstract Multi-agent systems exhibit complex behaviors that emanate from the interactions of multiple agents in a shared environment. In this work, we are interested in controlling one agent in a multi-agent system and successfully learn to interact with the other agents that have fixed policies. Modeling the behavior of other agents (opponents) is essential in understanding the interactions of the agents in the system. By taking advantage of recent advances in unsupervised learning, we propose modeling opponents using variational autoencoders. Additionally, many existing methods in the literature assume that the opponent models have access to opponent’s observations and actions during both training and execution. To eliminate this assumption, we propose a modification that attempts to identify the underlying opponent model, using only local information of our agent, such as its observations, actions, and rewards. The experiments indicate that our opponent modeling methods achieve equal or greater episodic returns in reinforcement learning tasks against another modeling method. 1 INTRODUCTION In recent years, several promising works (Mnih et al., 2015; Schulman et al., 2015a; Mnih et al., 2016) have arisen in deep reinforcement learning (RL), leading to fruitful results in single-agent scenarios. In this work, we are interested in using single-agent RL in multi-agent systems, where we control one agent and the other agents (opponents) in the environment have fixed policies. The agent should be able to successfully interact with a diverse set of opponents as well as generalize to new unseen opponents. One effective way to address this problem is opponent modeling. The opponent models output specific characteristics of the opponents based on their trajectories. By successfully modeling the opponents, the agent can reason about opponents’ behaviors and goals and adjust its policy to achieve the optimal outcome. There is a rich literature of modeling opponents in the multiagent systems (Albrecht & Stone, 2018). Several recent works have proposed learning opponent models using deep learning architectures (He et al., 2016; Raileanu et al., 2018; Grover et al., 2018a; Rabinowitz et al., 2018). In this work, we focus on learning opponent models using Variational Autoencoders (VAEs) (Kingma & Welling, 2014). This work is, to the best of our knowledge, the first attempt to use VAEs in multi-agent scenarios. VAE are generative models that are commonly used for learning representations of the data, and various works use them in RL for learning representations of the environment (Igl et al., 2018; Ha & Schmidhuber, 2018; Zintgraf et al., 2019). We first propose a VAE for learning opponent representations in multi-agent systems based on the opponent trajectories. A shortcoming of this approach and most opponent modeling methods, as will be presented in Section 2, is that they require access to opponent’s information, such as observations and actions, during training as well as execution. This assumption is too limiting in the majority of scenarios. For example, consider Poker, where each agent never has access to the opponent’s observations. Nevertheless, during Poker, humans can reason about the opponent’s behaviors and goals using only their local observations. For example, an increase in the table’s pot could mean that the opponent either holds strong cards or is bluffing. 
Based on the idea that an agent can reason about an opponent's model using its observations, actions, and rewards in a recurrent fashion, we propose a second VAE-based architecture. The encoder of this VAE learns to represent opponents' models conditioned only on local information, removing the requirement to access the opponents' information during execution. To summarize our contribution, in this work, we explore VAEs for opponent modeling in multi-agent systems. We are not interested in VAEs as generative models but as methods for learning representations. We evaluate our proposed methodology using a toy example and the commonly used Multi-agent Particle Environment (Mordatch & Abbeel, 2017). We evaluate the quality of the learned representations and the episodic returns that RL algorithms can achieve. The experiments indicate that opponent modeling without opponents' information can perform the same or even better in RL compared to models that access the opponent's information. 2 RELATED WORK Learning Opponent Models. In this work, we are interested in opponent modeling methods that use neural networks to learn representations of the opponents. He et al. (2016) proposed an opponent modeling method that learns a modeling network to reconstruct the opponent's actions given the opponent's observations. Raileanu et al. (2018) developed an algorithm for learning to infer opponents' goals using the policy of the controlled agent. Grover et al. (2018a) proposed an encoder-decoder method for modeling the opponent's policy. The encoder learns a point-based representation of different opponents' trajectories, and the decoder learns to reconstruct the opponent's policy given samples from the embedding space. Additionally, Grover et al. (2018a) introduce an objective to separate the embeddings of different agents into different clusters: d(z+, z−, z) = 1 / (1 + e^{‖z − z−‖_2 − ‖z − z+‖_2})² (1) where z+ and z are embeddings of the same agent from two different episodes and the embedding z− is generated from the episode of a different agent. Rabinowitz et al. (2018) proposed the Theory of Mind network (ToMnet), which learns embedding-based representations of opponents for meta-learning. Tacchetti et al. (2018) proposed RFM to model opponents using graph neural networks. A common assumption among these methods, which this work aims to eliminate, is that access to opponents' trajectories is available during execution. Representation Learning in Reinforcement Learning. Another topic that has recently received significant attention is representation learning in RL. Using unsupervised learning techniques to learn low-dimensional representations of the MDP has led to significant improvements in RL. Ha & Schmidhuber (2018) proposed a VAE-based model and a forward model to learn state representations of the environment. Hausman et al. (2018) learned task embeddings and interpolated them to solve harder tasks. Igl et al. (2018) used a VAE for learning representations in partially observable environments. Gupta et al. (2018) proposed MAESN, which learns Gaussian embeddings to represent different tasks during meta-training and manages to quickly adapt to new tasks during meta-testing. The work of Zintgraf et al. (2019) is closely related; they proposed a recurrent VAE model that receives the agent's observation, action, and reward as input and learns a variational distribution over tasks. Rakelly et al. (2019) used representations from an encoder for off-policy meta-RL.
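To make the discrimination objective in equation 1 above concrete, the following is a minimal sketch in PyTorch; the tensor shapes, batching, and function name are illustrative assumptions rather than the authors' implementation.

import torch

def discrimination_objective(z, z_pos, z_neg):
    # Sketch of equation 1: d(z+, z-, z).
    # z and z_pos are embeddings of the same opponent from two different
    # episodes; z_neg comes from a different opponent. Smaller values mean
    # better-separated embeddings, which is why the term is subtracted from
    # the lower bound in equation 8. All inputs: (batch, embedding_dim).
    dist_neg = torch.norm(z - z_neg, dim=-1)  # ||z - z-||_2
    dist_pos = torch.norm(z - z_pos, dim=-1)  # ||z - z+||_2
    d = 1.0 / (1.0 + torch.exp(dist_neg - dist_pos)) ** 2
    return d.mean()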
Note that all of the representation learning works above have been applied to learning representations of tasks or properties of the environment. In contrast, our approach focuses on learning representations of the opponents. 3 BACKGROUND 3.1 REINFORCEMENT LEARNING Markov Decision Processes (MDPs) are commonly used to model decision-making problems. An MDP consists of the set of states S, the set of actions A, the transition function P(s′|s, a), which is the probability of the next state s′ given the current state s and the action a, and the reward function r(s′, a, s), which returns a scalar value conditioned on two consecutive states and the intermediate action. A policy function is used to choose an action given a state and can be stochastic, a ∼ π(a|s), or deterministic, a = µ(s). Given a policy π, the state value function is defined as V(s_t) = E_π[Σ_{i=t}^{H} γ^{i−t} r_i | s = s_t] and the state-action value (Q-value) as Q(s_t, a_t) = E_π[Σ_{i=t}^{H} γ^{i−t} r_i | s = s_t, a = a_t], where 0 ≤ γ ≤ 1 is the discount factor and H is the finite horizon of the episode. The goal of RL is to compute the policy that maximizes the state value function V when the transition and reward functions are unknown. There is a large number of RL algorithms; however, in this work, we focus on two actor-critic algorithms: the synchronous Advantage Actor-Critic (A2C) (Mnih et al., 2016; Dhariwal et al., 2017) and the Deep Deterministic Policy Gradient (DDPG) (Silver et al., 2014; Lillicrap et al., 2015). DDPG is an off-policy algorithm that uses an experience replay for breaking the correlation between consecutive samples and target networks for stabilizing the training (Mnih et al., 2015). Given an actor network with parameters θ and a critic network with parameters φ, the gradient updates are performed using the following equations: min_φ (1/2) E_B[(r + γ · Q_{target,φ′}(s′, µ_{target,θ′}(s′)) − Q_φ(s, a))²], min_θ −E_B[Q_φ(s, µ_θ(s))] (2) On the other hand, A2C is an on-policy actor-critic algorithm that uses parallel environments to break the correlation between consecutive samples. The actor-critic parameters are optimized by: min_{θ,φ} E_B[−Â log π_θ(a|s) + (1/2)(r + γ V_φ(s′) − V_φ(s))²] (3) where the advantage term Â can be computed using Generalized Advantage Estimation (GAE) (Schulman et al., 2015b). 3.2 VARIATIONAL AUTOENCODERS Consider samples from a dataset x ∈ X that are generated from some hidden (latent) random variable z based on a generative distribution p_u(x|z) with unknown parameters u and a prior distribution on the latent variables, which we assume is a Gaussian with zero mean and unit variance, p(z) = N(z; 0, I). We are interested in approximating the true posterior p(z|x) with a variational parametric distribution q_w(z|x) = N(z; µ, Σ; w). Kingma & Welling (2014) proposed the Variational Autoencoder (VAE) to learn this distribution. Starting from the Kullback-Leibler (KL) divergence from the approximate to the true posterior, D_KL(q_w(z|x) ‖ p(z|x)), the lower bound on the evidence log p(x) is derived as: log p(x) ≥ E_{z∼q_w(z|x)}[log p_u(x|z)] − D_KL(q_w(z|x) ‖ p(z)) (4) The architecture consists of an encoder, which receives a sample x and generates the Gaussian variational distribution q_w(z|x), and a decoder, which receives a sample from the Gaussian variational distribution and reconstructs the initial input x. The architecture is trained using the reparameterization trick (Kingma & Welling, 2014). Higgins et al. (2017) proposed β-VAE, where a parameter β ≥ 0 is used to control the trade-off between the reconstruction loss and the KL divergence: L(x; w, u) = E_{z∼q_w(z|x)}[log p_u(x|z)] − β D_KL(q_w(z|x) ‖ p(z)) (5)
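As a concrete illustration of the β-VAE objective in equation 5, the sketch below computes the corresponding loss for a diagonal-Gaussian posterior and a categorical likelihood; the function and tensor names are illustrative assumptions, not the implementation used in this paper.

import torch
import torch.nn.functional as F

def beta_vae_loss(x, recon_logits, mu, log_var, beta=1.0):
    # Negative of equation 5 (a loss to minimize), assuming a diagonal
    # Gaussian posterior q_w(z|x) = N(mu, diag(exp(log_var))) and a
    # categorical likelihood p_u(x|z) given by recon_logits.
    #   x:            (batch,) integer reconstruction targets
    #   recon_logits: (batch, num_classes) decoder outputs
    #   mu, log_var:  (batch, latent_dim) encoder outputs
    recon_nll = F.cross_entropy(recon_logits, x, reduction="mean")
    # Closed-form KL(N(mu, sigma^2) || N(0, I)), averaged over the batch.
    kl = -0.5 * torch.sum(1 + log_var - mu.pow(2) - log_var.exp(), dim=-1).mean()
    return recon_nll + beta * kl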
4 APPROACH 4.1 PROBLEM FORMULATION We consider a modified Markov Game (Littman, 1994), which consists of N agents I = {1, 2, ..., N}, the set of states S, the set of actions A = A_1 × A_{−1}, the transition function P : S × A × S → R and the reward function r : S × A × S → R^N. We consider partially observable settings, where each agent i has access only to its local observation o_i and reward r_i. Additionally, two sets of pretrained opponents are provided, T = {I_{−1,m}}_{m=1}^{M} and G = {I_{−1,m}}_{m=1}^{M}, which are responsible for providing the joint action A_{−1}. Note that by opponent we refer to I_{−1,m}, which consists of one or more agents, independently of the type of interaction (cooperative, mixed, or competitive). At the beginning of each episode, we sample a pretrained opponent from the set T during training or from G during testing. Our goal is to train agent 1 using RL to maximize the average return against opponents from the training set T and to generalize to opponents sampled from the test set G. Note that when we refer to agent 1 we drop the subscript. The training objective is: max E_π[E_T[Σ_t γ^t r_t]] (6) 4.2 VARIATIONAL AUTOENCODER WITH ACCESS TO OPPONENT'S INFORMATION We assume a number of K provided episode trajectories for each pretrained opponent j ∈ T, E^{(j)} = {τ_{−1}^{(j,k)}}_{k=0}^{K−1}, where τ_{−1}^{(j,k)} = {o_{−1,t}, a_{−1,t}}_{t=0}^{H}, and o_{−1,t}, a_{−1,t} are the observation and action of the opponent at time step t in the trajectory. These trajectories are generated from the opponents in set T, which are represented in the latent space by the variable z and for which we assume there exists an unknown model p_u(τ_{−1}|z). Our goal is to approximate the unknown posterior p(z|τ_{−1}) using a variational Gaussian distribution N(µ, Σ; w) with parameters w. We consider using a β-VAE for the sequential task: L(τ_{−1}; w, u) = E_{z∼q_w(z|τ_{−1})}[log p_u(τ_{−1}|z)] − β D_KL(q_w(z|τ_{−1}) ‖ p(z)) (7) We can further subtract the discrimination objective (equation 1) that was proposed by Grover et al. (2018a). Since the discrimination objective is always non-negative, we derive and optimize a lower bound as: L(τ_{−1}; w, u) ≥ E_{z∼q_w(z|τ_{−1})}[log p_u(τ_{−1}|z)] − β D_KL(q_w(z|τ_{−1}) ‖ p(z)) − λ d(E(z+), E(z−), E(z)) (8) The discrimination objective receives as input the means of the variational Gaussian distributions produced from three different trajectories. Despite the less tight lower bound, the discrimination objective will separate the opponents in the embedding space, which could potentially lead to higher episodic returns. At each time step t, the recurrent encoder network generates a latent sample z_t, which is conditioned on the opponent's trajectory τ_{−1,:t} up to this time step. The KL divergence can be written as: D_KL(q_w(z|τ_{−1}) ‖ p(z)) = Σ_{t=1}^{H} D_KL(q_w(z_t|τ_{−1,:t}) ‖ p(z_t)) (9) The lower bound consists of the reconstruction loss of the trajectory, which involves the observations and actions of the opponent. The opponent's action at each time step depends on its observation and the opponent's policy, which is represented by the latent variable z. We use a decoder that consists of fully-connected layers; however, a recurrent network can be used if we instead assume that the opponent decides its actions based on the history of its observations. Additionally, the observation at each time step depends only on the dynamics of the environment and the actions of the agents, and not on the identity of the opponent.
Therefore, the reconstruction loss factorizes as: log p_u(τ_{−1}|z) = Σ_{t=1}^{H} log p_u(a_{−1,t}|o_{−1,t}, z_t) p_u(o_{−1,t}|o_{t−1}, o_{−1,t−1}, a_{t−1}, a_{−1,t−1}) ∝ Σ_{t=1}^{H} log p_u(a_{−1,t}|o_{−1,t}, z_t) (10) From the equation above, we observe that the loss is the reconstruction of the opponent's policy given the current observation and a sample from the latent variable. Overall, our proposed VAE takes the form of a Conditional VAE (Sohn et al., 2015). Figure 1 illustrates the diagram of the VAE. The full pseudocode of the method is provided in Appendix D. 4.3 VARIATIONAL AUTOENCODER WITHOUT ACCESS TO OPPONENT'S INFORMATION In Sections 1 and 2, it was noted that most agent modeling methods assume that access to the opponent's observations and actions is available both during training and execution. To eliminate this assumption, we propose a VAE that uses a parametric variational distribution conditioned on the observation-action-reward triplet of the agent that we control and a variable d indicating whether the episode has terminated: q_w(z|τ = (o, a, r, d)). More precisely, our goal is to approximate the true posterior, which is conditioned on the opponent's information, with a variational distribution that only depends on local information. Using this local information in a recurrent fashion has been successful in meta-RL settings (Wang et al., 2016; Duan et al., 2016). We start by computing the KL divergence between the two distributions: D_KL(q_w(z|τ) ‖ p(z|τ_{−1})) = E_{z∼q_w(z|τ)}[log q_w(z|τ) − log p(z|τ_{−1})] (11) By following the works of Kingma & Welling (2014) and Higgins et al. (2017) and using the Jensen inequality, the VAE objective can be written as: L(τ, τ_{−1}; w, u) = E_{z∼q_w(z|τ)}[log p_u(τ_{−1}|z)] − β D_KL(q_w(z|τ) ‖ p(z)) (12) The reconstruction loss factorizes exactly as in equation 10. From equation 12, it can be seen that the variational distribution only depends on locally available information. Since only the encoder is required to generate the opponent's model during execution, this approach removes the assumption that access to the opponent's observations and actions is available during execution. Figure 2 presents the diagram of the VAE. 4.4 REINFORCEMENT LEARNING TRAINING We use the latent variable z, augmented with the agent's observation, to condition the policy of our agent, which is optimized using RL. Consider the augmented observation space O′ = O × Z, where O is the original observation space of our agent in the Markov game and Z is the representation space of the opponent models. The advantage of learning the policy on O′ compared to O is that the policy can adapt to different z ∈ Z. After training the variational autoencoder described in Section 4.2, we use it to train our agent against the opponents in the set T. We use the DDPG (Lillicrap et al., 2015) algorithm for this task. We did not manage to optimize the representation jointly with the policy with either DDPG or A2C. At the beginning of each episode, we sample an opponent from the set T. The agent's input is the local observation and a sample from the variational distribution. We refer to this as OMDDPG (Opponent Modeling DDPG), and the full pseudocode is provided in Appendix D. We optimize the second proposed VAE method jointly with the policy of the controlled agent. We use the A2C algorithm, similarly to the meta-learning algorithm RL2 (Wang et al., 2016; Duan et al., 2016). In the rest of this paper, we refer to this as SMA2C (Self Modeling A2C).
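To illustrate how the opponent embedding augments the observation at decision time (the augmented space O′ = O × Z described above), here is a minimal sketch; the encoder interface and network sizes are assumptions made for illustration, not the paper's architecture.

import torch
import torch.nn as nn

class LatentConditionedPolicy(nn.Module):
    # Actor that acts on the augmented observation space O' = O x Z:
    # the local observation is concatenated with a sample z from the
    # opponent-model posterior before the action logits are computed.
    def __init__(self, obs_dim, latent_dim, n_actions, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + latent_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, n_actions),
        )

    def forward(self, obs, z):
        return self.net(torch.cat([obs, z], dim=-1))  # action logits

# Usage sketch (assumed encoder interface): z is sampled from the encoder's
# Gaussian output using the reparameterization trick.
# mu, log_var = encoder(history)
# z = mu + torch.randn_like(mu) * (0.5 * log_var).exp()
# logits = policy(obs, z)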
The actor’s and the critic’s input is the local observation and a sample from the latent space. We back-propagate the gradient from both the actor and the critic loss to the parameters of the encoder. Therefore, the encoder’s parameters are shaped to maximize both the VAE’s objective as well as the discounted sum of rewards. The full pseudocode is provided in Appendix D. 5 EXPERIMENTS 5.1 TOY EXAMPLE We will first provide a toy example to illustrate SMA2C. We consider the classic repeated game of prisoner’s dilemma with a constant episode length of 25 time steps. We control one agent, and the other agent is selected randomly between two possible opponent policies. The first opponent always defects, while the second opponent follows a tit-for-tat policy. At the beginning of the episode, one of the two opponents is randomly selected. We train SMA2C against the two possible opponents. The agent that we control has to identify the correct opponent, and the optimal policy, it can achieve, is to defect against opponent one and collaborate with opponent two. Figure 3 shows the payoff matrix, the embedding space at the last time step of the episode, and the episodic return that SMA2C and A2C achieve during training. Note that, based on the payoff matrix, the optimal average episodic return that can be achieved is 24.5. 5.2 EXPERIMENTAL FRAMEWORK To evaluate the proposed methods in more complex environments, we used the Multi-agent Particle Environment (MPE) (Mordatch & Abbeel, 2017), which provides several different multi-agent environments. The environments have continuous observation, discrete action space, and fixed-length episodes of 25 time steps. Four environments are used for evaluating the proposed methodology; the speaker-listener, the double-speaker listener, the predator-prey, and the spread. In Appendix A, descriptions of the different environments are provided. During the experiments, we evaluated the two proposed algorithms OMDDPG and SMA2C as well as the modeling method of Grover et al. (2018a) combined with DDPG (Lillicrap et al., 2015). In all the environments, we pretrain ten different opponents, where five are used for training and five for testing. In the speaker-listener environment, we control the listener, and we create ten different speakers using different communication messages for different colors. In the double speakerlistener, which consists of two agents that have to be both listener and speaker simultaneously, we control the first agent. We create a diverse set of opponents that have different communication messages similar to speaker-listener, while they learn to navigate using the MADDPG algorithm (Lowe et al., 2017), with different initial random seeds. In the predator-prey environment, we control the prey and pretrain the three other agents in the environment using MADDPG with different initial parameters. Similarly, in spread, we control one of the agents, while the opponents are pretrained using MADDPG. We use agent generalization graphs (Grover et al., 2018b) to evaluate the generalization of the proposed methods. We evaluate two types of generalizations in this work. First, we evaluate the episodic returns against the opponents that are used for training, T, which Grover et al. (2018b) call ”weak generalization”. Secondly, we evaluate against unknown opponents from set G, which is called ”strong generalization”. A figure of an agent generalization graph is provided in Appendix E. 
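As a concrete companion to the toy example in Section 5.1, the sketch below implements the two fixed opponents (always-defect and tit-for-tat) in a fixed-length repeated prisoner's dilemma; the payoff values are a standard textbook choice assumed for illustration and are not the matrix from Figure 3.

import random

# Assumed payoff matrix: PAYOFF[(our_action, opponent_action)] is our reward,
# with 0 = cooperate and 1 = defect. Values are a common textbook choice.
PAYOFF = {(0, 0): 3, (0, 1): 0, (1, 0): 5, (1, 1): 1}

def always_defect(our_history):
    return 1

def tit_for_tat(our_history):
    # Cooperate on the first step, then copy our previous action.
    return our_history[-1] if our_history else 0

def play_episode(agent_policy, opponent_policy, length=25):
    # Runs one fixed-length episode and returns our episodic return.
    our_history, total = [], 0
    for _ in range(length):
        opponent_action = opponent_policy(our_history)
        our_action = agent_policy(our_history)
        total += PAYOFF[(our_action, opponent_action)]
        our_history.append(our_action)
    return total

# One of the two opponents is sampled uniformly at the start of each episode,
# mirroring the setup described above.
opponent = random.choice([always_defect, tit_for_tat])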
5.3 EVALUATION OF THE REPRESENTATIONS To evaluate the representations created from our models we will estimate the Mutual Information (MI) between the variational distribution (q(z|τ ) or q(z|τ−1)) and the prior on the opponents’ identities, which is uniform. This is a common method to estimate the quality of the representation (Chen et al., 2018; Hjelm et al., 2019). To estimate the MI, we use the Mutual Information Neural Estimation (MINE) (Belghazi et al., 2018). Note that, the upper bound of the MI, the entropy of the uniform distribution, in our experiments is 1.61. We gather 200 trajectories against each opponent in T, where 80% of them are used for training and the remaining for testing. The visualization of the embedding space, for the predator-prey environment, is provided in Appendix C. From Table 1, we observe that the method of Grover et al. (2018a) achieves significantly higher values of MI. We believe that the main reason behind this is the discrimination objective that implicitly increases MI. This is apparent in the MI values of OMDDPG as well. SMA2C manages to create opponent representations, based only on the local information of our agent, that have information about the opponent identities. Additionally, based on Figure 4, we observe that the value of MI is not directly related to the episodic returns in RL tasks. In Appendix B, we demonstrate that when we detach the encoder’s parameters from the policy optimization, the MI decreases. 5.4 REINFORCEMENT LEARNING PERFORMANCE We evaluate the proposed opponent modeling methods in RL settings. In Figure 4, the episodic returns for the three methods in all four environments are presented. Every line corresponds to the average return over five runs with different initial seeds, and the shadowed part represents the 95% confidence interval. We evaluate the models every 1000 episodes for 100 episodes. During the evaluation, we sample an embedding from the variational distribution at each time step, and the agent follows the greedy policy. The hyperparameters for all the experiments in Figure 4 were optimized on weak generalization scenarios, against opponents from set T. Details about the implementation and hyperparameters that were used for generating the figures are presented in Appendix D. OMDDPG is an upper baseline for SMA2C achieving higher returns in all environments during weak generalization. However, OMDDPG, as well as Grover et al. (2018a), tend to overfit and perform poorly during strong generalization in the speaker-listener and double speaker-listener environment. SMA2C achieves higher returns that Grover et al. (2018a) in more than half of the scenarios. Below, in the Section 5.5, we perform an ablation study on different inputs in the encoder of SMA2C. In Appendix B, we evaluate whether back-propagating the RL loss to the parameters of the encoder, in SMA2C, affects the episodic returns. 5.5 ABLATION STUDY ON SMA2C INPUTS We perform an ablation study to assess the performance requirements of the SMA2C. Our proposed method utilizes the observation, action, reward, and termination sequence to generate the opponent’s model. We use different combinations of these elements in the encoder and compare the average episodic returns. In Figure 5, the average episode return is presented for three different cases; SMA2C with all inputs, SMA2C with only observation and action as inputs and SMA2C with only observation as input; for all four environments. 
Figure 5: Ablation on the episodic returns for different inputs in the VAE of SMA2C (full input, observation-action, and observation only) for weak (top row) and strong (bottom row) generalization in all four environments. 5.6 ABLATION STUDY ON THE DISCRIMINATION OBJECTIVE Another element of this work is the use of the discrimination objective of Grover et al. (2018a) in the VAE loss. To better understand how the separation of opponents in the embedding space relates to RL performance, Figure 6 shows the episodic return during training for OMDDPG with and without the discrimination objective. Using the discrimination objective has a significant impact on the episodic returns in the speaker-listener and the double speaker-listener environments. 6 CONCLUSION To conclude this work, we proposed two methods for opponent modeling in multi-agent systems using variational autoencoders. First, we proposed OMDDPG, a VAE-based method that uses the common assumption that access to opponents' information is available during execution. The goal of this work is to motivate opponent modeling without access to opponent's information. The core contribution of this work is SMA2C, which learns representations without requiring access to the opponent's information during execution. We performed a thorough experimental evaluation of the proposed methodology. We evaluated the quality of the representations produced by our models as well as the episodic returns they can achieve in RL tasks. The experiments conclusively indicate that access to the opponent's information is not necessary during execution, eliminating a long-standing assumption of prior work. Additionally, we provided evidence that the relationship between the MI and the RL performance is not apparent. In the future, we would like to research how these models can be used for non-stationary opponents. Particularly, there are two scenarios worth investigating: the first is multi-agent deep RL, where different agents are learning concurrently, leading to non-stationarity in the environment, which prevents the agents from learning optimal policies. Secondly, we would like to explore whether the proposed models can deal with opponents that try to deceive and exploit the controlled agent (Ganzfried & Sandholm, 2011; 2015). A DETAILS OF THE EXPERIMENTAL ENVIRONMENT A.1 SPEAKER-LISTENER ENVIRONMENT The speaker-listener environment consists of two agents, called the speaker and the listener, as well as three designated landmarks, each with a different color: red, green, or blue. At the beginning of the episode, the listener is assigned a color, which can be red, green, or blue. The task of the listener is to navigate to the landmark that has the same color. However, the color of the listener can only be observed by the speaker.
So the speaker has to learn to communicate the correct color to the listener. The listener should be able to understand the generated message of the speaker and to navigate to the correct color. The observation space of the listener has 13 dimensions, which consists of the position of the listener, the positions of the three landmarks in the 2D environment and the 5-dimensional communication message of the speaker and its action space has five dimensions. The speaker has a 3-dimensional observation space, which is a vector assigned to the color of the listener, and 5- dimensional action space. We use our method to train the listener to be able to understand a set of different speakers, pretrained speakers that use different communication messages. Both speaker and listener share the same reward, which is the negative Euclidean distance between the listener and the correct landmark. In Figure 7a, an instance from the speaker-listener environment is presented. A.2 DOUBLE SPEAKER-LISTENER ENVIRONMENT The double speaker-listener environment consists of two agents and three designated landmarks that each one has a different color, red green or blue, similarly to the speaker-listener environment. The only difference is that both agents are simultaneously both speakers and listeners. Therefore, at the beginning of the episode, each agent has a color that can only be observed by the other agent. Each agent must learn both to communicate a message to the other agent as well as navigate to the correct landmark. The observation space of each agent has 16 dimensions, where 13 of them are the same as the listener’s in the previous environment, and the other three are the vector assigned to the color of the opponent, while the action space is 5-dimensional for the navigation actions and 5-dimensional for the communication message as well. The reward is the average of the negative Euclidean distances between each agent and the correct landmark. This environment is significantly more difficult compared to the speaker-listener because our agent has to infer both the color that the other agent observes as well as communicate the correct message that the opponent expects. In Figure 7b, an instance from the double speaker-listener environment is presented. A.3 PREDATOR-PREY ENVIRONMENT This environment consists of one prey agent and three predator agents. At the beginning of the episode, the prey and the predators are randomly placed on a 2D map. The goal of the prey is to avoid being caught by predators. In the environment, there are additionally two large black obstacles in order to block the agents. The advantage of the prey compared to the three predators is that it can move faster compared to the three adversaries. This environment, compared to the previous two, is competitive. We deliberately chose this environment in order to prove that our method is agnostic to the environment setting. In our work, we will apply the proposed algorithm in the prey agent in order to examine whether it can avoid a large number of different pretrained predator agents. The observation of each agent has 14 dimensions representing the agents’ positions as well as the obstacles in the 2D space, while their action space consists of 5 actions. In Figure 7c, an instance from the predator-prey environment is presented. A.4 SPREAD ENVIRONMENT The spread environment consists of three large agents and three landmarks as well. 
At the beginning of the episode, the three agents and the three landmarks are spread randomly in the 2D space. The goal of the agents is to navigate to three different landmarks without colliding. The reward is the negative distance of each agent from the landmark; in the case of a collision, there is an additional negative reward. The reward is the same for all agents. All the agents have the same observation space, which consists of 18 dimensions, while their action space consists of 5 actions. In Figure 7d, an instance from the spread environment is presented. B ABLATION STUDY ON RL BACK-PROPAGATION We evaluate SMA2C without back-propagating the gradients of the RL loss to the parameters of the encoder. Therefore, the encoder is only trained based on the ELBO. Figure 8 verifies that not performing back-propagation does not significantly affect the episodic returns that SMA2C achieves during weak generalization. Additionally, we compute the MI between the embeddings and the opponents' identities, similarly to Section 5.3, for the double speaker-listener environment. We observe that the MI decreases when we do not back-propagate the RL loss to the parameters of the encoder. C EMBEDDING VISUALIZATION Figure 9 visualizes the embedding space for the three different evaluated algorithms in the predator-prey environment. Note that the embeddings were generated using interactions between the opponents from the set T and the trained agent that we control. D IMPLEMENTATION DETAILS The pseudocode for training the VAE from Section 4.2 is provided below. We consider that 1000 trajectories are provided for each one of the opponents, which are generated against trained agents. We train the VAE for 1000 epochs, using Adam (Kingma & Ba, 2014) with a learning rate of 10^−3.
Algorithm 1 Pseudocode of the proposed VAE algorithm
  for i = 1 : M do
    Sample an episode and compute the embedding z ← q(sample(E_i))
    Sample a different episode and compute the embedding z+ ← q(sample(E_i))
    for j = 1 : M do
      if i == j then continue
      Sample an episode and compute the embedding z− ← q(sample(E_j))
      Update the VAE parameters by maximizing equation 8
We train OMDDPG for 2 million steps in all the experiments. Since the encoder of the VAE has an LSTM layer and has to be trained on sequential data, we use a modified experience replay that enables sampling of whole episodes, which has also been used by Hausknecht & Stone (2015). The DDPG algorithm requires a continuous action space. However, since our experimental environments have discrete action spaces, we use the Gumbel-Softmax trick (Jang et al., 2016) to create differentiable samples from a discrete distribution. Additionally, we regularize the actor loss by adding the squared logits in order to prevent them from taking large values. The pseudocode of OMDDPG is presented in Algorithm 2.
Algorithm 2 Pseudocode of the OMDDPG algorithm
  for e = 1 : K episodes do
    opp ← sample(opponents)
    while episode is not finished do
      Get the observation of our agent o and the observation of the opponent o−1
      Compute the action of the opponent a−1
      Get a sample from the encoder z ← q(z|o−1, a−1)
      Compute the action a of the agent using exploration
      Perform the actions in the environment and get new observations and rewards
      Store the sequences of both agents in the experience replay
      if t % update frequency == 0 then
        Sample a batch of sequences from the experience replay
        Update the actor-critic parameters using equation 2, where s ← concat(o, z)
        Update the target networks
All neural networks have 2 hidden layers with ReLU (Maas et al., 2013) activation functions. We use the Adam optimizer (Kingma & Ba, 2014) for all experiments. The target networks are updated with τ = 0.01. The hidden dimension for all VAE layers is 100. The parameter λ in the VAE loss (equation 8) is always 1. We perform gradient updates every 50 time steps with a batch of 100 episodes. Table 3 summarizes the rest of the hyperparameters. We train A2C for 2 million steps in the speaker-listener environment, 5 million in the double speaker-listener, 10 million in the predator-prey and 15 million in the spread environment. A2C, as an on-policy algorithm, is significantly less sample-efficient compared to DDPG, and as a result, more training steps are required. The pseudocode of SMA2C is presented in Algorithm 3. We subtract the policy entropy from the actor loss (Mnih et al., 2016) to ensure sufficient exploration. The loss that SMA2C minimizes can be written as: min_{φ,θ,w,u} (1/2) E_B[(r + γ V_φ(s′) − V_φ(s))² − Â log π_θ(a|s) − δ log p_u(τ_{−1}|z) + β D_KL(q_w(z|τ) ‖ p(z)) − b H(π)] (13)
Algorithm 3 Pseudocode of the SMA2C algorithm
  Create D parallel environments
  t ← 0
  for e = 1 : K episodes do
    Sample D opponents opp ← sample(opponents)
    while episode is not finished do
      for every environment in D do
        Get the observation of our agent o and the observation of the opponent o−1
        Get a sample from the encoder z ← q(z|o, a, r, d)
        Compute the action a of the agent using exploration
        t ← t + 1
        Perform the actions in the environment and get new observations, rewards and done
      if t % update frequency == 0 then
        Gather the sequences from all environments into a single batch
        Update the actor-critic and VAE parameters using equation 13, where s ← concat(o, z)
For the advantage computation, we use GAE (Schulman et al., 2015b) with λ_GAE = 0.95. We create 10 parallel environments to break the correlation between consecutive samples. The actor and the critic share all hidden layers in all the environments except the double speaker-listener. We use the Adam optimizer (Kingma & Ba, 2014), and we clip the gradient norm to 0.5. Table 4 summarizes the rest of the hyperparameters. E AGENT GENERALIZATION GRAPHS Figure 10 presents the agent generalization graph that was used in all experiments in this paper.
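For reference, the sketch below shows a generic GAE(λ) advantage computation of the kind used in Appendix D; the function signature and use of NumPy are illustrative assumptions rather than the authors' code.

import numpy as np

def gae_advantages(rewards, values, last_value, gamma=0.99, lam=0.95):
    # Generalized Advantage Estimation (Schulman et al., 2015b).
    #   rewards:    shape (T,) rewards of one trajectory
    #   values:     shape (T,) critic estimates V(s_t)
    #   last_value: bootstrap value for the state after the final step
    values_ext = np.append(values, last_value)
    advantages = np.zeros(len(rewards), dtype=np.float64)
    gae = 0.0
    for t in reversed(range(len(rewards))):
        delta = rewards[t] + gamma * values_ext[t + 1] - values_ext[t]
        gae = delta + gamma * lam * gae
        advantages[t] = gae
    return advantages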
1. What is the main contribution of the paper in multi-agent systems?
2. What are the strengths of the proposed approach, particularly in modeling opponents?
3. Do you have any concerns about the approach, especially when dealing with non-fixed opponents?
4. How does the reviewer assess the potential exploitability of the agent by opponents?
5. Can you provide relevant references to address the issue of exploitability in game theory?
Review
Review This paper proposes a reasonable and natural way of modeling opponents in multi-agent systems by learning a latent space with a VAE. This latent space will ideally learn the strategy that the opponent is playing to inform the agent's policy. With fixed opponents, the results across many tasks are convincing. My one concern with this modeling approach is that it will start breaking down if the opponents are *not* fixed as this potentially makes the agent more exploitable. The opponents could learn to send adversarial sequences to the opponent model that make it appear like they are playing one strategy but then they could change strategies at a critical point where it is too late for the agent to recover or perform optimally. This type of exploitability has been explored in the game theory community in [1,2] and the references therein. [1] Ganzfried, S., & Sandholm, T. Game theory-based opponent modeling in large imperfect-information games. AAMAS 2011. [2] Ganzfried, S., & Sandholm, T. Safe opponent exploitation. TEAC 2015.
ICLR
Title A Bilingual Generative Transformer for Semantic Sentence Embedding Abstract Semantic sentence embedding models encode natural language sentences into vectors, such that closeness in embedding space indicates closeness in the semantics between the sentences. Bilingual data offers a useful signal for learning such embeddings: properties shared by both sentences in a translation pair are likely semantic, while divergent properties are likely stylistic or language-specific. We propose a deep latent variable model that attempts to perform source separation on parallel sentences, isolating what they have in common in a latent semantic vector, and explaining what is left over with language-specific latent vectors. Our proposed approach differs from past work on semantic sentence encoding in two ways. First, by using a variational probabilistic framework, we introduce priors that encourage source separation, and can use our model’s posterior to predict sentence embeddings for monolingual data at test time. Second, we use highcapacity transformers as both data generating distributions and inference networks – contrasting with most past work on sentence embeddings. In experiments, our approach substantially outperforms the state-of-the-art on a standard suite of unsupervised semantic similarity evaluations. Further, we demonstrate that our approach yields the largest gains on more difficult subsets of these evaluations where simple word overlap is not a good indicator of similarity. 1 INTRODUCTION Learning useful representations of language has been a source of recent success in natural language processing (NLP). Much work has been done on learning representations for words (Mikolov et al., 2013; Pennington et al., 2014) and sentences (Kiros et al., 2015; Conneau et al., 2017). More recently, deep neural architectures have been used to learn contextualized word embeddings (Peters et al., 2018; Devlin et al., 2018) which have enabled state-of-the-art results on many tasks. We focus on learning semantic sentence embeddings in this paper, which play an important role in many downstream applications. Since they do not require any labelled data for fine-tuning, sentence embeddings are useful for a variety of problems right out of the box. These include Semantic Textual Similarity (STS; Agirre et al. (2012)), mining bitext (Zweigenbaum et al., 2018), and paraphrase identification (Dolan et al., 2004). Semantic similarity measures also have downstream uses such as fine-tuning machine translation systems (Wieting et al., 2019a). There are three main ingredients when designing a sentence embedding model: the architecture, the training data, and the objective function. Many architectures including LSTMs (Hill et al., 2016; Conneau et al., 2017; Schwenk & Douze, 2017; Subramanian et al., 2018), Transformers (Cer et al., 2018; Reimers & Gurevych, 2019), and averaging models (Wieting et al., 2016a; Arora et al., 2017) have found success for learning sentence embeddings. The choice of training data and objective are intimately intertwined, and there are a wide variety of options including next-sentence prediction (Kiros et al., 2015), machine translation (Espana-Bonet et al., 2017; Schwenk & Douze, 2017; Schwenk, 2018; Artetxe & Schwenk, 2018), natural language inference (NLI) (Conneau et al., 2017), and multi-task objectives which include some of the previously mentioned objectives (Cer et al., 2018) as well as additional tasks like constituency parsing (Subramanian et al., 2018). 
Surprisingly, despite ample testing of more powerful architectures, the best performing models for many sentence embedding tasks related to semantic similarity often use simple architectures that are mostly agnostic to the interactions between words. For instance, some of the top performing techniques use word embedding averaging (Wieting et al., 2016a), character n-grams (Wieting et al., 2016b), and subword embedding averaging (Wieting et al., 2019b) to create representations. These simple approaches are competitive with much more complicated architectures on in-domain data and generalize well to unseen domains, but are fundamentally limited by their inability to capture word order. Training these approaches generally relies on discriminative objectives defined on paraphrase data (Ganitkevitch et al., 2013; Wieting & Gimpel, 2018) or bilingual data (Wieting et al., 2019b). The inclusion of latent variables in these models has also been explored (Chen et al., 2019). Intuitively, bilingual data in particular is promising because it potentially offers a useful signal for learning the underlying semantics of sentences. Within a translation pair, properties shared by both sentences are more likely semantic, while those that are divergent are more likely stylistic or language-specific. While previous work learning from bilingual data perhaps takes advantage of this fact implicitly, the focus of this paper is modelling this intuition explicitly, and to the best of our knowledge, this has not not been explored in prior work. Specifically, we propose a deep generative model that is encouraged to perform source separation on parallel sentences, isolating what they have in common in a latent semantic embedding and explaining what is left over with language-specific latent vectors. At test time, we use inference networks (Kingma & Welling, 2013) for approximating the model’s posterior on the semantic and source-separated latent variables to encode monolingual sentences. Finally, since our model and training objective are generative, our approach does not require knowledge of the distance metrics to be used during evaluation,1 and it has the additional property of being able to generate text. In experiments, we evaluate our probabilistic source-separation approach on a standard suite of STS evaluations. We demonstrate that the proposed approach is effective, most notably allowing the learning of high-capacity deep transformer architectures (Vaswani et al., 2017) while still generalizing to new domains, significantly outperforming a variety of state-of-the-art baselines . Further, we conduct a thorough analysis by identifying subsets of the STS evaluation where simple word overlap is not able to accurately assess semantic similarity. On these most difficult instances, we find that our approach yields the largest gains, indicating that our system is modeling interactions between words to good effect. We also find that our model better handles cross-lingual semantic similarity than multilingual translation baseline approaches, indicating that stripping away language-specific information allows for better comparisons between sentences from different languages. Finally, we analyze our model to uncover what information was captured by the source separation into the semantic and language-specific variables and the relationship between this encoded information and language distance to English. 
We find that the language-specific variables tend to explain more superficial or language-specific properties such as overall sentence length, amount and location of punctuation, and the gender of articles (if gender is present in the language), but semantic and syntactic information is more concentrated in the shared semantic variables, matching our intuition. Language distance has an effect as well, where languages that share common structures with English put more information into the semantic variables, while more distant languages put more information into the language-specific variables. Lastly, we show outputs generated from our model that exhibit its ability to do a type of style transfer. 2 MODEL Our proposed training objective leverages a generative model of parallel text in two languages (e.g. English (en) and French (fr)) that form a pair consisting of an English sentence xen and a French sentence xfr. Importantly, this generative process utilizes three underlying latent vectors: languagespecific variation variables (language variables) zfr and zen respectively for each side of the translation, as well as a shared semantic variation variable (semantic variable) zsem. In this section we will first describe the generative model for the text and latent variables. In the following section we will describe the inference procedure of zsem given an input sentence, which corresponds to our core task of obtaining sentence embeddings useful for downstream tasks such as semantic similarity. Further, by encouraging the model to perform this source separation, the learned semantic encoders will more crisply represent the underlying semantics, increasing performance on downstream semantic tasks. 1In other words, we don’t assume cosine similarity as a metric, though it does work well in our experiments. The generative process of our model, the Bilingual Generative Transformer (BGT), is depicted in Figure 1 and its computation graph is shown in Figure 2. First, we sample latent variables 〈zfr, zen, zsem〉, where zi ∈ Rk, from a multivariate Gaussian prior N(0, Ik). These variables are then fed into a decoder that samples sentences; xen is sampled conditioned on zsem and zen, while xfr is sampled conditioned on zsem and zfr. Because sentences in both languages will use zsem in generation, we expect that in a well- trained model this variable will encode semantic, syntactic, or stylistic information shared across both sentences, while zfr and zen will handle any language-specific peculiarities or specific stylistic decisions that are less central to the sentence meaning and thus do not translate across sentences. In the following section, we further discuss how this is explicitly encouraged by the learning process. Decoder Architecture. Many latent variable models for text use LSTMs (Hochreiter & Schmidhuber, 1997) as their decoders (Yang et al., 2017; Ziegler & Rush, 2019; Ma et al., 2019). However, state-of-the-art models in neural machine translation have seen increased performance and speed using deep Transformer architectures. We also found in our experiments (see Appendix C for details) that Transformers led to increased performance in our setting, so they are used in our main model. We use two decoders in our model, one for modelling p(xfr|zsem, zfr; θ) and one for modeling p(xen|zsem, zen; θ). These decoders are depicted on the right side of Figure 2. Each decoder takes in two latent variables, a language variable and a semantic variable. 
These variables are concatenated together prior to being used by the decoder for reconstruction. We explore four ways of using this latent vector: (1) Concatenate it to the word embeddings (Word) (2) Use it as the initial hidden state (Hidden, LSTM only) (3) Use it as you would the attention context vector in the traditional sequenceto-sequence framework (Attention) and (4) Concatenate it to the hidden state immediately prior to computing the logits (Logit). Unlike Attention, there is no additional feedforward layer in this setting. We experimented with these four approaches, as well as combinations thereof, and report this analysis in Appendix A. From these experiments, we see that the closer the sentence embedding is to the softmax, the better the performance on downstream tasks evaluating its semantic content. We hypothesise that this is due to better gradient propagation because the sentence embedding is now closer to the error signal. Since Attention and Logit performed best, we use these in our Transformer experiments. 3 LEARNING AND INFERENCE Our model is trained on a training set X of parallel text consisting of N examples, X = {〈x1en, x1fr〉, . . . , 〈xNen, xNfr〉}, and Z is our collection of latent variables Z = (〈z1en, z1fr, z1sem〉, . . . , 〈zNen, zNfr, zNsem〉). We wish to maximize the likelihood of the parameters of the two decoders θ with respect to the observed X , marginalizing over the latent variables Z. p(X; θ) = ∫ Z p(X,Z; θ)dZ Unfortunately, this integral is intractable due to the complex relationship between X and Z. However, related latent variable models like variational autoencoders (VAEs (Kingma & Welling, 2013)) learn by optimizing a variational lower bound on the log marginal likelihood. This surrogate objective is called the evidence lower bound (ELBO) and introduces a variational approximation, q to the true posterior of the model p. The q distribution is parameterized by a neural network with parameters φ. ELBO can be written for our model as follows: ELBO =Eq(Z|X;φ)[log p(X|Z; θ)]− KL(q(Z|X;φ)||p(Z; θ)) This lower bound on the marginal can be optimized by gradient ascent by using the reparameterization trick (Kingma & Welling, 2013). This trick allows for the expectation under q to be approximated through sampling in a way that preserves backpropagation. We make several independence assumptions for q(zsem, zen, zfr|xen, xfr;φ). Specifically, to match our goal of source separation, we factor q as q(zsem, zen, zfr|xen, xfr;φ) = q(zsem|xen, xfr;φ)q(zen|xen)q(zfr|xfr;φ), with φ being the parameters of the encoders that make up the inference networks, defined in the next paragraph. Lastly, we note that the KL term in our ELBO equation encourages explaining variation that is shared by translations with the shared semantic variable and explaining language-specific variation with the corresponding language-specific variables. Information shared by the two sentences will result in a lower KL loss if it is encoded in the shared variable, otherwise that information will be replicated and the overall cost of encoding will increase. Encoder Architecture. We use three inference networks as shown on the left side of Figure 2: an English inference network to produce the English language variable, a French inference network to produce the French language variable, and a semantic inference network to produce the semantic variable. Just as in the decoder architecture, we use a Transformer for the encoders. 
The semantic inference network is a bilingual encoder that encodes each language. For each translation pair, we alternate which of the two parallel sentences is fed into the semantic encoder within a batch. Since the semantic encoder is meant to capture language agnostic semantic information, its outputs for a translation pair should be similar regardless of the language of the input sentence. We note that other operations are possible for combining the views each parallel sentence offers. For instance, we could feed both sentences into the semantic encoder and pool their representations. However, in practice we find that alternating works well and leave further study of this to future work. 4 EXPERIMENTS 4.1 BASELINE MODELS We experiment with fourteen baseline models, covering both the most effective approaches for learning sentence embeddings from the literature and ablations of our own BGT model. These baselines can be split into three groups as detailed below. Models from the Literature (Trained on Different Data) We compare to well known sentence embedding models Infersent (Conneau et al., 2017), GenSen (Subramanian et al., 2018), the Universal Sentence Encoder (USE) (Cer et al., 2018), as well as BERT (Devlin et al., 2018).2 We used the pretrained BERT model in two ways to create a sentence embedding. The first way is to concatenate the hidden states for the CLS token in the last four layers. The second way is to concatenate the hidden states of all word tokens in the last four layers and mean pool these representations. Both methods result in a 4096 dimension embedding. Finally, we compare to the newly released model, Sentence-Bert (Reimers & Gurevych, 2019). This model is similar to Infersent (Conneau et al., 2017) in that it is trained on natural language inference data, SNLI (Bowman et al., 2015). However, instead of using pretrained word embeddings, they fine-tune BERT in a way to induce sentence embeddings.3 Models from the Literature (Trained on Our Data) These models are amenable to being trained in the exact same setting as our own models as they only require parallel text. These include the sentence piece averaging model, SP, from (Wieting et al., 2019b), which is among the best of the averaging models (i.e. compared to averaging only words or character n-grams) as well the LSTM model, BILSTM, from (Wieting & Gimpel, 2017). These models use a contrastive loss with a margin. Following their settings, we fix the margin to 0.4 and tune the number of batches to pool for selecting negative examples from {40, 60, 80, 100}. For both models, we set the dimension of the embeddings to 1024. For BILSTM, we train a single layer bidirectional LSTM with hidden states of 512 dimensions. To create the sentence embedding, the forward and backward hidden states are concatenated and mean-pooled. Following (Wieting & Gimpel, 2017), we shuffle the inputs with probability p, tuning p from {0.3, 0.5}. We also implicitly compare to previous machine translation approaches like (Espana-Bonet et al., 2017; Schwenk & Douze, 2017; Artetxe & Schwenk, 2018) in Appendix A where we explore different variations of training LSTM sequence-to-sequence models. We find that our translation baselines reported in the tables below (both LSTM and Transformer) outperform the architectures from these works due to using the Attention and Logit methods mentioned in Section 2 , demonstrating that our baselines represent, or even over-represent, the state-of-the-art for machine translation approaches. 
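For reference, here is a minimal sketch of a margin-based contrastive loss of the kind used to train the SP and BILSTM baselines; for simplicity it selects the hardest negative within a single batch rather than pooling several batches, so the exact formulation is an assumption rather than the authors' code.

import torch
import torch.nn.functional as F

def margin_contrastive_loss(src_emb, trg_emb, margin=0.4):
    # src_emb, trg_emb: (batch, dim) embeddings of aligned sentence pairs.
    # For each source sentence, the most similar non-aligned target in the
    # batch serves as the negative (a simplification of pooling batches).
    src = F.normalize(src_emb, dim=-1)
    trg = F.normalize(trg_emb, dim=-1)
    sims = src @ trg.t()                      # cosine similarity matrix
    pos = sims.diag()                         # similarities of aligned pairs
    mask = torch.eye(sims.size(0), dtype=torch.bool, device=sims.device)
    neg = sims.masked_fill(mask, float("-inf")).max(dim=1).values
    return F.relu(margin - pos + neg).mean()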
BGT Ablations Lastly, we compare to ablations of our model to better understand the benefits of parallel data, language-specific variables, the KL loss term, and how much we gain from the more conventional translation baselines. • ENGLISHAE: English autoencoder on the English side of our en-fr data. • ENGLISHVAE: English variational autoencoder on the English side of our en-fr data. • ENGLISHTRANS: Translation from en to fr. • BILINGUALTRANS: Translation from both en to fr and fr to enwhere the encoding parameters are shared but each language has its own decoder. • BGT W/O LANGVARS: A model similar to BILINGUALTRANS, but it includes a prior over the embedding space and therefore a KL loss term. This model differs from BGT since it does not have any language-specific variables. • BGT W/O PRIOR: Follows the same architecture as BGT, but without the priors and KL loss term. 2Note that in all experiments using BERT, including Sentence-BERT, the large, uncased version is used. 3Most work evaluating accuracy on STS tasks has averaged the Pearson’s r over each individual dataset for each year of the STS competition. However, Reimers & Gurevych (2019) computed Spearman’s ρ over concatenated datasets for each year of the STS competition. To be consistent with previous work, we re-ran their model and calculated results using the standard method, and thus our results are not the same as those reported Reimers & Gurevych (2019). 4.2 EXPERIMENTAL SETTINGS The training data for our models is a mixture of OpenSubtitles 20184 en-fr data and en-fr Gigaword5 data. To create our dataset, we combined the complete corpora of each dataset and then randomly selected 1,000,000 sentence pairs to be used for training with 10,000 used for validation. We use sentencepiece (Kudo & Richardson, 2018) with a vocabulary size of 20,000 to segment the sentences, and we chose sentence pairs whose sentences are between 5 and 100 tokens each. In designing the model architectures for the encoders and decoders, we experimented with Transformers and LSTMs. Due to better performance, we use a 5 layer Transformer for each of the encoders and a single layer decoder for each of the decoders. This design decision was empirically motivated as we found using a larger decoder was slower and worsened performance, but conversely, adding more encoder layers improved performance. More discussion of these trade-offs along with ablations and comparisons to LSTMs are included in Appendix C. For all of our models, we set the dimension of the embeddings and hidden states for the encoders and decoders to 1024. Since we experiment with two different architectures,6 we follow two different optimization strategies. For training models with Transformers, we use Adam (Kingma & Ba, 2014) with β1 = 0.9, β2 = 0.98, and = 10−8. We use the same learning rate schedule as (Vaswani et al., 2017), i.e., the learning rate increases linearly for 4,000 steps to 5× 10−4, after which it is decayed proportionally to the inverse square root of the number of steps. For training the LSTM models, we use Adam with a fixed learning rate of 0.001. We train our models for 20 epochs. For models incorporating a translation loss, we used label smoothed cross entropy (Szegedy et al., 2016; Pereyra et al., 2017) with = 0.1. For ENGLISHVAE, BGT and BILINGUALTRANS, we anneal the KL term so that it increased linearly for 216 updates, which robustly gave good results in preliminary experiments. 
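To make the optimization details above concrete, the following sketch shows the inverse-square-root learning-rate schedule and the linear KL annealing described in this section; the constants mirror the text, but the function names are illustrative assumptions.

def transformer_lr(step, warmup_steps=4000, peak_lr=5e-4):
    # Linear warmup to peak_lr over warmup_steps, then decay proportionally
    # to the inverse square root of the step count (Vaswani et al., 2017).
    step = max(step, 1)
    if step < warmup_steps:
        return peak_lr * step / warmup_steps
    return peak_lr * (warmup_steps / step) ** 0.5

def kl_weight(step, anneal_steps=2 ** 16):
    # Linear KL annealing: the weight on the KL term grows from 0 to 1 over
    # the first 2^16 updates and stays at 1 afterwards.
    return min(1.0, step / anneal_steps)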
We also found that in training BGT, combining its loss with the BILINGUALTRANS objective during training of both models increased performance, and so this loss was summed with the BGT loss in all of our experiments. We note that this doesn’t affect our claim of BGT being a generative model, as this loss is only used in a multi-task objective at training time, and we calculate the generation probabilities according to standard BGT at test time. Lastly, in Appendix B, we illustrate that it is crucial to train the Transformers with large batch sizes. Without this, the model can learn the goal task (such as translation) with reasonable accuracy, but the learned semantic embeddings are of poor quality until batch sizes approximately reach 25,000 tokens. Therefore, we use a maximum batch size of 50,000 tokens in our ENGLISHTRANS, BILINGUALTRANS, and BGT W/O PRIOR, experiments and 25,000 tokens in our BGT W/O LANGVARS and BGT experiments. 4.3 EVALUATION Our primary evaluation are the 2012-2016 SemEval Semantic Textual Similarity (STS) shared tasks (Agirre et al., 2012; 2013; 2014; 2015; 2016), where the goal is to accurately predict the degree to which two sentences have the same meaning as measured by human judges. The evaluation metric is Pearson’s r with the gold labels. Secondly, we evaluate on Hard STS, where we combine and filter the STS datasets in order to make a more difficult evaluation. We hypothesize that these datasets contain many examples where their gold scores are easy to predict by either having similar structure and word choice and a high score or dissimilar structure and word choice and a low score. Therefore, we split the data using symmetric word error rate (SWER),7 finding sentence pairs with low SWER and low gold scores as well as sentence pairs with high SWER and high gold scores. This results in two datasets, Hard+ which have SWERs in the bottom 20% of all STS pairs and whose gold label is between 0 and 1,8 and 4http://opus.nlpl.eu/OpenSubtitles.php 5https://www.statmt.org/wmt10/training-giga-fren.tar 6We use LSTMs in our ablations. 7We define symmetric word error rate for sentences s1 and s2 as 12WER(s1, s2) + 1 2 WER(s2, s2), since word error rate (WER) is an asymmetric measure. 8STS scores are between 0 and 5. Hard- where the SWERs are in the top 20% of the gold scores are between 4 and 5. We also evaluate on a split where negation was likely present in the example.9 Examples are shown in Table 1. Lastly, we evaluate on STS in es and ar as well as cross-lingual evaluations for en-es, en-ar, and en-tr. We use the datasets from SemEval 2017 (Cer et al., 2017). For this setting, we train BILINGUALTRANS and BGT on 1 million examples from en-es, en-ar, and en-tr OpenSubtitles 2018 data. 4.4 RESULTS The results on the STS and Hard STS are shown in Table 2.10 From the results, we see that BGT has the highest overall performance. It does especially well compared to prior work on the two Hard STS datasets. We show further difficult splits in Table 3, including a negation split, beyond those used in Hard STS and compare the top two performing models in the STS task from Table 2. We also show easier splits in the bottom of the table. From these results, we see that both positive examples that have little shared vocabulary and structure and negative examples with significant shared vocabulary and structure benefit significantly from using a deeper architecture. Similarly, examples where negation occurs also benefit from our deeper model. 
4.4 RESULTS The results on the STS and Hard STS are shown in Table 2.10 From the results, we see that BGT has the highest overall performance. It does especially well compared to prior work on the two Hard STS datasets. We show further difficult splits in Table 3, including a negation split, beyond those used in Hard STS, and compare the top two performing models in the STS task from Table 2. We also show easier splits in the bottom of the table. From these results, we see that both positive examples that have little shared vocabulary and structure and negative examples with significant shared vocabulary and structure benefit significantly from using a deeper architecture. Similarly, examples where negation occurs also benefit from our deeper model. These examples are difficult because more than just the identity of the words is needed to determine the relationship of the two sentences, and this is something that SP is not equipped for since it is unable to model word order. The bottom two rows show easier examples where positive examples have high overlap and low SWER and vice versa for negative examples. Both models perform similarly on this data, with the BGT model having a small edge consistent with the overall gap between these two models. Lastly, in Table 4, we show the results of STS evaluations in es and ar and cross-lingual evaluations for en-es, en-ar, and en-tr. From these results, we see that BGT has the best performance across all datasets, and its margin over the BILINGUALTRANS and BGT W/O PRIOR baselines is especially large in the cross-lingual setting. Since BGT W/O LANGVARS also has significantly better performance on these tasks, most of this gain seems to be due to the prior having a regularizing effect. However, BGT outperforms BGT W/O LANGVARS overall, and we hypothesize that the gap in performance between these two models is due to BGT being able to strip away the language-specific information in the representations with its language-specific variables, allowing for the semantics of the sentences to be more directly compared. 10We obtained values for STS 2012-2016 from prior works using SentEval (Conneau & Kiela, 2018). Note that we include all datasets for the 2013 competition, including SMT, which is not included in SentEval. 5 ANALYSIS We next analyze our BGT model by examining what elements of syntax and semantics the language and semantic variables capture relative both to each other and to the sentence embeddings from the BILINGUALTRANS models. We also analyze how the choice of language and its lexical and syntactic distance from English affects the semantic and syntactic information captured by the semantic and language-specific encoders. Finally, we also show that our model is capable of sentence generation in a type of style transfer, demonstrating its capabilities as a generative model. 5.1 STS We first show that the language variables are capturing little semantic information by evaluating the learned English language-specific variable from our BGT model on our suite of semantic tasks. The results in Table 5 show that these encoders perform closer to a random encoder than to the semantic encoder from BGT. This is consistent with what we would expect to see if they are capturing extraneous language-specific information.
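For reference, the STS evaluations used here and in Section 4 score each sentence pair and compare the system scores to the gold labels with Pearson's r. The sketch below is illustrative, assuming, as the paper suggests, that pairs are scored with the cosine similarity of precomputed embeddings; the function names are ours.

```python
# Minimal sketch of the STS scoring protocol: score each sentence pair by the cosine
# similarity of its embeddings and report Pearson's r against the gold labels.
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def sts_pearson(emb1, emb2, gold):
    """emb1, emb2: arrays of shape (n_pairs, dim); gold: array of shape (n_pairs,)."""
    sims = np.array([cosine(x, y) for x, y in zip(emb1, emb2)])
    return float(np.corrcoef(sims, np.asarray(gold, dtype=float))[0, 1])
```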
5.2 PROBING We probe our BGT semantic and language-specific encoders, along with our BILINGUALTRANS encoders as a baseline, to compare and contrast what aspects of syntax and semantics they are learning relative to each other across five languages with various degrees of similarity with English. All models are trained on the OpenSubtitles 2018 corpus. We use the datasets from Conneau et al. (2018) for semantic tasks like number of subjects and number of objects, and syntactic tasks like tree depth and top constituent. Additionally, we include predicting the word content and sentence length. We also add our own tasks to validate our intuitions about punctuation and language-specific information. In the first of these, punctuation number, we train a classifier to predict the number of punctuation marks11 in a sentence. To make the task more challenging, we limit each label to have at most 20,000 examples split among training, validation, and testing data.12 In the second task, punctuation first, we train a classifier to predict the identity of the first punctuation mark in the sentence. In our last task, gender, we detect examples where the gender of the articles in the sentence is incorrect in French or Spanish. To create an incorrect example, we switch articles from {le, la, un, une} for French and {el, la, los, las} for Spanish with their (indefinite or definite for French and singular or plural for Spanish) counterpart of the opposite gender. This dataset was balanced so that random chance gives 50% on the testing data. All tasks use 100,000 examples for training and 10,000 examples for validation and testing. The results of these experiments are shown in Table 6. These results show that the source separation is effective: stylistic and language-specific information like length, punctuation, and language-specific gender information are more concentrated in the language variables, while word content, semantic, and syntactic information are more concentrated in the semantic encoder. The choice of language is also seen to be influential on what these encoders are capturing. When the languages are closely related to English, like French and Spanish, the performance difference between the semantic and English language encoder is larger for word content, subject number, and object number than for more distantly related languages like Arabic and Turkish. In fact, word content performance is directly tied to how well the alphabets of the two languages overlap. This relationship matches our intuition, because lexical information will be cheaper to encode in the semantic variable when it is shared between the languages. Similarly, for the tasks of length, punctuation first, and punctuation number, the gap in performance between the two encoders also grows as the languages become more distant from English. Lastly, the gap on STS performance between the two encoders shrinks as the languages become more distant, which again is what we would expect, as the language-specific encoders are forced to capture more information. Japanese is an interesting case in these experiments, where the English language-specific encoder outperforms the semantic encoder on the semantic and syntactic probing tasks. Japanese is a very distant language to English both in its writing system and in its sentence structure (it is an SOV language, where English is an SVO language). However, despite these differences, the semantic encoder strongly outperforms the English language-specific encoder on the STS task, suggesting that the underlying meaning of the sentence is much better captured by the semantic encoder. 11Punctuation marks were taken from the set { ' ! " # $ % & \' ( ) ∗ + , − . / : ; < = > ? @ [ ] ˆ ` {— } '̃ . }. 12The labels are from 1 punctuation mark up to 10 marks, with an additional label consolidating 11 or more marks. 5.3 GENERATION AND STYLE TRANSFER In this section, we qualitatively demonstrate the ability of our model to generate sentences. We focus on a style-transfer task where we have original seed sentences from which we calculate our semantic vector zsem and language-specific vector zen. Specifically, we feed a Source sentence into the semantic encoder to obtain zsem, and another Style sentence into the English language-specific encoder to obtain zen. We then generate a new sentence using these two latent variables. This can be seen as a type of style transfer, where we expect the model to generate a sentence that has the semantics of the Source sentence and the style of the Style sentence.
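A minimal sketch of this generation procedure is given below. The encoder and decoder objects are hypothetical stand-ins for the trained BGT inference networks and English decoder; we assume the posterior means are used for the latent variables and that the decoder exposes a greedy generation routine. None of these interfaces are taken from a released implementation.

```python
# Illustrative sketch of BGT style transfer: semantics from the Source sentence,
# style from the Style sentence. All model objects below are hypothetical stand-ins.
import torch

@torch.no_grad()
def style_transfer(src_ids, style_ids, semantic_encoder, english_encoder,
                   english_decoder, max_len=100):
    # src_ids / style_ids: tensors of sentencepiece token ids, shape (1, seq_len);
    # tokenization is assumed to happen upstream.
    z_sem = semantic_encoder(src_ids)      # posterior mean of the semantic variable (assumed)
    z_en = english_encoder(style_ids)      # posterior mean of the English language variable (assumed)
    z = torch.cat([z_sem, z_en], dim=-1)   # the decoder conditions on both latents
    return english_decoder.generate(z, max_len=max_len)  # greedy decoding (assumed interface)
```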
We use our en-fr BGT model from Table 6 and show some examples in Table 7. All input sentences are from held-out en-fr OpenSubtitles data. From these examples, we see further evidence of the role of the semantic and language-specific encoders, where most of the semantics (e.g., topical words such as seen and tech in the Source sentence) are reflected in the output, but length and structure are more strongly influenced by the language-specific encoder. 6 CONCLUSION We propose Bilingual Generative Transformers, a model that uses parallel data to learn to perform source separation of common semantic information between two languages from language-specific information. We show that the model is able to accomplish this source separation through probing tasks and text generation in a style-transfer setting. We find that our model bests all baselines on semantic similarity tasks, with the largest gains coming from a new challenge we propose, Hard STS, designed to foil methods approximating semantic similarity as word overlap. We also find our model to be especially effective on cross-lingual semantic similarity, due to its stripping away of language-specific information, allowing for the underlying semantics to be more directly compared. In future work, we will explore generalizing this approach to the multilingual setting. A LOCATION OF SENTENCE EMBEDDING IN DECODER FOR LEARNING REPRESENTATIONS As mentioned in Section 2, we experimented with 4 ways to incorporate the sentence embedding into the decoder: Word, Hidden, Attention, and Logit. We also experimented with combinations of these 4 approaches. We evaluate these embeddings on the STS tasks and show the results, along with the time to train the models for 1 epoch, in Table 8. For these experiments, we train a single-layer bidirectional LSTM (BiLSTM) ENGLISHTRANS model with the embedding size set to 1024 and hidden states set to 512 dimensions (in order to be roughly equivalent to our Transformer models). To form the sentence embedding in this variant, we mean pool the hidden states for each time step. The cell states of the decoder are initialized to the zero vector. From this analysis, we see that the best performance is achieved with Logit, when the sentence embedding is placed just prior to the softmax. The performance is much better than Hidden or Hidden+Word used in prior work. For instance, recently Artetxe & Schwenk (2018) used the Hidden+Word strategy in learning multilingual sentence embeddings. A.1 VAE TRAINING We also found that incorporating the latent code of a VAE into the decoder using the Logit strategy increases the mutual information while having little effect on the log likelihood. We trained two LSTM VAE models following the settings and aggressive training strategy in He et al. (2019), where one LSTM model used the Hidden strategy and the other used the Hidden + Logit strategy. We trained the models on the en side of our en-fr data. We found that the mutual information increased from 0.89 to 2.46, while the approximate negative log likelihood, estimated by importance weighting, increased slightly from 53.3 to 54.0 when using Logit. B RELATIONSHIP BETWEEN BATCH SIZE AND PERFORMANCE FOR TRANSFORMER AND LSTM It has been observed previously that the performance of Transformer models is sensitive to batch size (Popel & Bojar, 2018).
We found this to be especially true when training sequence-to-sequence models to learn sentence embeddings. Figure 3 shows plots of the average 2012-2016 STS performance of the learned sentence embedding as batch size increases for both the BiLSTM and Transformer. Initially, at a batch size of 2,500 tokens, the learned sentence embeddings are worse than random, even though validation perplexity does decrease during this time. Performance rises as batch size increases up to around 100,000 tokens. In contrast, the BiLSTM is more robust to batch size, peaking much earlier around 25,000 tokens, and even degrading at higher batch sizes. C MODEL ABLATIONS In this section, we vary the number of layers in the encoder and decoder in BGT W/O PRIOR. We see that performance increases as the number of encoder layers increases, and also that a large decoder hurts performance, allowing us to save training time by using a single layer. These results can be compared to those in Table 9, showing that Transformers outperform BiLSTMs in these experiments. D CLASSIFICATION EXPERIMENTS To explore our embeddings in more detail, we evaluated them on the Quora Question Pairs dataset13 (QQP). This is a paraphrase classification task, which is also part of GLUE (Wang et al., 2018). Since the test set is private, we deviated slightly from the standard evaluation protocol and split the development set into two halves of 20,215 examples each: one half for model selection and the other for evaluation. We evaluated in two ways: cosine, where we score all pairs with cosine similarity and then find the threshold that gives the best accuracy, and logistic regression, where a logistic regression classifier makes the final prediction. It is worth noting that the pretrained baseline models on this task were directly trained to produce the feature set used by the downstream classifier, while our embeddings are trained without this supervision. They also tend to have larger dimensions, which also gives them an advantage, as discussed in more detail in Wieting & Kiela (2019). The results are shown in Table 10 and show that our BGT model outperforms the baseline models SP, ENGLISHTRANS, and BILINGUALTRANS for both evaluations, and compares favorably to the pretrained models when evaluated using cosine similarity scores. The only models which perform better are USE, which was trained on Quora data in an unsupervised way, and Sentence-BERT, which uses BERT. Our models are not as strong when using classification for final predictions. This indicates that the embeddings learned by our approach may be most useful when no downstream training is possible, though semi-supervised objectives that consider the downstream task might aid our approach, like the baselines, if downstream training is the goal. 13data.quora.com/First-Quora-Dataset-Release-Question-Pairs
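A minimal sketch of the cosine evaluation protocol for QQP described above follows, assuming precomputed embeddings for both halves of the development set; the helper names are ours, not the authors'.

```python
# Minimal sketch of the "cosine" QQP evaluation: pick the cosine threshold that
# maximizes accuracy on the model-selection half of the dev set, then report
# accuracy on the held-out evaluation half.
import numpy as np

def cosine_scores(emb1, emb2):
    emb1 = emb1 / np.linalg.norm(emb1, axis=1, keepdims=True)
    emb2 = emb2 / np.linalg.norm(emb2, axis=1, keepdims=True)
    return np.sum(emb1 * emb2, axis=1)

def best_threshold(scores, labels):
    """Search the observed scores for the threshold with maximum accuracy."""
    best_t, best_acc = 0.0, 0.0
    for t in np.unique(scores):
        acc = np.mean((scores >= t) == labels)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t, best_acc

def evaluate_qqp(sel_scores, sel_labels, eval_scores, eval_labels):
    t, _ = best_threshold(sel_scores, sel_labels)   # tuned on the selection half
    return np.mean((eval_scores >= t) == eval_labels)
```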
1. What is the main contribution of the paper, and how does it differ from previous models such as BERT? 2. What are the strengths of the proposed model, particularly in terms of its ability to capture semantic textual similarity? 3. What are the weaknesses of the paper, especially regarding the experiments and comparisons with other works? 4. How does the reviewer assess the clarity and reasonableness of the authors' approach, including their use of a variational probabilistic framework and separation of common and language-specific latent variables? 5. Are there any concerns about the fairness of the experiments, such as the choice of comparison models and training methods? 6. Are there any suggestions for additional analyses or experiments that could further support the authors' claims and improve the paper's contributions?
Review
Review This paper presents a bilingual generative model for sentence embedding based on a variational probabilistic framework. By separating a common latent variable from language-specific latent variables, the model is able to capture what's in common between parallel bilingual sentences and language-specific semantics. Experimental results show that the proposed model is able to produce sentence embeddings that reach higher correlation scores with human judgments on Semantic Textual Similarity tasks than previous models such as BERT. Strength: 1) the idea of separating common semantics and language-specific semantics in the latent space is pretty neat; 2) the writing is very clear and easy to follow; 3) the authors explore four approaches to use the latent vectors and four approaches to merge semantic vectors, which makes the final choices reasonable. Weakness: 1) Experiments: My major concern is the fairness of the experiments. The authors compare their model with many state-of-the-art models that could produce sentence embeddings. However, how they produce the sentence embeddings with existing models is not convincing. For example, why use the hidden states of the last four layers of BERT? Moreover, the proposed model is trained with parallel bilingual data, while the BERT model in comparison is monolingual. Also, the proposed deep variational model is close to an auto-encoder framework. You can also train a bilingual encoder-decoder transformer model (perhaps with pre-trained BERT parameters) with an auto-encoder objective using the same parallel data set. It seems to be a more comparable model to me. Although the proposed model is based on a variational framework, there's no comparison with previous neural variational models that learn encodings of texts as well, such as https://arxiv.org/abs/1511.06038. 2) Ablation study and analysis: I really like the idea of separating common semantic latent variables from language-specific latent variables. However, I expected to see more analysis or experimental results to show why it is better than a monolingual variational sentence embedding framework.
ICLR
Title A Bilingual Generative Transformer for Semantic Sentence Embedding Abstract Semantic sentence embedding models encode natural language sentences into vectors, such that closeness in embedding space indicates closeness in the semantics between the sentences. Bilingual data offers a useful signal for learning such embeddings: properties shared by both sentences in a translation pair are likely semantic, while divergent properties are likely stylistic or language-specific. We propose a deep latent variable model that attempts to perform source separation on parallel sentences, isolating what they have in common in a latent semantic vector, and explaining what is left over with language-specific latent vectors. Our proposed approach differs from past work on semantic sentence encoding in two ways. First, by using a variational probabilistic framework, we introduce priors that encourage source separation, and can use our model’s posterior to predict sentence embeddings for monolingual data at test time. Second, we use highcapacity transformers as both data generating distributions and inference networks – contrasting with most past work on sentence embeddings. In experiments, our approach substantially outperforms the state-of-the-art on a standard suite of unsupervised semantic similarity evaluations. Further, we demonstrate that our approach yields the largest gains on more difficult subsets of these evaluations where simple word overlap is not a good indicator of similarity. 1 INTRODUCTION Learning useful representations of language has been a source of recent success in natural language processing (NLP). Much work has been done on learning representations for words (Mikolov et al., 2013; Pennington et al., 2014) and sentences (Kiros et al., 2015; Conneau et al., 2017). More recently, deep neural architectures have been used to learn contextualized word embeddings (Peters et al., 2018; Devlin et al., 2018) which have enabled state-of-the-art results on many tasks. We focus on learning semantic sentence embeddings in this paper, which play an important role in many downstream applications. Since they do not require any labelled data for fine-tuning, sentence embeddings are useful for a variety of problems right out of the box. These include Semantic Textual Similarity (STS; Agirre et al. (2012)), mining bitext (Zweigenbaum et al., 2018), and paraphrase identification (Dolan et al., 2004). Semantic similarity measures also have downstream uses such as fine-tuning machine translation systems (Wieting et al., 2019a). There are three main ingredients when designing a sentence embedding model: the architecture, the training data, and the objective function. Many architectures including LSTMs (Hill et al., 2016; Conneau et al., 2017; Schwenk & Douze, 2017; Subramanian et al., 2018), Transformers (Cer et al., 2018; Reimers & Gurevych, 2019), and averaging models (Wieting et al., 2016a; Arora et al., 2017) have found success for learning sentence embeddings. The choice of training data and objective are intimately intertwined, and there are a wide variety of options including next-sentence prediction (Kiros et al., 2015), machine translation (Espana-Bonet et al., 2017; Schwenk & Douze, 2017; Schwenk, 2018; Artetxe & Schwenk, 2018), natural language inference (NLI) (Conneau et al., 2017), and multi-task objectives which include some of the previously mentioned objectives (Cer et al., 2018) as well as additional tasks like constituency parsing (Subramanian et al., 2018). 
Surprisingly, despite ample testing of more powerful architectures, the best performing models for many sentence embedding tasks related to semantic similarity often use simple architectures that are mostly agnostic to the interactions between words. For instance, some of the top performing techniques use word embedding averaging (Wieting et al., 2016a), character n-grams (Wieting et al., 2016b), and subword embedding averaging (Wieting et al., 2019b) to create representations. These simple approaches are competitive with much more complicated architectures on in-domain data and generalize well to unseen domains, but are fundamentally limited by their inability to capture word order. Training these approaches generally relies on discriminative objectives defined on paraphrase data (Ganitkevitch et al., 2013; Wieting & Gimpel, 2018) or bilingual data (Wieting et al., 2019b). The inclusion of latent variables in these models has also been explored (Chen et al., 2019). Intuitively, bilingual data in particular is promising because it potentially offers a useful signal for learning the underlying semantics of sentences. Within a translation pair, properties shared by both sentences are more likely semantic, while those that are divergent are more likely stylistic or language-specific. While previous work learning from bilingual data perhaps takes advantage of this fact implicitly, the focus of this paper is modelling this intuition explicitly, and to the best of our knowledge, this has not not been explored in prior work. Specifically, we propose a deep generative model that is encouraged to perform source separation on parallel sentences, isolating what they have in common in a latent semantic embedding and explaining what is left over with language-specific latent vectors. At test time, we use inference networks (Kingma & Welling, 2013) for approximating the model’s posterior on the semantic and source-separated latent variables to encode monolingual sentences. Finally, since our model and training objective are generative, our approach does not require knowledge of the distance metrics to be used during evaluation,1 and it has the additional property of being able to generate text. In experiments, we evaluate our probabilistic source-separation approach on a standard suite of STS evaluations. We demonstrate that the proposed approach is effective, most notably allowing the learning of high-capacity deep transformer architectures (Vaswani et al., 2017) while still generalizing to new domains, significantly outperforming a variety of state-of-the-art baselines . Further, we conduct a thorough analysis by identifying subsets of the STS evaluation where simple word overlap is not able to accurately assess semantic similarity. On these most difficult instances, we find that our approach yields the largest gains, indicating that our system is modeling interactions between words to good effect. We also find that our model better handles cross-lingual semantic similarity than multilingual translation baseline approaches, indicating that stripping away language-specific information allows for better comparisons between sentences from different languages. Finally, we analyze our model to uncover what information was captured by the source separation into the semantic and language-specific variables and the relationship between this encoded information and language distance to English. 
We find that the language-specific variables tend to explain more superficial or language-specific properties such as overall sentence length, amount and location of punctuation, and the gender of articles (if gender is present in the language), but semantic and syntactic information is more concentrated in the shared semantic variables, matching our intuition. Language distance has an effect as well, where languages that share common structures with English put more information into the semantic variables, while more distant languages put more information into the language-specific variables. Lastly, we show outputs generated from our model that exhibit its ability to do a type of style transfer. 2 MODEL Our proposed training objective leverages a generative model of parallel text in two languages (e.g. English (en) and French (fr)) that form a pair consisting of an English sentence xen and a French sentence xfr. Importantly, this generative process utilizes three underlying latent vectors: languagespecific variation variables (language variables) zfr and zen respectively for each side of the translation, as well as a shared semantic variation variable (semantic variable) zsem. In this section we will first describe the generative model for the text and latent variables. In the following section we will describe the inference procedure of zsem given an input sentence, which corresponds to our core task of obtaining sentence embeddings useful for downstream tasks such as semantic similarity. Further, by encouraging the model to perform this source separation, the learned semantic encoders will more crisply represent the underlying semantics, increasing performance on downstream semantic tasks. 1In other words, we don’t assume cosine similarity as a metric, though it does work well in our experiments. The generative process of our model, the Bilingual Generative Transformer (BGT), is depicted in Figure 1 and its computation graph is shown in Figure 2. First, we sample latent variables 〈zfr, zen, zsem〉, where zi ∈ Rk, from a multivariate Gaussian prior N(0, Ik). These variables are then fed into a decoder that samples sentences; xen is sampled conditioned on zsem and zen, while xfr is sampled conditioned on zsem and zfr. Because sentences in both languages will use zsem in generation, we expect that in a well- trained model this variable will encode semantic, syntactic, or stylistic information shared across both sentences, while zfr and zen will handle any language-specific peculiarities or specific stylistic decisions that are less central to the sentence meaning and thus do not translate across sentences. In the following section, we further discuss how this is explicitly encouraged by the learning process. Decoder Architecture. Many latent variable models for text use LSTMs (Hochreiter & Schmidhuber, 1997) as their decoders (Yang et al., 2017; Ziegler & Rush, 2019; Ma et al., 2019). However, state-of-the-art models in neural machine translation have seen increased performance and speed using deep Transformer architectures. We also found in our experiments (see Appendix C for details) that Transformers led to increased performance in our setting, so they are used in our main model. We use two decoders in our model, one for modelling p(xfr|zsem, zfr; θ) and one for modeling p(xen|zsem, zen; θ). These decoders are depicted on the right side of Figure 2. Each decoder takes in two latent variables, a language variable and a semantic variable. 
These variables are concatenated together prior to being used by the decoder for reconstruction. We explore four ways of using this latent vector: (1) Concatenate it to the word embeddings (Word) (2) Use it as the initial hidden state (Hidden, LSTM only) (3) Use it as you would the attention context vector in the traditional sequenceto-sequence framework (Attention) and (4) Concatenate it to the hidden state immediately prior to computing the logits (Logit). Unlike Attention, there is no additional feedforward layer in this setting. We experimented with these four approaches, as well as combinations thereof, and report this analysis in Appendix A. From these experiments, we see that the closer the sentence embedding is to the softmax, the better the performance on downstream tasks evaluating its semantic content. We hypothesise that this is due to better gradient propagation because the sentence embedding is now closer to the error signal. Since Attention and Logit performed best, we use these in our Transformer experiments. 3 LEARNING AND INFERENCE Our model is trained on a training set X of parallel text consisting of N examples, X = {〈x1en, x1fr〉, . . . , 〈xNen, xNfr〉}, and Z is our collection of latent variables Z = (〈z1en, z1fr, z1sem〉, . . . , 〈zNen, zNfr, zNsem〉). We wish to maximize the likelihood of the parameters of the two decoders θ with respect to the observed X , marginalizing over the latent variables Z. p(X; θ) = ∫ Z p(X,Z; θ)dZ Unfortunately, this integral is intractable due to the complex relationship between X and Z. However, related latent variable models like variational autoencoders (VAEs (Kingma & Welling, 2013)) learn by optimizing a variational lower bound on the log marginal likelihood. This surrogate objective is called the evidence lower bound (ELBO) and introduces a variational approximation, q to the true posterior of the model p. The q distribution is parameterized by a neural network with parameters φ. ELBO can be written for our model as follows: ELBO =Eq(Z|X;φ)[log p(X|Z; θ)]− KL(q(Z|X;φ)||p(Z; θ)) This lower bound on the marginal can be optimized by gradient ascent by using the reparameterization trick (Kingma & Welling, 2013). This trick allows for the expectation under q to be approximated through sampling in a way that preserves backpropagation. We make several independence assumptions for q(zsem, zen, zfr|xen, xfr;φ). Specifically, to match our goal of source separation, we factor q as q(zsem, zen, zfr|xen, xfr;φ) = q(zsem|xen, xfr;φ)q(zen|xen)q(zfr|xfr;φ), with φ being the parameters of the encoders that make up the inference networks, defined in the next paragraph. Lastly, we note that the KL term in our ELBO equation encourages explaining variation that is shared by translations with the shared semantic variable and explaining language-specific variation with the corresponding language-specific variables. Information shared by the two sentences will result in a lower KL loss if it is encoded in the shared variable, otherwise that information will be replicated and the overall cost of encoding will increase. Encoder Architecture. We use three inference networks as shown on the left side of Figure 2: an English inference network to produce the English language variable, a French inference network to produce the French language variable, and a semantic inference network to produce the semantic variable. Just as in the decoder architecture, we use a Transformer for the encoders. 
The semantic inference network is a bilingual encoder that encodes each language. For each translation pair, we alternate which of the two parallel sentences is fed into the semantic encoder within a batch. Since the semantic encoder is meant to capture language agnostic semantic information, its outputs for a translation pair should be similar regardless of the language of the input sentence. We note that other operations are possible for combining the views each parallel sentence offers. For instance, we could feed both sentences into the semantic encoder and pool their representations. However, in practice we find that alternating works well and leave further study of this to future work. 4 EXPERIMENTS 4.1 BASELINE MODELS We experiment with fourteen baseline models, covering both the most effective approaches for learning sentence embeddings from the literature and ablations of our own BGT model. These baselines can be split into three groups as detailed below. Models from the Literature (Trained on Different Data) We compare to well known sentence embedding models Infersent (Conneau et al., 2017), GenSen (Subramanian et al., 2018), the Universal Sentence Encoder (USE) (Cer et al., 2018), as well as BERT (Devlin et al., 2018).2 We used the pretrained BERT model in two ways to create a sentence embedding. The first way is to concatenate the hidden states for the CLS token in the last four layers. The second way is to concatenate the hidden states of all word tokens in the last four layers and mean pool these representations. Both methods result in a 4096 dimension embedding. Finally, we compare to the newly released model, Sentence-Bert (Reimers & Gurevych, 2019). This model is similar to Infersent (Conneau et al., 2017) in that it is trained on natural language inference data, SNLI (Bowman et al., 2015). However, instead of using pretrained word embeddings, they fine-tune BERT in a way to induce sentence embeddings.3 Models from the Literature (Trained on Our Data) These models are amenable to being trained in the exact same setting as our own models as they only require parallel text. These include the sentence piece averaging model, SP, from (Wieting et al., 2019b), which is among the best of the averaging models (i.e. compared to averaging only words or character n-grams) as well the LSTM model, BILSTM, from (Wieting & Gimpel, 2017). These models use a contrastive loss with a margin. Following their settings, we fix the margin to 0.4 and tune the number of batches to pool for selecting negative examples from {40, 60, 80, 100}. For both models, we set the dimension of the embeddings to 1024. For BILSTM, we train a single layer bidirectional LSTM with hidden states of 512 dimensions. To create the sentence embedding, the forward and backward hidden states are concatenated and mean-pooled. Following (Wieting & Gimpel, 2017), we shuffle the inputs with probability p, tuning p from {0.3, 0.5}. We also implicitly compare to previous machine translation approaches like (Espana-Bonet et al., 2017; Schwenk & Douze, 2017; Artetxe & Schwenk, 2018) in Appendix A where we explore different variations of training LSTM sequence-to-sequence models. We find that our translation baselines reported in the tables below (both LSTM and Transformer) outperform the architectures from these works due to using the Attention and Logit methods mentioned in Section 2 , demonstrating that our baselines represent, or even over-represent, the state-of-the-art for machine translation approaches. 
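As a concrete complement to the objective in Section 3, the sketch below shows the negative ELBO for one batch under diagonal Gaussian posteriors and a standard normal prior, with reparameterized sampling and a weight on the KL term for annealing. The inference-network outputs and decoder log-probabilities are placeholders; this is an illustration, not the authors' code.

```python
# Minimal sketch of the BGT training objective (negative ELBO) for one batch, assuming
# diagonal Gaussian posteriors q(z|x) = N(mu, diag(exp(logvar))) and an N(0, I) prior.
import torch

def gaussian_kl(mu, logvar):
    """KL( N(mu, diag(exp(logvar))) || N(0, I) ), summed over the latent dimension."""
    return 0.5 * torch.sum(torch.exp(logvar) + mu ** 2 - 1.0 - logvar, dim=-1)

def reparameterize(mu, logvar):
    """Draw z = mu + sigma * eps with eps ~ N(0, I), keeping gradients w.r.t. mu, logvar."""
    return mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)

def neg_elbo(posteriors, decoders, x_en, x_fr, kl_weight=1.0):
    """posteriors: dict mapping 'sem', 'en', 'fr' to (mu, logvar) from the inference nets;
    decoders: dict mapping 'en', 'fr' to callables returning log p(x | z_sem, z_lang)."""
    z = {name: reparameterize(mu, lv) for name, (mu, lv) in posteriors.items()}
    rec = decoders["en"](x_en, z["sem"], z["en"]) + decoders["fr"](x_fr, z["sem"], z["fr"])
    kl = sum(gaussian_kl(mu, lv) for mu, lv in posteriors.values())
    return torch.mean(-rec + kl_weight * kl)   # kl_weight implements the linear KL annealing
```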
1. What is the main contribution of the paper regarding sentence embedding using transformer models? 2. What are the strengths and weaknesses of the proposed method compared to other works in the field? 3. How does the reviewer assess the experimental results and evaluation metrics used in the paper? 4. Are there any concerns or suggestions regarding the limitation of the method, particularly in representing semantic aspects? 5. Is there any question regarding the implementation or comparison with other related works?
Review
Review This paper addresses the problem of constructing a sentence embedding using a generative transformer model which encodes semantic aspects and language-specific aspects separately. They use transformers to encode and decode sentence embeddings, and the objective reconstructs the input with latent variables (language variables for each language and a semantic variable). These latent variables are sampled from a multivariate Gaussian prior, and the learning uses the evidence lower bound (ELBO) for variational approximation of the joint distribution of latent variables and input. The method is evaluated on two tasks: a sentence similarity task and machine translation evaluation metric tasks. Both tasks evaluate how similar two sequences are, and the metric is the correlation with scores from human judges. The model shows promising results on the first task, but weaker results on the second task, especially when compared against pretty naively built sentence embeddings from the BERT model. I'm not an expert in the sentence embedding literature, so it is a bit tricky to evaluate, but the baselines seem strong, as do the experimental results on the semantic textual similarity task. In terms of evaluation, I appreciated how they defined a harder subset of the evaluation dataset and showed larger improvements on those portions of the dataset. The paper also includes analysis on what is captured by their language-specific latent vector and semantic latent vector. While I'm not totally convinced by this distinction between language-specific characteristics and the semantics of the sentence, it makes it easier to understand what's going on in the model. One of my questions is: why not test this method on more popular benchmarks such as MNLI or other classification tasks? MNLI evaluates how each sentence pair relates to one another, and thus would be a good benchmark for sentence embeddings as well. Having to encode all the information about a sentence into a single vector will make these sentence embedding models weaker than other models which can do cross-sentence attention, etc., but I think that's the genuine limitation of sentence embedding research and has to be clarified as such. I recommend discussing and clarifying these points. I'm a bit unclear how these sentence embeddings are translated into a score that decides the degree to which sentences have the same meaning. Is it just the cosine similarity of two sentence embedding vectors? While the purpose of these references is to generate sentences instead of building a sentence embedding, the method is related and a comparison and discussion would be worthwhile. Generating Sentences from a Continuous Space Samuel R. Bowman, Luke Vilnis, Oriol Vinyals, Andrew M. Dai, Rafal Jozefowicz, Samy Bengio https://arxiv.org/abs/1511.06349 Toward Controlled Generation of Text Zhiting Hu, Zichao Yang, Xiaodan Liang, Ruslan Salakhutdinov, Eric P. Xing https://arxiv.org/abs/1703.00955 Comments & Questions: - Methods using a large amount of unsupervised monolingual data show very strong performance in a panoply of NLP tasks these days. If I understand correctly, this model is constrained by the amount of bitext; some analysis on this would be interesting. - Figure 1 mentions "Section 3, 4" but I don't think they are correct references? - The BERT baseline seemed not to allow fine-tuning of the LM parameters. I think this makes the baseline significantly weaker? - It seems odd that only the English semantic encoder is used for the downstream application. - Does Table 3 cover all the data?
What proportion of the data is covered by each row?
- Given the similarity of English and French, I'm not sure how "language-specific" such latent vectors really are. It would be a much more interesting analysis if it studied more distant language pairs.
ICLR
Title A Bilingual Generative Transformer for Semantic Sentence Embedding Abstract Semantic sentence embedding models encode natural language sentences into vectors, such that closeness in embedding space indicates closeness in the semantics between the sentences. Bilingual data offers a useful signal for learning such embeddings: properties shared by both sentences in a translation pair are likely semantic, while divergent properties are likely stylistic or language-specific. We propose a deep latent variable model that attempts to perform source separation on parallel sentences, isolating what they have in common in a latent semantic vector, and explaining what is left over with language-specific latent vectors. Our proposed approach differs from past work on semantic sentence encoding in two ways. First, by using a variational probabilistic framework, we introduce priors that encourage source separation, and can use our model’s posterior to predict sentence embeddings for monolingual data at test time. Second, we use highcapacity transformers as both data generating distributions and inference networks – contrasting with most past work on sentence embeddings. In experiments, our approach substantially outperforms the state-of-the-art on a standard suite of unsupervised semantic similarity evaluations. Further, we demonstrate that our approach yields the largest gains on more difficult subsets of these evaluations where simple word overlap is not a good indicator of similarity. 1 INTRODUCTION Learning useful representations of language has been a source of recent success in natural language processing (NLP). Much work has been done on learning representations for words (Mikolov et al., 2013; Pennington et al., 2014) and sentences (Kiros et al., 2015; Conneau et al., 2017). More recently, deep neural architectures have been used to learn contextualized word embeddings (Peters et al., 2018; Devlin et al., 2018) which have enabled state-of-the-art results on many tasks. We focus on learning semantic sentence embeddings in this paper, which play an important role in many downstream applications. Since they do not require any labelled data for fine-tuning, sentence embeddings are useful for a variety of problems right out of the box. These include Semantic Textual Similarity (STS; Agirre et al. (2012)), mining bitext (Zweigenbaum et al., 2018), and paraphrase identification (Dolan et al., 2004). Semantic similarity measures also have downstream uses such as fine-tuning machine translation systems (Wieting et al., 2019a). There are three main ingredients when designing a sentence embedding model: the architecture, the training data, and the objective function. Many architectures including LSTMs (Hill et al., 2016; Conneau et al., 2017; Schwenk & Douze, 2017; Subramanian et al., 2018), Transformers (Cer et al., 2018; Reimers & Gurevych, 2019), and averaging models (Wieting et al., 2016a; Arora et al., 2017) have found success for learning sentence embeddings. The choice of training data and objective are intimately intertwined, and there are a wide variety of options including next-sentence prediction (Kiros et al., 2015), machine translation (Espana-Bonet et al., 2017; Schwenk & Douze, 2017; Schwenk, 2018; Artetxe & Schwenk, 2018), natural language inference (NLI) (Conneau et al., 2017), and multi-task objectives which include some of the previously mentioned objectives (Cer et al., 2018) as well as additional tasks like constituency parsing (Subramanian et al., 2018). 
Surprisingly, despite ample testing of more powerful architectures, the best performing models for many sentence embedding tasks related to semantic similarity often use simple architectures that are mostly agnostic to the interactions between words. For instance, some of the top performing techniques use word embedding averaging (Wieting et al., 2016a), character n-grams (Wieting et al., 2016b), and subword embedding averaging (Wieting et al., 2019b) to create representations. These simple approaches are competitive with much more complicated architectures on in-domain data and generalize well to unseen domains, but are fundamentally limited by their inability to capture word order. Training these approaches generally relies on discriminative objectives defined on paraphrase data (Ganitkevitch et al., 2013; Wieting & Gimpel, 2018) or bilingual data (Wieting et al., 2019b). The inclusion of latent variables in these models has also been explored (Chen et al., 2019). Intuitively, bilingual data in particular is promising because it potentially offers a useful signal for learning the underlying semantics of sentences. Within a translation pair, properties shared by both sentences are more likely semantic, while those that are divergent are more likely stylistic or language-specific. While previous work learning from bilingual data perhaps takes advantage of this fact implicitly, the focus of this paper is modelling this intuition explicitly, and to the best of our knowledge, this has not not been explored in prior work. Specifically, we propose a deep generative model that is encouraged to perform source separation on parallel sentences, isolating what they have in common in a latent semantic embedding and explaining what is left over with language-specific latent vectors. At test time, we use inference networks (Kingma & Welling, 2013) for approximating the model’s posterior on the semantic and source-separated latent variables to encode monolingual sentences. Finally, since our model and training objective are generative, our approach does not require knowledge of the distance metrics to be used during evaluation,1 and it has the additional property of being able to generate text. In experiments, we evaluate our probabilistic source-separation approach on a standard suite of STS evaluations. We demonstrate that the proposed approach is effective, most notably allowing the learning of high-capacity deep transformer architectures (Vaswani et al., 2017) while still generalizing to new domains, significantly outperforming a variety of state-of-the-art baselines . Further, we conduct a thorough analysis by identifying subsets of the STS evaluation where simple word overlap is not able to accurately assess semantic similarity. On these most difficult instances, we find that our approach yields the largest gains, indicating that our system is modeling interactions between words to good effect. We also find that our model better handles cross-lingual semantic similarity than multilingual translation baseline approaches, indicating that stripping away language-specific information allows for better comparisons between sentences from different languages. Finally, we analyze our model to uncover what information was captured by the source separation into the semantic and language-specific variables and the relationship between this encoded information and language distance to English. 
We find that the language-specific variables tend to explain more superficial or language-specific properties such as overall sentence length, amount and location of punctuation, and the gender of articles (if gender is present in the language), but semantic and syntactic information is more concentrated in the shared semantic variables, matching our intuition. Language distance has an effect as well, where languages that share common structures with English put more information into the semantic variables, while more distant languages put more information into the language-specific variables. Lastly, we show outputs generated from our model that exhibit its ability to do a type of style transfer. 2 MODEL Our proposed training objective leverages a generative model of parallel text in two languages (e.g. English (en) and French (fr)) that form a pair consisting of an English sentence xen and a French sentence xfr. Importantly, this generative process utilizes three underlying latent vectors: languagespecific variation variables (language variables) zfr and zen respectively for each side of the translation, as well as a shared semantic variation variable (semantic variable) zsem. In this section we will first describe the generative model for the text and latent variables. In the following section we will describe the inference procedure of zsem given an input sentence, which corresponds to our core task of obtaining sentence embeddings useful for downstream tasks such as semantic similarity. Further, by encouraging the model to perform this source separation, the learned semantic encoders will more crisply represent the underlying semantics, increasing performance on downstream semantic tasks. 1In other words, we don’t assume cosine similarity as a metric, though it does work well in our experiments. The generative process of our model, the Bilingual Generative Transformer (BGT), is depicted in Figure 1 and its computation graph is shown in Figure 2. First, we sample latent variables 〈zfr, zen, zsem〉, where zi ∈ Rk, from a multivariate Gaussian prior N(0, Ik). These variables are then fed into a decoder that samples sentences; xen is sampled conditioned on zsem and zen, while xfr is sampled conditioned on zsem and zfr. Because sentences in both languages will use zsem in generation, we expect that in a well- trained model this variable will encode semantic, syntactic, or stylistic information shared across both sentences, while zfr and zen will handle any language-specific peculiarities or specific stylistic decisions that are less central to the sentence meaning and thus do not translate across sentences. In the following section, we further discuss how this is explicitly encouraged by the learning process. Decoder Architecture. Many latent variable models for text use LSTMs (Hochreiter & Schmidhuber, 1997) as their decoders (Yang et al., 2017; Ziegler & Rush, 2019; Ma et al., 2019). However, state-of-the-art models in neural machine translation have seen increased performance and speed using deep Transformer architectures. We also found in our experiments (see Appendix C for details) that Transformers led to increased performance in our setting, so they are used in our main model. We use two decoders in our model, one for modelling p(xfr|zsem, zfr; θ) and one for modeling p(xen|zsem, zen; θ). These decoders are depicted on the right side of Figure 2. Each decoder takes in two latent variables, a language variable and a semantic variable. 
These variables are concatenated together prior to being used by the decoder for reconstruction. We explore four ways of using this latent vector: (1) Concatenate it to the word embeddings (Word) (2) Use it as the initial hidden state (Hidden, LSTM only) (3) Use it as you would the attention context vector in the traditional sequenceto-sequence framework (Attention) and (4) Concatenate it to the hidden state immediately prior to computing the logits (Logit). Unlike Attention, there is no additional feedforward layer in this setting. We experimented with these four approaches, as well as combinations thereof, and report this analysis in Appendix A. From these experiments, we see that the closer the sentence embedding is to the softmax, the better the performance on downstream tasks evaluating its semantic content. We hypothesise that this is due to better gradient propagation because the sentence embedding is now closer to the error signal. Since Attention and Logit performed best, we use these in our Transformer experiments. 3 LEARNING AND INFERENCE Our model is trained on a training set X of parallel text consisting of N examples, X = {〈x1en, x1fr〉, . . . , 〈xNen, xNfr〉}, and Z is our collection of latent variables Z = (〈z1en, z1fr, z1sem〉, . . . , 〈zNen, zNfr, zNsem〉). We wish to maximize the likelihood of the parameters of the two decoders θ with respect to the observed X , marginalizing over the latent variables Z. p(X; θ) = ∫ Z p(X,Z; θ)dZ Unfortunately, this integral is intractable due to the complex relationship between X and Z. However, related latent variable models like variational autoencoders (VAEs (Kingma & Welling, 2013)) learn by optimizing a variational lower bound on the log marginal likelihood. This surrogate objective is called the evidence lower bound (ELBO) and introduces a variational approximation, q to the true posterior of the model p. The q distribution is parameterized by a neural network with parameters φ. ELBO can be written for our model as follows: ELBO =Eq(Z|X;φ)[log p(X|Z; θ)]− KL(q(Z|X;φ)||p(Z; θ)) This lower bound on the marginal can be optimized by gradient ascent by using the reparameterization trick (Kingma & Welling, 2013). This trick allows for the expectation under q to be approximated through sampling in a way that preserves backpropagation. We make several independence assumptions for q(zsem, zen, zfr|xen, xfr;φ). Specifically, to match our goal of source separation, we factor q as q(zsem, zen, zfr|xen, xfr;φ) = q(zsem|xen, xfr;φ)q(zen|xen)q(zfr|xfr;φ), with φ being the parameters of the encoders that make up the inference networks, defined in the next paragraph. Lastly, we note that the KL term in our ELBO equation encourages explaining variation that is shared by translations with the shared semantic variable and explaining language-specific variation with the corresponding language-specific variables. Information shared by the two sentences will result in a lower KL loss if it is encoded in the shared variable, otherwise that information will be replicated and the overall cost of encoding will increase. Encoder Architecture. We use three inference networks as shown on the left side of Figure 2: an English inference network to produce the English language variable, a French inference network to produce the French language variable, and a semantic inference network to produce the semantic variable. Just as in the decoder architecture, we use a Transformer for the encoders. 
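To make the training objective concrete, the following is a minimal PyTorch-style sketch of the single-example ELBO under the factorized posterior above. It assumes diagonal Gaussian posteriors and hypothetical encoder and decoder callables (sem_enc, en_enc, fr_enc, en_dec, fr_dec); it illustrates the objective rather than reproducing the authors' implementation.

```python
import torch

def gaussian_kl(mu, logvar):
    # KL( N(mu, diag(exp(logvar))) || N(0, I) ), summed over the latent dimension.
    return 0.5 * torch.sum(torch.exp(logvar) + mu ** 2 - 1.0 - logvar, dim=-1)

def reparameterize(mu, logvar):
    # Reparameterization trick: z = mu + sigma * eps keeps sampling differentiable.
    return mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)

def bgt_elbo(sem_enc, en_enc, fr_enc, en_dec, fr_dec, x_en, x_fr):
    # Factorized posterior: q(z_sem | x_en, x_fr) * q(z_en | x_en) * q(z_fr | x_fr).
    mu_sem, lv_sem = sem_enc(x_en, x_fr)
    mu_en, lv_en = en_enc(x_en)
    mu_fr, lv_fr = fr_enc(x_fr)

    z_sem = reparameterize(mu_sem, lv_sem)
    z_en = reparameterize(mu_en, lv_en)
    z_fr = reparameterize(mu_fr, lv_fr)

    # Each decoder reconstructs its language from the concatenated semantic and language variables.
    log_p_en = en_dec(x_en, torch.cat([z_sem, z_en], dim=-1))  # log p(x_en | z_sem, z_en)
    log_p_fr = fr_dec(x_fr, torch.cat([z_sem, z_fr], dim=-1))  # log p(x_fr | z_sem, z_fr)

    kl = gaussian_kl(mu_sem, lv_sem) + gaussian_kl(mu_en, lv_en) + gaussian_kl(mu_fr, lv_fr)
    return log_p_en + log_p_fr - kl  # maximize this quantity (or minimize its negation)
```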
The semantic inference network is a bilingual encoder that encodes each language. For each translation pair, we alternate which of the two parallel sentences is fed into the semantic encoder within a batch. Since the semantic encoder is meant to capture language agnostic semantic information, its outputs for a translation pair should be similar regardless of the language of the input sentence. We note that other operations are possible for combining the views each parallel sentence offers. For instance, we could feed both sentences into the semantic encoder and pool their representations. However, in practice we find that alternating works well and leave further study of this to future work. 4 EXPERIMENTS 4.1 BASELINE MODELS We experiment with fourteen baseline models, covering both the most effective approaches for learning sentence embeddings from the literature and ablations of our own BGT model. These baselines can be split into three groups as detailed below. Models from the Literature (Trained on Different Data) We compare to well known sentence embedding models Infersent (Conneau et al., 2017), GenSen (Subramanian et al., 2018), the Universal Sentence Encoder (USE) (Cer et al., 2018), as well as BERT (Devlin et al., 2018).2 We used the pretrained BERT model in two ways to create a sentence embedding. The first way is to concatenate the hidden states for the CLS token in the last four layers. The second way is to concatenate the hidden states of all word tokens in the last four layers and mean pool these representations. Both methods result in a 4096 dimension embedding. Finally, we compare to the newly released model, Sentence-Bert (Reimers & Gurevych, 2019). This model is similar to Infersent (Conneau et al., 2017) in that it is trained on natural language inference data, SNLI (Bowman et al., 2015). However, instead of using pretrained word embeddings, they fine-tune BERT in a way to induce sentence embeddings.3 Models from the Literature (Trained on Our Data) These models are amenable to being trained in the exact same setting as our own models as they only require parallel text. These include the sentence piece averaging model, SP, from (Wieting et al., 2019b), which is among the best of the averaging models (i.e. compared to averaging only words or character n-grams) as well the LSTM model, BILSTM, from (Wieting & Gimpel, 2017). These models use a contrastive loss with a margin. Following their settings, we fix the margin to 0.4 and tune the number of batches to pool for selecting negative examples from {40, 60, 80, 100}. For both models, we set the dimension of the embeddings to 1024. For BILSTM, we train a single layer bidirectional LSTM with hidden states of 512 dimensions. To create the sentence embedding, the forward and backward hidden states are concatenated and mean-pooled. Following (Wieting & Gimpel, 2017), we shuffle the inputs with probability p, tuning p from {0.3, 0.5}. We also implicitly compare to previous machine translation approaches like (Espana-Bonet et al., 2017; Schwenk & Douze, 2017; Artetxe & Schwenk, 2018) in Appendix A where we explore different variations of training LSTM sequence-to-sequence models. We find that our translation baselines reported in the tables below (both LSTM and Transformer) outperform the architectures from these works due to using the Attention and Logit methods mentioned in Section 2 , demonstrating that our baselines represent, or even over-represent, the state-of-the-art for machine translation approaches. 
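For reference, a rough sketch of the margin-based contrastive objective used to train the SP and BILSTM baselines is given below. The margin of 0.4 matches the setting above, but the negative selection here simply takes the hardest in-pool negative, whereas the original recipe pools 40 to 100 batches of candidates, so treat this as an approximation rather than the exact training loss.

```python
import torch
import torch.nn.functional as F

def margin_contrastive_loss(src_emb, tgt_emb, margin=0.4):
    # src_emb[i] and tgt_emb[i] are embeddings of the two sides of the i-th translation pair.
    src = F.normalize(src_emb, dim=-1)
    tgt = F.normalize(tgt_emb, dim=-1)
    sim = src @ tgt.t()                      # cosine similarities, pool_size x pool_size
    pos = sim.diag()                         # similarity of each true translation pair
    # Mask the positives and take the hardest negative for each source sentence.
    neg_sim = sim - torch.eye(sim.size(0), device=sim.device) * 1e9
    hardest_neg = neg_sim.max(dim=1).values
    return F.relu(margin - pos + hardest_neg).mean()
```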
BGT Ablations Lastly, we compare to ablations of our model to better understand the benefits of parallel data, language-specific variables, the KL loss term, and how much we gain from the more conventional translation baselines. • ENGLISHAE: English autoencoder on the English side of our en-fr data. • ENGLISHVAE: English variational autoencoder on the English side of our en-fr data. • ENGLISHTRANS: Translation from en to fr. • BILINGUALTRANS: Translation from both en to fr and fr to enwhere the encoding parameters are shared but each language has its own decoder. • BGT W/O LANGVARS: A model similar to BILINGUALTRANS, but it includes a prior over the embedding space and therefore a KL loss term. This model differs from BGT since it does not have any language-specific variables. • BGT W/O PRIOR: Follows the same architecture as BGT, but without the priors and KL loss term. 2Note that in all experiments using BERT, including Sentence-BERT, the large, uncased version is used. 3Most work evaluating accuracy on STS tasks has averaged the Pearson’s r over each individual dataset for each year of the STS competition. However, Reimers & Gurevych (2019) computed Spearman’s ρ over concatenated datasets for each year of the STS competition. To be consistent with previous work, we re-ran their model and calculated results using the standard method, and thus our results are not the same as those reported Reimers & Gurevych (2019). 4.2 EXPERIMENTAL SETTINGS The training data for our models is a mixture of OpenSubtitles 20184 en-fr data and en-fr Gigaword5 data. To create our dataset, we combined the complete corpora of each dataset and then randomly selected 1,000,000 sentence pairs to be used for training with 10,000 used for validation. We use sentencepiece (Kudo & Richardson, 2018) with a vocabulary size of 20,000 to segment the sentences, and we chose sentence pairs whose sentences are between 5 and 100 tokens each. In designing the model architectures for the encoders and decoders, we experimented with Transformers and LSTMs. Due to better performance, we use a 5 layer Transformer for each of the encoders and a single layer decoder for each of the decoders. This design decision was empirically motivated as we found using a larger decoder was slower and worsened performance, but conversely, adding more encoder layers improved performance. More discussion of these trade-offs along with ablations and comparisons to LSTMs are included in Appendix C. For all of our models, we set the dimension of the embeddings and hidden states for the encoders and decoders to 1024. Since we experiment with two different architectures,6 we follow two different optimization strategies. For training models with Transformers, we use Adam (Kingma & Ba, 2014) with β1 = 0.9, β2 = 0.98, and = 10−8. We use the same learning rate schedule as (Vaswani et al., 2017), i.e., the learning rate increases linearly for 4,000 steps to 5× 10−4, after which it is decayed proportionally to the inverse square root of the number of steps. For training the LSTM models, we use Adam with a fixed learning rate of 0.001. We train our models for 20 epochs. For models incorporating a translation loss, we used label smoothed cross entropy (Szegedy et al., 2016; Pereyra et al., 2017) with = 0.1. For ENGLISHVAE, BGT and BILINGUALTRANS, we anneal the KL term so that it increased linearly for 216 updates, which robustly gave good results in preliminary experiments. 
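The learning-rate schedule and KL annealing described above can be sketched as follows. The reading of "216 updates" as 2^16 is an assumption on our part, as is the exact functional form of the warmup; both are hedged in the comments.

```python
def transformer_lr(step, warmup_steps=4000, peak_lr=5e-4):
    # Linear warmup to peak_lr over warmup_steps, then decay proportional to the
    # inverse square root of the step count, following the schedule described above.
    step = max(step, 1)
    scale = min(step / warmup_steps, (warmup_steps / step) ** 0.5)
    return peak_lr * scale

def kl_weight(step, anneal_steps=2 ** 16):
    # Linear KL annealing: the KL term's weight grows from 0 to 1 over anneal_steps updates.
    # The text reports "216 updates", which we read here as 2^16; this is an assumption.
    return min(step / anneal_steps, 1.0)
```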
We also found that in training BGT, combining its loss with the BILINGUALTRANS objective during training of both models increased performance, and so this loss was summed with the BGT loss in all of our experiments. We note that this doesn’t affect our claim of BGT being a generative model, as this loss is only used in a multi-task objective at training time, and we calculate the generation probabilities according to standard BGT at test time. Lastly, in Appendix B, we illustrate that it is crucial to train the Transformers with large batch sizes. Without this, the model can learn the goal task (such as translation) with reasonable accuracy, but the learned semantic embeddings are of poor quality until batch sizes approximately reach 25,000 tokens. Therefore, we use a maximum batch size of 50,000 tokens in our ENGLISHTRANS, BILINGUALTRANS, and BGT W/O PRIOR, experiments and 25,000 tokens in our BGT W/O LANGVARS and BGT experiments. 4.3 EVALUATION Our primary evaluation are the 2012-2016 SemEval Semantic Textual Similarity (STS) shared tasks (Agirre et al., 2012; 2013; 2014; 2015; 2016), where the goal is to accurately predict the degree to which two sentences have the same meaning as measured by human judges. The evaluation metric is Pearson’s r with the gold labels. Secondly, we evaluate on Hard STS, where we combine and filter the STS datasets in order to make a more difficult evaluation. We hypothesize that these datasets contain many examples where their gold scores are easy to predict by either having similar structure and word choice and a high score or dissimilar structure and word choice and a low score. Therefore, we split the data using symmetric word error rate (SWER),7 finding sentence pairs with low SWER and low gold scores as well as sentence pairs with high SWER and high gold scores. This results in two datasets, Hard+ which have SWERs in the bottom 20% of all STS pairs and whose gold label is between 0 and 1,8 and 4http://opus.nlpl.eu/OpenSubtitles.php 5https://www.statmt.org/wmt10/training-giga-fren.tar 6We use LSTMs in our ablations. 7We define symmetric word error rate for sentences s1 and s2 as 12WER(s1, s2) + 1 2 WER(s2, s2), since word error rate (WER) is an asymmetric measure. 8STS scores are between 0 and 5. Hard- where the SWERs are in the top 20% of the gold scores are between 4 and 5. We also evaluate on a split where negation was likely present in the example.9 Examples are shown in Table 1. Lastly, we evaluate on STS in es and ar as well as cross-lingual evaluations for en-es, en-ar, and en-tr. We use the datasets from SemEval 2017 (Cer et al., 2017). For this setting, we train BILINGUALTRANS and BGT on 1 million examples from en-es, en-ar, and en-tr OpenSubtitles 2018 data. 4.4 RESULTS The results on the STS and Hard STS are shown in Table 2.10 From the results, we see that BGT has the highest overall performance. It does especially well compared to prior work on the two Hard STS datasets. We show further difficult splits in Table 3, including a negation split, beyond those used in Hard STS and compare the top two performing models in the STS task from Table 2. We also show easier splits in the bottom of the table. From these results, we see that both positive examples that have little shared vocabulary and structure and negative examples with significant shared vocabulary and structure benefit significantly from using a deeper architecture. Similarly, examples where negation occurs also benefit from our deeper model. 
These examples are difficult because more than just the identity of the words is needed to 9We selected examples for the negation split where one sentence contained not or ’t and the other did not. 10We obtained values for STS 2012-2016 from prior works using SentEval (Conneau & Kiela, 2018). Note that we include all datasets for the 2013 competition, including SMT, which is not included in SentEval. determine the relationship of the two sentences, and this is something that SP is not equipped for since it is unable to model word order. The bottom two rows show easier examples where positive examples have high overlap and low SWER and vice versa for negative examples. Both models perform similarly on this data, with the BGT model having a small edge consistent with the overall gap between these two models. Lastly, in Table 4, we show the results of STS evaluations in es and ar and cross-lingual evaluations for en-es, en-ar, and en-tr. From these results, we see that BGT has the best performance across all datasets, however the performance is significantly stronger than the BILINGUALTRANS and BGT W/O PRIOR baselines in the cross-lingual setting. Since BGT W/O LANGVARS also has significantly better performance on these tasks, most of this gain seems to be due to the prior have a regularizing effect. However, BGT outperforms BGT W/O LANGVARS overall, and we hypothesize that the gap in performance between these two models is due to BGT being able to strip away the language-specific information in the representations with its language-specific variables, allowing for the semantics of the sentences to be more directly compared. 5 ANALYSIS We next analyze our BGT model by examining what elements of syntax and semantics the language and semantic variables capture relative both to each-other and to the sentence embeddings from the BILINGUALTRANS models. We also analyze how the choice of language and its lexical and syntactic distance from English affects the semantic and syntactic information captured by the semantic and language-specific encoders. Finally, we also show that our model is capable of sentence generation in a type of style transfer, demonstrating its capabilities as a generative model. 5.1 STS We first show that the language variables are capturing little semantic information by evaluating the learned English language-specific variable from our BGT model on our suite of semantic tasks. The results in Table 5 show that these encoders perform closer to a random encoder than the semantic encoder from BGT. This is consistent with what we would expect to see if they are capturing extraneous language-specific information. 5.2 PROBING We probe our BGT semantic and language-specific encoders, along with our BILINGUALTRANS encoders as a baseline, to compare and contrast what aspects of syntax and semantics they are learning relative to each other across five languages with various degrees of similarity with English. All models are trained on the OpenSubtitles 2018 corpus. We use the datasets from (Conneau et al., 2018) for semantic tasks like number of subjects and number of objects, and syntactic tasks like tree depth, and top constituent. Additionally, we include predicting the word content and sentence length. We also add our own tasks to validate our intuitions about punctuation and language-specific information. In the first of these, punctuation number, we train a classifier to predict the number of punctuation marks11 in a sentence. 
To make the task more challenging, we limit each label to have at most 20,000 examples split among training, validation, and testing data.12 In the second task, punctuation first, we train a classifier to predict the identity of the first punctuation mark in the sentence. In our last task, gender, we detect examples where the gender of the articles in the sentence is incorrect in French of Spanish. To create an incorrect example, we switch articles from {le, la, un, une} for French and {el, la, los, las} for Spanish, with their (indefinite or definite for French and singular or plural for Spanish) counterpart with the opposite gender. This dataset was balanced so random chances gives 50% on the testing data. All tasks use 100,000 examples for training and 10,000 examples for validation and testing. The results of these experiments are shown in Table 6. These results show that the source separation is effective - stylistic and language-specific information like length, punctuation and language-specific gender information are more concentrated in the language variables, while word content, semantic and syntactic information are more concentrated in the semantic encoder. The choice of language is also seen to be influential on what these encoders are capturing. When the languages are closely related to English, like in French and Spanish, the performance difference between the semantic and English language encoder is larger for word content, subject number, object number than for more distantly related languages like Arabic and 11Punctuation were taken from the set { ’ ! ” # $ % & \’ ( ) ∗ + , − . / : ; < = > ? @ [ ] ˆ ‘ {— } ’̃ . }. 12The labels are from 1 punctuation mark up to 10 marks with an additional label consolidating 11 or more marks. Turkish. In fact, word content performance is directly tied to how well the alphabets of the two languages overlap. This relationship matches our intuition, because lexical information will be cheaper to encode in the semantic variable when it is shared between the languages. Similarly for the tasks of length, punctuation first, and punctuation number, the gap in performance between the two encoders also grows as the languages become more distant from English. Lastly, the gap on STS performance between the two encoders shrinks as the languages become more distant, which again is what we would expect, as the language-specific encoders are forced to capture more information. Japanese is an interesting case in these experiments, where the English language-specific encoder outperforms the semantic encoder on the semantic and syntactic probing tasks. Japanese is a very distant language to English both in its writing system and in its sentence structure (it is an SOV language, where English is an SVO language). However, despite these difference, the semantic encoder strongly outperforms the English language-specific encoder, suggesting that the underlying meaning of the sentence is much better captured by the semantic encoder. 5.3 GENERATION AND STYLE TRANSFER In this section, we qualitatively demonstrate the ability of our model to generate sentences. We focus on a style-transfer task where we have original seed sentences from which we calculate our semantic vector zsem and language specific vector zen. Specifically, we feed in a Source sentence into the semantic encoder to obtain zsem, and another Style sentence into the English languagespecific encoder to obtain zen. We then generate a new sentence using these two latent variables. 
This can be seen as a type of style transfer where we expect the model to generate a sentence that has the semantics of the Source sentence and the style of the Style sentence. We use our en-fr BGT model from Table 6 and show some examples in Table 7. All input sentences are from heldout en-fr OpenSubtitles data. From these examples, we see further evidence of the role of the semantic and language-specific encoders, where most of the semantics (e.g. topical word such as seen and tech in the Source sentence) are reflected in the output, but length and structure are more strongly influenced by the language-specific encoder. 6 CONCLUSION We propose Bilingual Generative Transformers, a model that uses parallel data to learn to perform source separation of common semantic information between two languages from language-specific information. We show that the model is able to accomplish this source separation through probing tasks and text generation in a style-transfer setting. We find that our model bests all baselines on semantic similarity tasks, with the largest gains coming from a new challenge we propose as Hard STS, designed to foil methods approximating semantic similarity as word overlap. We also find our model to be especially effective on cross-lingual semantic similarity, due to its stripping away of language-specific information allowing for the underlying semantics to be more directly compared. In future work, we will explore generalizing this approach to the multilingual setting. A LOCATION OF SENTENCE EMBEDDING IN DECODER FOR LEARNING REPRESENTATIONS As mentioned in Section 2, we experimented with 4 ways to incorporate the sentence embedding into the decoder: Word, Hidden, Attention, and Logit. We also experimented with combinations of these 4 approaches. We evaluate these embeddings on the STS tasks and show the results, along with the time to train the models 1 epoch in Table 8. For these experiments, we train a single layer bidirectional LSTM (BiLSTM) ENGLISHTRANS model with embedding size set to 1024 and hidden states set to 512 dimensions (in order to be roughly equivalent to our Transformer models). To form the sentence embedding in this variant, we mean pool the hidden states for each time step. The cell states of the decoder are initialized to the zero vector. From this analysis, we see that the best performance is achieved with Logit, when the sentence embedding is place just prior to the softmax. The performance is much better than Hidden or Hidden+Word used in prior work. For instance, recently (Artetxe & Schwenk, 2018) used the Hidden+Word strategy in learning multilingual sentence embeddings. A.1 VAE TRAINING We also found that incorporating the latent code of a VAE into the decoder using the Logit strategy increases the mutual information while having little effect on the log likelihood. We trained two LSTM VAE models following the settings and aggressive training strategy in (He et al., 2019), where one LSTM model used the Hidden strategy and the other used the Hidden + Logit strategy. We trained the models on the en side of our en-fr data. We found that the mutual information increased form 0.89 to 2.46, while the approximate negative log likelihood, estimated by importance weighting, increased slightly from 53.3 to 54.0 when using Logit. B RELATIONSHIP BETWEEN BATCH SIZE AND PERFORMANCE FOR TRANSFORMER AND LSTM It has been observed previously that the performance of Transformer models is sensitive to batch size Popel & Bojar (2018) . 
We found this to be especially true when training sequence-to-sequence models to learn sentence embeddings. Figure 3 shows plots of the average 2012-2016 STS performance of the learned sentence embedding as batch size increases for both the BiLSTM and Transformer. Initially, at a batch size of 2500 tokens, sentence embeddings learned are worse than random, even though validation perplexity does decrease during this time. Performance rises as batch size increases up to around 100,000 tokens. In contrast, the BiLSTM is more robust to batch size, peaking much earlier around 25,000 tokens, and even degrading at higher batch sizes. C MODEL ABLATIONS In this section, we vary the number of layers in the encoder and decoder in BGT W/O PRIOR. We see that performance increases as the number of encoder layers increases, and also that a large decoder hurts performance, allowing us to save training time by using a single layer. These results can be compared to those in Table 9 showing that Transformers outperform BiLSTMS in these experiments. D CLASSIFICATION EXPERIMENTS To explore our embeddings in more detail, we evaluated them on the Quora Question Pairs dataset13 (QQP). This is a paraphrase classification task, which is also part of GLUE (Wang et al., 2018). Since the test set is private, we deviated slightly from the standard evaluation protocol and split the development set into two halves of 20,215 examples each – one half for model selection and the other for evaluation. We evaluated in two ways, cosine, where we score all pairs with cosine similarity and then find the threshold that gives the best accuracy, and logistic regression where we use logistic regression. Its worth noting that the pretrained baseline models on this task were directly trained to produce the feature set used by the downstream classifier, while our embeddings are trained without this supervision. They also tend to have larger dimensions which also gives them an advantage which is discussed in more detail in (Wieting & Kiela, 2019). The results are shown in Table 10 and show that our BGT model outperforms the baseline models, SP, ENGLISHTRANS, 13data.quora.com/First-Quora-Dataset-Release-Question-Pairs and BILINGUALTRANS for both evaluations, and compares favorably to the pretrained models when evaluated using cosine similarity scores. The only models which perform better are USE which was trained on Quora data in an unsupervised way and Sentence-BERT which uses BERT. Our models are not as strong when using classification for final predictions. This indicates that the embeddings learned by our approach may be most useful when no downstream training is possible – though semisupervised objectives that consider the downstream task might aid our approach, like the baselines, if downstream training is the goal.
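As an illustration of the "cosine" evaluation protocol on QQP, the sketch below scores each question pair by cosine similarity and sweeps a threshold for the best accuracy. The function name and the exhaustive sweep over observed similarities are our own choices for clarity; the paper only states that the threshold giving the best accuracy is selected on the model-selection half of the development set.

```python
import numpy as np

def best_threshold_accuracy(emb_a, emb_b, labels):
    # emb_a, emb_b: n x d sentence embeddings for the two questions in each pair.
    # labels: length-n array of 0/1 paraphrase labels.
    a = emb_a / np.linalg.norm(emb_a, axis=1, keepdims=True)
    b = emb_b / np.linalg.norm(emb_b, axis=1, keepdims=True)
    sims = (a * b).sum(axis=1)               # cosine similarity per pair
    best = 0.0
    for t in np.unique(sims):                # every observed similarity is a candidate cut-off
        acc = ((sims >= t).astype(int) == labels).mean()
        best = max(best, acc)
    return best
```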
1. What are the strengths and weaknesses of the proposed model in separating common and language-specific semantics? 2. How does the model perform in terms of sensitivity to word order on a sentence level? 3. What are the limitations of the empirical analysis presented in the paper? 4. How much do the gains shown in the experiments depend on tuning versus the inherent design of the model? 5. Are there any concerns regarding the superficiality of the analysis presented in the paper?
Review
The paper presents a model that, given parallel bilingual data, separates the common semantics from the language-specific semantics at the sentence level. Overall, the presentation is clear and the experiments show gains over the baselines. One major point of confusion, however, is that while the introduction repeatedly states that one of the strengths of the proposed model is its sensitivity to word order at the sentence level, this particular aspect of the model is neither evaluated nor analysed. Instead, the empirical analysis focuses on sentence length, punctuation, and semantics. The analysis of all three aspects is superficial: for sentence length, it consists of computing the sentence mean and median; for punctuation, it consists of masking punctuation; and the last part just computes vectors of nouns only (and states that this is "semantics"). But there is no analysis per se. Overall, the paper presents a model and shows gains over baselines. The extent to which these gains are due to tuning as opposed to the inherent design of the model is not clear, and the analysis is superficial.
ICLR
Title Analyzing Transformers in Embedding Space Abstract Understanding Transformer-based models has attracted significant attention, as they lie at the heart of recent technological advances across machine learning. While most interpretability methods rely on running models over inputs, recent work has shown that a zero-pass approach, where parameters are interpreted directly without a forward/backward pass is feasible for some Transformer parameters, and for two-layer attention networks. In this work, we present a theoretical analysis where all parameters of a trained Transformer are interpreted by projecting them into the embedding space, that is, the space of vocabulary items they operate on. We derive a simple theoretical framework to support our arguments and provide ample evidence for its validity. First, an empirical analysis showing that parameters of both pretrained and fine-tuned models can be interpreted in embedding space. Second, we present two applications of our framework: (a) aligning the parameters of different models that share a vocabulary, and (b) constructing a classifier without training by “translating” the parameters of a fine-tuned classifier to parameters of a different model that was only pretrained. Overall, our findings open the door to interpretation methods that, at least in part, abstract away from model specifics and operate in the embedding space only. 1 INTRODUCTION Transformer-based models [Vaswani et al., 2017] currently dominate Natural Language Processing [Devlin et al., 2018; Radford et al., 2019; Zhang et al., 2022] as well as many other fields of machine learning [Dosovitskiy et al., 2020; Chen et al., 2020; Baevski et al., 2020]. Consequently, understanding their inner workings has been a topic of great interest. Typically, work on interpreting Transformers relies on feeding inputs to the model and analyzing the resulting activations [Adi et al., 2016; Shi et al., 2016; Clark et al., 2019]. Thus, interpretation involves an expensive forward, and sometimes also a backward pass, over multiple inputs. Moreover, such interpretation methods are conditioned on the input, and are not guaranteed to generalize to all inputs. In the evolving literature on static interpretation, i.e., without forward or backward passes, Geva et al. [2022b] showed that the value vectors of the Transformer feed-forward module (the second layer of the feed-forward network) can be interpreted by projecting them into the embedding space, i.e., multiplying them by the embedding matrix to obtain a representation over vocabulary items. Elhage et al. [2021] have shown that in a 2-layer attention network, weight matrices can be interpreted in the embedding space as well. In this work, we extend the theoretical analysis and findings of Elhage et al. [2021] and Geva et al. [2022b], and present a zero-pass framework to understand the behaviour of Transformers. Conceretely, we interpret all weights of a pretrained language model (LM) in embedding space, including both keys and values of the feed-forward module as well as all attention parameters. Our theory relies on a simple observation. Since Geva et al. [2022b] have shown that one can project hidden states to the embedding space via the embedding matrix, we can extend this to other parts of the model by projecting to the embedding space and then projecting back by multiplying with a right-inverse of the embedding matrix. Thus, we can recast inner products in the model as inner products in embedding space. 
Viewing inner products in this way, we can interpret such products as interactions between pairs of vocabulary items.1 This applies to (a) interactions between 1We refer to the unique items of the vocabulary as vocabulary items, and to the (possibly duplicate) elements of a tokenized input as tokens. 2. Pack them into a similarity matrix attention queries and keys as well as to (b) interactions between attention value vectors and the parameters that project them at the output of the attention module. Taking this perspective to an extreme, one can view Transformers as operating implicitly in the embedding space. This entails the existence of a single linear space that depends solely on the tokenizer, in which parameters of different Transformers can be compared. Thus, one can use the embedding space to compare and transfer information across different models that share a tokenizer. We provide extensive empirical evidence for the credibility of our proposal. On the interpretation front (Fig. 1, Left), we provide qualitative and quantitative evidence that Transformer parameters can be interpreted in embedding space. We also show that when fine-tuning a pretrained LM on a sentiment analysis task (over movie reviews), projecting changes in parameters into embedding space yields words that characterize sentiment towards movies. Second (Fig. 1, Center), we show that given two distinct instances of BERT pretrained with different random seeds [Sellam et al., 2022], we can align layers of the two instances by casting their weights into the embedding space. We find that indeed layer i of the first instance aligns well to layer i of the second instance, showing the different BERT instances converge to a semantically-similar solution. Last (Fig. 1, Right), we take a model fine-tuned on a sentiment analysis task and “transfer” the learned weights to a different model that was only pretrained by going through the embedding spaces of the two models. We show that in 30% of the cases, this procedure, termed stitching, results in a classifier that reaches an impressive accuracy of 70% on the IMDB benchmark [Maas et al., 2011] without any training. Overall, our findings suggest that analyzing Transformers in embedding space is fruitful for both interpretability and as a tool to relate different models that share a vocabulary, and opens the door to interpretation methods that operate in embedding space only. Our code is available at https: //anonymized. 2 BACKGROUND We now present the main components of the Transformer [Vaswani et al., 2017] relevant to our analysis. We discuss the residual stream view of Transformers, and recapitulate a view of the attention layer parameters as interaction matrices WVO and WQK [Elhage et al., 2021]. Similar to Elhage et al. [2021], we exclude biases and layer normalization from our analysis. 2.1 TRANSFORMER ARCHITECTURE The Transformer consists of a stack of layers, each includes an attention module followed by a Feed-Forward (FF) module. All inputs and outputs are sequences of N vectors of dimensionality d. The Attention Module takes as input a sequence of representations X ∈ RN×d, and each layer L is parameterized by four matrices W (L)Q ,W (L) K ,W (L) V ,W (L) O ∈ Rd×d (we henceforth omit the layer superscript for brevity). The input X is projected to produce queries, keys, and values: Qatt = XWQ,Katt = XWK , Vatt = XWV . Each one of Qatt,Katt, Vatt is split along the columns to H different heads of dimensionality RN× dH , denoted by Qiatt,Kiatt, V iatt respectively. 
We then compute H attention maps: Ai = softmax ( QiattK iT att√ d/H +M ) ∈ RN×N , where M ∈ RN×N is the attention mask. Each attention map is applied to the corresponding value head as AiV iatt, results are concatenated along columns and projected via WO. The input to the module is added via a residual connection, and thus the attention module’s output is: X + Concat [ A1V 1att, . . . , A iV iatt, . . . , A HV Hatt ] WO. (1) The FF Module is a two-layer neural network, applied to each position independently. Following past terminology [Sukhbaatar et al., 2019; Geva et al., 2020], weights of the first layer are called FF keys and weights of the second layer FF values. This is an analogy to attention, as the FF module too can be expressed as: f(QKT)V , where f is the activation function, Q ∈ RN×d is the output of the attention module and the input to the FF module, and K,V ∈ Rdff×d are the weights of the first and second layers of the FF module. Unlike attention, keys and values are learnable parameters. The output of the FF module is added to the output of the attention module to form the output of the layer via a residual connection. The output of the i-th layer is called the i-th hidden state. Embedding Matrix To process sequences of discrete tokens, Transformers use an embedding matrix E ∈ Rd×e that provides a d-dimensional representation to vocabulary items before entering the first Transformer layer. When training Transformers with a language modeling objective, the same embedding matrix E is often used [Press and Wolf, 2016] to take the output of the last Transformer layer and project it back to the vocabulary dimension, i.e., into the embedding space. In this work, we will interpret all components of the Transformer model in the embedding space. 2.2 THE RESIDUAL STREAM We rely on a useful view of the Transformer through its residual connections proposed by Elhage et al. [2021].2 Specifically, each layer takes a hidden state as input and adds information to the hidden state through its residual connection. Under this view, the hidden state is a residual stream passed along the layers, from which information is read, and to which information is written at each layer. Elhage et al. [2021] and Geva et al. [2022b] observed that the residual stream is often barely updated in the last layers, and thus the final prediction is determined in early layers and the hidden state is mostly passed through the later layers. An exciting consequence of the residual stream view is that we can project hidden states in every layer into embedding space by multiplying the hidden state with the embedding matrix E, treating the hidden state as if it were the output of the last layer. Geva et al. [2022a] used this approach to interpret the prediction of Transformer-based language models, and we follow a similar approach. 2.3 WQK AND WVO Following Elhage et al. [2021], we describe the attention module in terms of interaction matrices WQK and WVO which will be later used in our theoretical derivation. The computation of the attention module (§2.1) can be re-interpreted as follows. The attention projection matrices WQ,WK,WV can be split along the column axis to H equal parts denoted by W iQ,W i K,W i V ∈ Rd× d H for 1 ≤ i ≤ H . Similarly, the attention output matrix WO can be split along the row axis into H heads, W iO ∈ Rd/H×d. We define the interaction matrices as W iQK := W i QW iT K ∈ Rd×d, W iVO := W iVW iO ∈ Rd×d. 2Though earlier mentions include nostalgebraist [2020]. 
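A minimal sketch of how these per-head interaction matrices can be assembled from a layer's attention projections is shown below, assuming the column and row head-splitting conventions just described. Parameter names are hypothetical and not tied to any particular library's weight layout.

```python
import torch

def interaction_matrices(W_Q, W_K, W_V, W_O, n_heads):
    # W_Q, W_K, W_V are d x d matrices split into heads along their columns;
    # W_O is a d x d matrix split into heads along its rows.
    d = W_Q.size(0)
    head_dim = d // n_heads
    W_QK, W_VO = [], []
    for i in range(n_heads):
        cols = slice(i * head_dim, (i + 1) * head_dim)
        W_QK.append(W_Q[:, cols] @ W_K[:, cols].T)   # d x d, input-independent
        W_VO.append(W_V[:, cols] @ W_O[cols, :])     # d x d, input-independent
    return W_QK, W_VO
```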
Importantly, W iQK,W i VO are input-independent. Intuitively, WQK encodes the amount of attention between pairs of tokens. Similarly, in W iVO, the matrices WV and WO can be viewed as a transition matrix that determines how attending to certain tokens affects the subsequent hidden state. We can restate the attention equations in terms of the interaction matrices. Recall (Eq. 1) that the output of the i’th head of the attention module is AiV iatt and the final output of the attention module is (without the residual connection): Concat [ A1V 1att, ..., A iV iatt, ..., A HV Hatt ] WO = H∑ i=1 Ai(XW iV)W i O = H∑ i=1 AiXW iVO. (2) Similarly, the attention map Ai at the i’th head in terms of WQK is (softmax is done row-wise): Ai = softmax ( (XW iQ)(XW i K) T√ d/H +M ) = softmax ( X(W iQK)X T√ d/H +M ) . (3) 3 PROJECTING TRANSFORMER PARAMETERS INTO EMBEDDING SPACE In this section, we propose that Transformer parameters can be projected into embedding space for interpretation purposes. Our results extend Elhage et al. [2021] who obtained similar results for a two-layer attention-only network. We empirically support our framework in §4-§5. Given a matrix A ∈ RN×d, we can project it into embedding space by multiplying by the embedding matrix E as  = AE ∈ RN×e. Let E′ be a right-inverse of E, that is, EE′ = I ∈ Rd×d.3 Then we can reconstruct the original matrix with E′ as A = A(EE′) = ÂE′. We will use this simple identity to reinterpret the model’s operation in embedding space. To simplify our analysis, we ignore layer norms and biases, a standard simplification justified in prior work [Elhage et al., 2021]. In interpretation experiments (§4), we do not use an exact right inverse such as the Moore–Penrose pseudo-inverse [Moore, 1920; Bjerhammar, 1951; Penrose, 1955] but instead use the transpose of the embedding matrix E′ = ET. This is since interpretation involves not only projecting using E′ but also applying a top-k operation where we inspect the vocabulary items with the largest logits. We empirically find that the Moore–Penrose pseudo-inverse does not work well for interpretation due to the top-k operation, and provide a justification and comprehensive empirical evidence in Appendix A. Conversely, ET empirically works well, and we conjecture this is due to the training procedure of LMs where E is used to embed discrete tokens into the hidden state dimension and ET is used to predict a distribution over the vocabulary items from the last hidden state. Attention Module Recall that W iVO := W iVW iO ∈ Rd×d is the interaction matrix between attention values and the output projection matrix for attention head i. By definition, the output of each head is: AiXW iVO = A iX̂E′W iVO. Since the output of the attention module is added to the residual stream, we can assume according to the residual stream view that it is meaningful to project it to the embedding space, similar to FF values. Thus, we expect the sequence of N e-dimensional vectors (AiXW iVO)E = A iX̂(E′W iVOE) to be interpretable. Importantly, the role of A i is just to mix the representations of the updated N input vectors. This is similar to the FF module, where FF values (the parameters of the second layer) are projected into embedding space, and FF keys (parameters of the first layer) determine the coefficients for mixing them. Hence, we can assume that the interpretable components are in the term X̂(E′W iVOE). 
Zooming in on this operation, we see that it takes the previous hidden state in the embedding space (X̂) and produces an output in the embedding space which will be incorporated into the next hidden state through the residual stream. Thus, E′W iVOE is a transition matrix that takes a representation the embedding space and outputs a new representation in the same space. Similarly, the matrix W iQK can be viewed as a bilinear map (Eq. 3). To interpret it in embedding space, we perform the following operation with E′: XW iQKX T = (XEE′)W iQK(XEE ′)T = (XE)E′W iQKE ′T(XE)T = X̂(E′W iQKE ′T)X̂T. 3E′ exists if d ≤ e and E is full-rank. Therefore, the interaction between tokens at different positions is determined by an e×e matrix that expresses the interaction between pairs of vocabulary items. FF Module Geva et al. [2022b] showed that FF value vectors V ∈ Rdff×d are meaningful when projected into embedding space, i.e., for a FF value vector v ∈ Rd, vE ∈ Re is interpretable (see §2.1). In vectorized form, the rows of V E ∈ Rdff×e are interpretable. On the other hand, the keys K of the FF layer are multiplied on the left by the output of the attention module, which are the queries of the FF layer. Denoting the output of the attention module by Q, we can write this product as QKT = Q̂E′KT = Q̂(KE′T)T. Because Q is a hidden state, we assume according to the residual stream view that Q̂ is interpretable in embedding space. When multiplying Q̂ by KE′T, we are capturing the interaction in embedding space between each query and key, and thus expect KE′T to be interpretable in embedding space as well. Overall, FF keys and values are intimately connected – the i-th key controls the coefficient of the i-th value, so we expect their interpretation to be related. While not central to this work, we empirically show that key-value pairs in the FF module are similar in embedding space in Appendix B.1. Subheads Another way to interpret the matrices W iVO and W iQK is through the subhead view. We use the following identity: AB = ∑b j=1 A:,jBj,:, which holds for arbitrary matrices A ∈ Ra×b, B ∈ Rb×c, where A:,j ∈ Ra×1 are the columns of the matrix A and Bj,: ∈ R1×c are the rows of the matrix B. Thus, we can decompose W iVO and W i QK into a sum of d H rank-1 matrices: W iVO = d H∑ j=1 W i,jV W i,j O , W i QK = d H∑ j=1 W i,jQ W i,j K T . where W i,jQ ,W i,j K ,W i,j V ∈ Rd×1 are columns of W iQ,W iK,W iV respectively, and W i,j O ∈ R1×d are the rows of W iO. We call these vectors subheads. This view is useful since it allows us to interpret subheads directly by multiplying them with the embedding matrix E. Moreover, it shows a parallel between interaction matrices in the attention module and the FF module. Just like the FF module includes key-value pairs as described above, for a given head, its interaction matrices are a sum of interactions between pairs of subheads (indexed by j), which are likely to be related in embedding space. We show this is indeed empirically the case for pairs of subheads in Appendix B.1. We summarize our approach for projecting the different components of the Transformer into embedding space in Table 1. 4 INTERPRETABILITY EXPERIMENTS In this section, we provide empirical evidence for the viability of our approach as a tool for interpreting Transformer parameters. 4.1 PARAMETER INTERPRETATION EXAMPLES We take GPT-2 medium [Radford et al., 2019] and manually analyze its parameters. GPT-2 medium has a total of 384 attention heads (24 layers and 16 heads per layer). 
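As a small illustration of this projection, the sketch below uses the transpose of the embedding matrix as the right-inverse (as in the interpretation experiments) and inspects one row of the projected transition matrix at a time, which avoids materializing the full e x e matrix. The per-source-item view shown here is a simplification of the top-k pair analysis described next, and the function name is our own.

```python
import torch

def top_vo_targets(E, W_VO_head, source_token_id, k=50):
    # E is the d x e embedding matrix; W_VO_head is one head's d x d interaction matrix.
    # Returns the k vocabulary items most strongly promoted when attending to source_token_id,
    # i.e. the largest entries of the corresponding row of E^T W_VO E.
    source_vec = E[:, source_token_id]        # d-dimensional embedding of the source item
    row = (source_vec @ W_VO_head) @ E        # e scores over target vocabulary items
    return torch.topk(row, k).indices
```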
We take the embedded transition matrices $E' W^i_{VO} E$ for all heads and examine the top-$k$ pairs of vocabulary items. As there are only 384 heads, we manually choose a few heads and present the top-$k$ pairs in Appendix C.1 ($k = 50$). We observe that different heads capture different types of relations between pairs of vocabulary items, including word parts, heads that focus on gender, geography, orthography, particular part-of-speech tags, and various semantic topics. In Appendix C.2 we perform a similar analysis for $W_{QK}$.
Appendix C.3 provides examples of key-value pairs from the FF modules of GPT-2 medium. We show random pairs $(k, v)$ from the set of those pairs such that, when looking at the top-100 vocabulary items for $k$ and $v$, at least 15% overlap. Such pairs account for approximately 5% of all key-value pairs. The examples show how key-value pairs often revolve around similar topics such as media, months, organs, etc.
Last, we show we can use embeddings to locate FF values (or keys) related to a particular topic. We take a few vocabulary items related to a certain topic, e.g., [‘cm’, ‘kg’, ‘inches’], average their embeddings (we subtract the average embedding µ from $E$ before averaging, which improves interpretability), and rank all FF values (or keys) based on their dot-product with the average. Appendix C.4 shows a few examples of FF values found with this method that are related to programming, measurements, and animals.
4.2 HIDDEN STATE AND PARAMETERS
An advantage of zero-pass interpretation is that it does not require running inputs through the model, which is expensive and non-exhaustive. In this section (and this section only), we run a forward pass over inputs and examine if the representations in embedding space of dynamically-computed hidden states are “similar” to the representations of static parameter vectors that are activated.
A technical side note: we use GPT-2, which applies layer norm to the Transformer output before projecting it to the embedding space with $E$. Thus, conservatively, layer norm should be considered as part of the projection operation. (Layer norm consists of standardizing the mean and variance of the input followed by an affine transformation; the latter part can be easily absorbed into $E$, while adding a bias term.) Empirically however, we observe that projecting parameters directly without layer norm works well, which simplifies our analysis in §3. An exception is when projecting hidden states in this section, where we apply layer norm before projection to improve performance, similar to Geva et al. [2022a].
Experimental Design We use GPT-2 medium and run it over 60 examples from IMDB [Maas et al., 2011]. This provides us with a dynamically-computed hidden state $h$ for every token and at the output of every layer. For the projection $\hat{h} \in \mathbb{R}^e$ of each such hidden state, we take the projections of the $m$ most active parameter vectors $\{\hat{x}_i\}_{i=1}^{m}$ in the layer that computed $h$ and check if they cover the dominant vocabulary items of $\hat{h}$ in embedding space. Specifically, let top-$k(wE)$ be the $k$ vocabulary items with the largest logits in embedding space for a vector $w \in \mathbb{R}^d$. We compute:
$$R_k(\hat{x}_1, \dots, \hat{x}_m, \hat{h}) = \frac{\left|\text{top-}k(\hat{h}) \cap \bigcup_{i=1}^{m} \text{top-}k(\hat{x}_i)\right|}{k},$$
to capture if activated parameter vectors cover the main vocabulary items corresponding to the hidden state.
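A minimal sketch of the $R_k$ score just defined follows; function and variable names are ours, and it assumes its inputs have already been projected into embedding space.

```python
# R_k coverage score: fraction of the hidden state's top-k vocabulary items that also
# appear in the top-k of at least one of the m most active parameter vectors.
import torch

def top_k_items(vec_e: torch.Tensor, k: int) -> set:
    """vec_e: a vector already projected into embedding space (length e)."""
    return set(torch.topk(vec_e, k).indices.tolist())

def coverage_r_k(param_vecs_e: torch.Tensor, hidden_e: torch.Tensor, k: int = 100) -> float:
    """param_vecs_e: (m, e) projections of the m most active parameter vectors.
    hidden_e: (e,) projection of the hidden state."""
    hidden_top = top_k_items(hidden_e, k)
    covered = set()
    for row in param_vecs_e:
        covered |= top_k_items(row, k)
    return len(hidden_top & covered) / k

# Toy usage with random vectors; real inputs would come from a forward pass over IMDB.
print(coverage_r_k(torch.randn(10, 50257), torch.randn(50257), k=100))
```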
We find the m most active parameter vectors separately for FF keys (K), FF values (V ), attention value subheads (WV) (see §3), and attention output subheads (WO), where the activation of each parameter vector is determined by the vector’s “coefficient” as follows. For a FF key-value pair (k, v) the coefficient is σ(qTk), where q ∈ Rd is an input to the FF module, and σ is the FF nonlinearity. For attention value-output subhead pairs (v, o) the coefficient is xTv, where x is the input to this component (for attention head i, the input is one of the rows of AiX , see Eq. 2). Results and Discussion Figure 2 presents the Rk score averaged across tokens per layer. As a baseline, we compare Rk of the activated vectors {x̂i}mi=1 with the correctly-aligned hidden state ĥ at the output of the relevant layer (blue bars) against the the Rk when randomly sampling ĥrand from the set of all hidden states (orange bars). We conclude that the representations in embedding space induced by activated parameter vector mirror, at least to some extent, the representations of the hidden states themselves. Appendix §B.2 shows a variant of this experiment, where we compare activated parameters throughout GPT2-medium’s layers to the last hidden state, which produces the logits used for prediction. 4.3 INTERPRETATION OF FINE-TUNED MODELS We now show that we can interpret the changes a model goes through during fune-tuning through the lens of embedding space. We fine-tune the top-3 layers of the 12-layer GPT-2-base with a sequence classification head on IMDB sentiment analysis (binary classification) and compute the difference between the original parameters and the fine-tuned model. We then project the difference of parameter vectors into embedding space and test if change is interpretable w.r.t sentiment analysis. Appendix D shows examples for projected differences randomly sampled from the fine-tuned layers. Frequently, the difference, or its negation, is projected to nouns, adjectives and adverbs that express sentiment for a movie, such as ‘amazing’, ‘masterpiece’, ‘incompetence’, etc. This shows that the differences are indeed projected into vocabulary items that characterize movie reviews’ sentiment. Almost all parameter groups present this behavior, except for V and WO, which curiously are the parameters added to the residual stream. 5 ALIGNING MODELS IN EMBEDDING SPACE Assuming Transformers by and large operate in embedding space leads to an exciting possibility - we can relate different models to one another so long as they share a vocabulary and tokenizer. In §5.1, we show that we can align the layers of BERT models trained with different random seeds. In §5.2, we show the embedding space can be leveraged to “stitch” the parameters of a fine-tuned model to a model that was not fine-tuned. 5.1 LAYER ALIGNMENT Experimental Design Taking our approach to the extreme, the embedding space is a universal space, which depends only on the tokenizer, and in which Transformer parameters and hidden states reside. Consequently, we can align parameter vectors from different models in this space and compare them even if they come from different models, as long as they share a vocabulary. To demonstrate this, we use MultiBERT [Sellam et al., 2022], which contains 25 different instantiations of BERT initialized from different random seeds. We take parameters from two MultiBERT seeds and compute the Pearson correlation between their projection to embedding space. For example, let VA, VB be the FF values of models A and B. 
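The next paragraph spells out the full alignment procedure; as a preview, here is a hedged sketch of its first step, projecting FF values from two seeds into embedding space and correlating them. It assumes the MultiBERT checkpoints are available on the Hugging Face hub under the names used below, and it subsamples FF values to keep the correlation matrix small; both choices are ours.

```python
# Sketch (assumptions ours): two MultiBERT seeds, one layer pair, subsampled FF values.
import torch
from transformers import BertModel

model_a = BertModel.from_pretrained("google/multiberts-seed_0")
model_b = BertModel.from_pretrained("google/multiberts-seed_1")

def projected_ff_values(model, layer, n_sample=256):
    ff_values = model.encoder.layer[layer].output.dense.weight.T   # (d_ff, d)
    emb = model.embeddings.word_embeddings.weight                  # (vocab, d)
    idx = torch.randperm(ff_values.size(0))[:n_sample]
    return ff_values[idx] @ emb.T                                  # (n_sample, vocab)

def pearson_rows(a, b):
    a = a - a.mean(dim=1, keepdim=True)
    b = b - b.mean(dim=1, keepdim=True)
    a = a / a.norm(dim=1, keepdim=True)
    b = b / b.norm(dim=1, keepdim=True)
    return a @ b.T                                                 # row-wise Pearson correlations

with torch.no_grad():
    va = projected_ff_values(model_a, layer=4)
    vb = projected_ff_values(model_b, layer=4)
    print(pearson_rows(va, vb).abs().mean())   # one binned entry of the layer-pair matrix S
```

Repeating this over all layer pairs fills the matrix S, which can then be fed to a Hungarian solver (e.g., scipy.optimize.linear_sum_assignment) to obtain the one-to-one layer alignment described next.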
We can project the values into embedding space: VAEA, VBEB , where EA, EB are the respective embedding matrices, and compute Pearson correlation between projected values. This produces a similarity matrix S̃ ∈ R|VA|×|VB |, where each entry is the correlation coefficient between projected values from the two models. We bin S̃ by layer pairs and average the absolute value of the scores in each bin (different models might encode the same information in different directions, so we use absolute value) to produce a matrix S ∈ RL×L, where L is the number of layers. Specifically, the average (absolute) correlation between vectors that come from layer ℓA in model A and layer ℓB in Model B is registered in entry (ℓA, ℓB) of S. Last, to obtain a one-to-one layer alignment, we use the Hungarian algorithm [Kuhn, 1955], which assigns exactly one layer from the first model to a layer from the second model. The algorithm’s objective is to maximize, given a similarity matrix S, the sum of similarities of the chosen pairs, such that each index in one model is matched with exactly one index in the other. We repeat this for all parameter groups (WQ,WK,WV,WO,K). Results and Discussion Figure 3 (left) shows the resulting alignment. Clearly, parameters from a certain layer in model A tend to align to the same layer in model B across all parameter groups. This suggests that different layers from different models that were trained separately (but with the same training objective and data) serve a similar function. As further evidence, we show that if not projected, the matching appears absolutely random in Figure §3 (right). We show the same results for other seed pairs as well in Appendix B.3. 5.2 ZERO-SHOT STITCHING Model stitching [Lenc and Vedaldi, 2015; Csiszárik et al., 2021; Bansal et al., 2021] is a relatively under-explored feature of neural networks, particularly in NLP. The idea is that different models, sometimes trained on different data and with different architectures, learn representations that can be aligned through a linear transformation, termed stitching. Representations correspond to hidden states , and thus one can learn a transformation matrix from one model’s hidden states to an equivalent hidden state in the other model. Here, we show that going through embedding space one can align the hidden states of two models, i.e., stitch, without training. Given two models, we want to find a linear stitching transformation to align their representation spaces. According to our theory, given a hidden state v ∈ Rd1 from model A, we can project it to the embedding space as vEA, where EA is its embedding matrix. Then, we can re-project to the feature space of model B, with E+B ∈ Re×d2 , where E + B is the Penrose-Moore pseudo-inverse of the embedding matrix EB .6 This transformation can be expressed as multiplication with the kernel KAB := EAE + B ∈ Rd1×d2 . We employ the above approach to take representations of a fine-tuned classifier, A, and stitch them on top of a model B that was only pretrained, to obtain a new classifier based on B. Experimental Design We use the 24-layer GPT-2 medium as model A and 12-layer GPT-2 base model trained in §4.3 as model B. We fine-tune the last three layers of model B on IMDB, as explained in §4.3. Stitching is simple and is performed as follows. Given the sequence of N hidden states HℓA ∈ RN×d1 at the output of layer ℓ of model A (ℓ is a hyperparameter), we apply the stitching layer, which multiplies the hidden states with the kernel, computing HℓAKAB . 
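A hedged sketch of this stitching computation is given below. It uses the Moore–Penrose pseudo-inverse for $E_B^+$; the input sentence and the layer index are placeholders of ours, and running B's fine-tuned top layers on the mapped states is omitted.

```python
# Sketch: map hidden states of model A (GPT-2 medium) into the space of model B (GPT-2 base)
# via the shared vocabulary. Layer choice and input text are illustrative assumptions.
import torch
from transformers import GPT2Model, GPT2Tokenizer

tok = GPT2Tokenizer.from_pretrained("gpt2")
model_a = GPT2Model.from_pretrained("gpt2-medium")   # d1 = 1024
model_b = GPT2Model.from_pretrained("gpt2")          # d2 = 768

with torch.no_grad():
    E_a = model_a.wte.weight.T                        # (d1, e), the paper's E_A
    E_b = model_b.wte.weight.T                        # (d2, e), the paper's E_B
    K_ab = E_a @ torch.linalg.pinv(E_b)               # (d1, d2) stitching kernel

    ids = tok("A surprisingly touching film.", return_tensors="pt").input_ids
    out = model_a(ids, output_hidden_states=True)
    h_a = out.hidden_states[18]                       # hidden state after layer 18 (arbitrary)
    h_b = h_a @ K_ab                                   # (1, N, d2): input for B's fine-tuned layers
    print(h_b.shape)
```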
This results in hidden states HB ∈ RN×d2 , used as input to the three fine-tuned layers from B. 6Since we are not interested in interpretation we use an exact right-inverse and not the transpose. Results and Discussion Stitching produces models with accuracies that are higher than random on IMDB evaluation set, but not consistently. Figure 4 shows the accuracy of stitched models against the layer index from model A over which stitching is performed. Out of 11 random seeds, three models obtained accuracy that is significantly higher than the baseline 50% accuracy, reaching an accuracy of roughly 70%, when stitching is done over the top layers. 6 RELATED WORK Interpreting Transformer is a broad area of research that has attracted much attention in recent years. A large body of work has focused on analyzing hidden representations, mostly through probing [Adi et al., 2016; Shi et al., 2016; Tenney et al., 2019; Rogers et al., 2020]. Voita et al. [2019a] used statistical tools to analyze the evolution of hidden representations throughout layers. Recently, Mickus et al. [2022] proposed to decompose the hidden representations into the contributions of different Transformer components. Unlike these works, we interpret parameters rather than the hidden representations. Another substantial effort has been to interpret specific network components. Previous work analyzed single neurons [Dalvi et al., 2018; Durrani et al., 2020], attention heads [Clark et al., 2019; Voita et al., 2019b], and feedforward values [Geva et al., 2020; Dai et al., 2021; Elhage et al., 2022]. While these works mostly rely on input-dependent neuron activations, we inspect “static” model parameters, and provide a comprehensive view of all Transformer components. Our work is most related to efforts to interpret specific groups of Transformer parameters. Cammarata et al. [2020] made observations about the interpretability of weights of neural networks. Elhage et al. [2021] analyzed 2-layer attention networks. We extend their analysis to multi-layer pre-trained Transformer models. Geva et al. [2020; 2022a;b] interpreted feedforward values in embedding space. We coalesce these lines of work and offer a unified interpretation framework for Transformers in embedding space. 7 DISCUSSION Our work has a few limitations that we care to highlight. First, it focuses on interpreting models through the vocabulary lens. While we have shown evidence for this, it does not preclude other factors from being involved in the computation process. Second, we used E′ = ET, but future research might find variants of E that improve performance. Last, we assume Transformer components can be projected to the embedding space with a single matrix multiplication, but this might depend on model training, e.g., in GPT-2 it involves a layer norm operation as explained in §4.2. Notwithstanding, we believe the benefits of our work overshadow its limitations. We provide a simple and efficient approach, which equips researchers with new tools to interpret Transformer models and relate them to one another. Apart from Elhage et al. [2021], there has been little work pursuing the embedding space approach, and we “sharpen” the tools they laid down and adjust them to existing pre-trained Transformers. Moreover, our framework allows us to view parameters from different models as residents of the same universal embedding space, where they can be compared in model-agnostic fashion. 
We demonstrate two applications of this observation (model alignment and stitching) and argue future work can yield many additional applications. A RETHINKING INTERPRETATION The process of interpreting a vector v in Geva et al. [2022b] proceeds in two steps: first the projection of the vector to the embedding space (vE); then, we use the list of the tokens that were assigned the largest values in the projected vector, i.e.: top-k(vE), as the interpretation of the projected vector. This is reasonable since (a) the most activated coordinates contribute the most when added to the residual stream, and (b) this matches how we eventually decode: we project to the embedding space and consider the top-1 token (or one of the few top tokens, when using beam search). In this work, we interpret inner products and matrix multiplications in the embedding space: given two vectors x, y ∈ Rd, their inner product xTy can be considered in the embedding space by multiplying with E and then by one of its right inverses (e.g., its pseudo-inverse E+ [Moore, 1920; Bjerhammar, 1951; Penrose, 1955]): xTy = xTEE+y = (xTE)(yE+T)T. Assume xE is interpretable in the embedding space, crudely meaning that it represents logits over vocabulary items. We expect y, which interacts with x, to also be interpretable in the embedding space. Consequently, we would like to take yE+T to be the projection of y. However, this projection does not take into account the subsequent interpretation using top-k. The projected vector yE+T might be harder to interpret in terms of its most activated tokens. To alleviate this problem, we need a different “inverse” matrix E′ that works well when considering the top-k operation. Formally, we want an E′ with the following “robustness” guarantee: keep-k(xE)Tkeep-k(yE′) ≈ xTy, where keep-k(v) is equal to v for coordinates whose absolute value is in the top-k, and zero elsewhere. This is a stronger notion of inverse – not only is EE′ ≈ I , but even when truncating the vector in the embedding space we can still reconstruct it with E′. We claim that ET is a decent instantiation of E′ and provide some empirical evidence. While a substantive line of work [Ethayarajh, 2019; Gao et al., 2019; Wang et al., 2020; Rudman et al., 2021] has shown that embedding matrices are not isotropic (an isotropic matrix E has to satisfy EET = αI for some scalar α), we show that it is isotropic enough to make ET a legitimate compromise. We randomly sample 300 vectors drawn from the normal distribution N (0, 1), and compute for every pair x, y the cosine similarity between xTy and keep-k(xE)Tkeep-k(yE′) for k = 1000, and then average over all pairs. We repeat this for E′ ∈ {E+T, E} and obtain a score of 0.10 for E+T, and 0.83 for E, showing the E is better under when using top-k. More globally, we compare E′ ∈ {E+T, E} for k ∈ {10, 50, 100, 200, 300, 500} with three distributions: - x, y drawn from the normal N (0, 1) distribution - x, y chosen randomly from the FF values - x, y drawn from hidden states along Transformer computations. In Figure 5 (Left) we show the results, where dashed lines represent E+ and solid lines represent ET. For small values of k (used for interpretation), ET is superior to E+ across all distributions. Interestingly, the hidden state distribution is the only distribution where E+ has similar performance to ET. Curiously, when looking at higher values of k the trend is reversed (k = {512, 1024, 2048, 4096, 10000, 15000, 20000, 30000}) - see Figure 5 (Right). 
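The sketch below shows one way to operationalize the comparison above; it is our reading of the protocol, and the sample size and the use of cosine similarity between the vectors of exact and approximated inner products are assumptions on our part.

```python
# Compare E' = E^T against the pseudo-inverse under the keep-k approximation.
import torch
from transformers import GPT2LMHeadModel

# Paper convention: E is (d, e); HuggingFace stores the embedding as (e, d), so transpose.
E = GPT2LMHeadModel.from_pretrained("gpt2-medium").transformer.wte.weight.T.detach()
d = E.shape[0]

def keep_k(v, k):
    out = torch.zeros_like(v)
    idx = torch.topk(v.abs(), k).indices
    out[idx] = v[idx]
    return out

def agreement(E_prime, k=1000, n=300):
    xs, ys = torch.randn(n, d), torch.randn(n, d)
    exact = (xs * ys).sum(dim=1)                           # x^T y for each sampled pair
    approx = torch.stack([keep_k(x @ E, k) @ keep_k(E_prime @ y, k)
                          for x, y in zip(xs, ys)])
    # Aggregate agreement as cosine similarity between the two length-n score vectors.
    return torch.nn.functional.cosine_similarity(exact, approx, dim=0).item()

with torch.no_grad():
    print("E^T: ", agreement(E.T))                         # E' = E^T
    print("pinv:", agreement(torch.linalg.pinv(E)))        # E' = Moore-Penrose pseudo-inverse
```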
This reversal reconciles our results with the findings that embedding matrices are not isotropic: as $k$ grows, $E^T$ indeed becomes an increasingly bad approximate right-inverse of the embedding matrix. The only distribution that keeps high performance with $E^T$ is the hidden state distribution, which is an interesting future direction of investigation.
B ADDITIONAL MATERIAL
B.1 CORRESPONDING PARAMETER PAIRS ARE RELATED
We define the following metric, applied to vectors after projecting them into the embedding space:
$$\text{Sim}_k(\hat{x}, \hat{y}) = \frac{|\text{top-}k(\hat{x}) \cap \text{top-}k(\hat{y})|}{|\text{top-}k(\hat{x}) \cup \text{top-}k(\hat{y})|},$$
where top-$k(v)$ is the set of $k$ top activated indices in the vector $v$ (which correspond to tokens in the embedding space). This metric is the Jaccard index [Jaccard, 1912] applied to the top-$k$ tokens from each vector. In Figure 6, Left, we demonstrate that corresponding FF key and value vectors are more similar (in embedding space) than two random key and value vectors. In Figure 6, Right, we show a similar result for attention value and output vectors. In Figure 6, Bottom, the same analysis is done for attention query and key vectors. This shows that there is a much higher-than-chance relation between corresponding FF keys and values (and the same for attention values and outputs).
B.2 FINAL PREDICTION AND PARAMETERS
We show that the final prediction of the model is correlated in embedding space with the most activated parameters from each layer. This implies that these objects are germane to the analysis of the final prediction in the embedding space, which in turn suggests that the embedding space is a viable choice for interpreting these vectors. Figure 7 shows that, just like in §4.2, correspondence is better when hidden states are not randomized, suggesting these parameter interpretations have an impact on the final prediction.
B.3 PARAMETER ALIGNMENT PLOTS FOR ADDITIONAL MODEL PAIRS
Alignment in embedding space of layers of pairs of BERT models trained with different random seeds, for additional model pairs.
[Figures: alignment heatmaps for the parameter groups K, V, WK, WQ, WV, WO, shown for seed pairs 1 vs 2, 2 vs 3, 3 vs 4, and 4 vs 5.]
C EXAMPLE CASES
C.1 VALUE-OUTPUT MATRICES
Below we show value-output pairs from different heads of GPT-2 Medium. For each head, we show the 50 pairs with the largest value in the $e \times e$ transition matrix. There are 384 attention heads in GPT-2 medium from which we manually choose a subset.
Throughout the section some lists were marked with asterisks indicating the way this particular list was created: * - pairs of the form (x, x) were excluded from the list C.1.1 LOW LEVEL LANGUAGE MODELING Layer 21 Head 7* (’FN’, ’NF’), (’ Ramos’, ’Ram’), (’ Hughes’, ’Hug’), (’GR’, ’gran’), (’NF’, ’FN’), (’CL’, ’CLA’), (’ McCain’, ’McC’), (’ Marshall’, ’Marsh’), (’Hug’, ’ Hughes’), (’ Tanner’, ’Tan’), (’NH’, ’nih’), (’NR’, ’NRS’), (’Bow’, ’ Bowman’), (’Marsh’, ’ Marshall’), (’ Jacobs’, ’Jac’), (’ Hayes’, ’Hay’), (’Hay’, ’ Hayes’), (’ McCorm’, ’McC’), (’NR’, ’NI’), (’ Dawson’, ’ sidx’), (’Tan’, ’ Tanner’), (’GR’, ’gra’), (’jac’, ’JA’), (’zo’, ’zos’), (’NF’, ’NI’), (’ McCull’, ’McC’), (’Jac’, ’ Jacobs’), (’ Beet’, ’ Beetle’), (’FG’, ’GF’), (’ja’, ’jas’), (’ Wilkinson’, ’Wil’), (’Ram’, ’ Ramos’), (’GR’, ’GRE’), (’FN’, ’ NF’), (’McC’, ’ McCorm’), (’ Scarborough’, ’Scar’), (’Ba’, ’ Baal’), (’FG’, ’FP’), (’FN’, ’FH’), (’Gar’, ’ Garfield’), (’jac’, ’jas’), (’nut’, ’nuts’), (’ Wis’, ’WI’), (’ Vaughan’, ’ Vaughn’), (’PF’, ’FP’), (’RN’, ’RNA’), (’jac’, ’ Jacobs’), (’FN’, ’FM’), (’Kn’, ’ Knox’), (’nic’, ’NI’) Layer 19 Head 13 (guessing the first letter/consonant of the word) (’senal’, ’ R’), # arsenal (’senal’, ’R’), (’vernment’, ’ G’), # government (’ Madness’, ’ M’), (’ Mayhem’, ’ M’), (’nesday’, ’ W’), # wednesday (’vernment’, ’G’), (’ Madness’, ’M’), (’lace’, ’ N’), # necklace (’nesday’, ’W’), (’senal’, ’Rs’), (’vernment’, ’ g’), (’farious’, ’ N’), # nefarious (’eneg’, ’ C’), (’senal’, ’ r’), (’ruary’, ’ F’), # february (’senal’, ’RIC’), (’ondo’, ’ R’), (’ Mandela’, ’ N’), # nelson (’ Mayhem’, ’M’), (’senal’, ’ RD’), (’estine’, ’ C’), (’vernment’, ’Gs’), (’senal’, ’RF’), (’esis’, ’ N’), (’Reviewed’, ’ N’), (’arette’, ’ C’), # cigarette (’rome’, ’ N’), (’theless’, ’ N’), # nonetheless (’lace’, ’N’), (’DEN’, ’ H’), (’ versa’, ’ V’), (’bably’, ’ P’), # probably (’vernment’, ’GF’), (’vernment’, ’g’), (’vernment’, ’GP’), (’ornia’, ’ C’), # california (’ilipp’, ’ F’), (’umbered’, ’ N’), (’arettes’, ’ C’), (’senal’, ’RS’), (’onsense’, ’ N’), (’senal’, ’RD’), (’senal’, ’RAL’), (’uci’, ’ F’), (’ondo’, ’R’), (’senal’, ’ RI’), (’iday’, ’ H’), # holiday (’senal’, ’ Rx’), (’odor’, ’ F’) Layer 20 Head 9 (’ behalf’, ’On’), (’ behalf’, ’ On’), (’ behalf’, ’ on’), (’ periods’, ’during’), (’ bounds’, ’within’), (’ envelope’, ’ inside’), (’door’, ’outside’), (’ envelope’, ’inside’), (’ regime’, ’ Under’), (’ periods’, ’ during’), (’lihood’, ’ LIKE’), (’ occasions’, ’ on’), (’ regime’, ’Under’), (’door’, ’inside’), (’period’, ’during’), (’lihood’, ’Like’), (’ periods’, ’ During’), (’ envelope’, ’Inside’), (’ sake’, ’for’), (’ doors’, ’ inside’), (’ regime’, ’ under’), (’ behalf’, ’ ON’), (’ purposes’, ’for’), (’ occasions’, ’On’), (’ doors’, ’inside’), (’ basis’, ’ on’), (’ regimes’, ’ Under’), (’doors’, ’outside’), (’ Osc’, ’inside’), (’ periods’, ’During’), (’door’, ’ inside’), (’ regime’, ’ UNDER’), (’ regimes’, ’ under’), (’ regimes’, ’Under’), (’doors’, ’inside’), (’zx’, ’inside’), (’ period’, ’during’), (’ascript’, ’inside’), (’door’, ’Inside’), (’ occasions’, ’ On’), (’ysc’, ’BuyableInstoreAndOnline’) , (’ envelope’, ’ Inside’), (’ pauses’, ’during’), (’ regime’, ’under’), (’ occasion’, ’ on’), (’ doors’, ’outside’), (’ banner’, ’ UNDER’), (’ envelope’, ’within’), (’abouts’, ’ here’), (’ duration’, ’during’) Layer 22 Head 5 (named entities, mostly made of two parts) (’enegger’, ’ Schwartz’), (’shire’, ’ Lincoln’), (’xual’, ’Weiss’), (’nery’, ’ Nun’), (’ Qiao’, ’ Huang’), (’schild’, ’ Schwarz’), (’oslov’, ’ 
Czech’), (’ Rica’, ’ Costa’), (’ Qiao’, ’ Qiao’), (’xual’, ’ RW’), (’ Nadu’, ’ Tamil’), (’ Nadu’, ’Tam’), (’shire’, ’ Baldwin’), (’swick’, ’ Hoff’), (’xual’, ’ Weiss’), (’ Takeru’, ’ Yamato’), (’xual’, ’ Grassley’), (’swick’, ’ Schwartz’), (’enegger’, ’ Schiff’), (’enegger’, ’Weiss’), (’xual’, ’RW’), (’shire’, ’ Nottingham’), (’shire’, ’ Barrett’), (’arest’, ’ Buch’), (’ Fei’, ’ Fei’), (’miah’, ’Jere’), (’swick’, ’ Owl’), (’ufact’, ’ Swanson’), (’akuya’, ’ Tanaka’), (’ Sachs’, ’ Feinstein’), (’enegger’, ’ Wagner’), (’otle’, ’Roberts’), (’shire’, ’ Neville’), (’oslov’, ’ Prague’), (’sburg’, ’ Hammond’), (’ ILCS’, ’ Dunham’), (’ Malfoy’, ’ Draco’), (’yip’, ’Billy’), (’iversal’, ’ Monroe’), (’iversal’, ’Murray’), (’Yang’, ’Yang’), (’akuya’, ’ Krishna’), (’schild’, ’ Schwartz’), (’tz’, ’ Rabb’), (’shire’, ’gow’), (’enegger’, ’ Feldman’), (’cair’, ’ Chou’), (’enegger’, ’ Duffy’), (’enegger’, ’Sch’), (’ Jensen’, ’ Jensen’) Layer 22 Head 13 (’ Additionally’, ’ the’), (’ Unfortunately’, ’ the’), (’ Nevertheless’, ’ the’), (’ Sadly’, ’ the’), (’ However’, ’ the’), (’ Furthermore’, ’ the’), (’ Additionally’, ’,’), (’ During’, ’ the’), (’ Moreover’, ’ the’), (’ Whilst’, ’ the’), (’ Since’, ’ the’), (’ Unfortunately’, ’,’), (’ Additionally’, ’-’), (’ Perhaps’, ’ the’), (’ Sadly’, ’,’), (’ Throughout’, ’ the’), (’ Nevertheless’, ’,’), (’ While’, ’ the’), (’ However’, ’,’), (’ Although’, ’ the’), (’ There’, ’ the’), (’ Furthermore’, ’,’), (’ Eventually’, ’ the’), (’ Meanwhile’, ’ the’), (’ Hopefully’, ’ the’), (’ Nevertheless’, ’-’), (’ During’, ’,’), (’ Regardless’, ’ the’), (’ However’, ’-’), (’ Whilst’, ’,’), (’ Additionally’, ’ and’), (’ Moreover’, ’,’), (’ Unfortunately’, ’-’), (’ They’, ’ the’), (’ Sadly’, ’-’), (’ Whereas’, ’ the’), (’ Additionally’, ’ a’), (’ Furthermore’, ’-’), (’ Unlike’, ’ the’), (’ Typically’, ’ the’), (’ Since’, ’,’), (’ Normally’, ’ the’), (’ Perhaps’, ’,’), (’ During’, ’-’), (’ Throughout’, ’,’), (’ While’, ’,’), (’ Nevertheless’, ’ a’), (’ Interestingly’, ’ the’), (’ Unfortunately’, ’ and’), (’ Unfortunately’, ’ a’) C.1.2 GENDER Layer 18 Head 1 (’ Marie’, ’women’), (’ Marie’, ’ actresses’), (’ Anne’, ’women’), (’ Anne’, ’Women’), (’ Marie’, ’woman’), (’ Marie’, ’Women’), (’ Anne’, ’woman’), (’ Marie’, ’Woman’), (’ Anne’, ’ actresses’), (’ Marie’, ’ heroine’), (’Jane’, ’Women’), (’ Anne’, ’ heroine’), (’Jane’, ’women’), (’ actresses’, ’Women’), (’ Anne’, ’Woman’), (’ Esther’, ’Women’), (’ Esther’, ’women’), (’ Marie’, ’girls’), (’ Anne’, ’Mrs’), (’ Marie’, ’ actress’), (’ actresses’, ’women’), (’Jane’, ’Woman’), (’ Marie’, ’ girls’), (’Jane’, ’ actresses’), (’Anne’, ’Woman’), (’ Marie’, ’Girls’), (’Anne’, ’women’), (’ Anne’, ’Girls’), (’ actresses’, ’Woman’), (’ Marie’, ’ Women’), (’ Anne’, ’ Women’), (’ Anne’, ’ girls’), (’ Anne’, ’girl’), (’Anne’, ’Women’), (’Women’, ’Woman’), (’ Anne’, ’girls’), (’Anne’, ’ actresses’), (’ Michelle’, ’women’), (’ Marie’, ’ Actress’), (’ Marie’, ’girl’), (’ Anne’, ’ Feminist’), (’ Marie’, ’ women’), (’ Devi’, ’Women’), (’ Elizabeth’, ’Women’), (’ Anne’, ’ actress’), (’Anne’, ’Mrs’), (’Answer’, ’answered’), (’Anne’, ’woman’), (’maid’, ’Woman’), (’Marie’, ’women’) C.1.3 GEOGRAPHY Layer 16 Head 6* (’ Mumbai’, ’ Chennai’), (’ Mumbai’, ’India’), (’ Chennai’, ’ Mumbai’), (’ Tasmania’, ’ Queensland’), (’ Rahul’, ’India’), (’ Gujar’, ’India’), (’ Bangalore’, ’ Chennai’), (’Scotland’, ’England’), (’ Kerala’, ’ Chennai’), (’ Mumbai’, ’ Delhi’), (’Scotland’, ’Britain’), (’ Mumbai’, ’ Bangalore’), (’India’, ’Pakistan’), (’Ireland’, ’Scotland’), (’ 
Bangalore’, ’ Mumbai’), (’ Chennai’, ’ Bangalore’), (’ Gujar’, ’ Aadhaar’), (’ Maharashtra’, ’ Mumbai’), (’ Gujarat’, ’ Maharashtra’), (’ Gujar’, ’ Gujarat’), (’Australia’, ’Australian’), (’ Gujarat’, ’India’), (’ Gujar’, ’ Rahul’), (’ Mumbai’, ’ Maharashtra’), (’England’, ’Britain’), (’ Chennai’, ’India’), (’ Bombay’, ’ Mumbai’), (’ Kerala’, ’ Tamil’), (’ Mumbai’, ’ Hindi’), (’ Tasman’, ’ Tasmania’), (’India’, ’ Mumbai’), (’ Gujar’, ’ Hindi’), (’ Gujar’, ’ Maharashtra’), (’Austral’, ’ Australians’), (’ Kerala’, ’ Maharashtra’), (’ Bangalore’, ’India’), (’ Kerala’, ’India’), (’ Bombay’, ’India’), (’Austral’, ’Australia’), (’India’, ’ Aadhaar’), (’ Mumbai’, ’ Sharma’), (’Austral’, ’Australian’), (’ Kerala’, ’ Mumbai’), (’England’, ’Scotland’), (’ Gujar’, ’ Mumbai’), (’ Mumbai’, ’ Rahul’), (’ Tasman’, ’ Queensland’), (’ Chennai’, ’ Tamil’), (’ Maharashtra’, ’ Gujarat’), (’ Modi’, ’India’) Layer 18 Head 9 (’ Winnipeg’, ’ Winnipeg’), (’ Edmonton’, ’ Winnipeg’), (’ Winnipeg’, ’ Ottawa’), (’ Calgary’, ’ Winnipeg’), (’ Ottawa’, ’ Winnipeg’), (’ Winnipeg’, ’ Calgary’), (’ Winnipeg’, ’CBC’), (’ Winnipeg’, ’Canada’), (’ Canberra’, ’ Canberra’), (’ RCMP’, ’ Winnipeg’), (’ Ottawa’, ’CBC’), (’ Winnipeg’, ’Canadian’), (’Toronto’, ’ Winnipeg’), (’ Winnipeg’, ’ Canadians’), (’ Edmonton’, ’ Ottawa’), (’ Winnipeg’, ’ RCMP’), (’ Winnipeg’, ’ Edmonton’), (’ Ottawa’, ’Canadian’), (’Canadian’, ’ Winnipeg’), (’Toronto’, ’ Calgary’), (’ Winnipeg’, ’ Quebec’), (’ Winnipeg’, ’ Canad’), (’Toronto’, ’Canadian’), (’ Edmonton’, ’ Edmonton’), (’ Ottawa’, ’ Calgary’), (’ Leafs’, ’ Winnipeg’), (’ Edmonton’, ’ Calgary’), (’ Ottawa’, ’Canada’), (’ Calgary’, ’Canadian’), (’Toronto’, ’Canada’), (’ Calgary’, ’ Calgary’), (’Ott’, ’ Winnipeg’), (’ Winnipeg’, ’ Saskatchewan’), (’ Winnipeg’, ’ Canadian’), (’ Ottawa’, ’ Ottawa’), (’ Calgary’, ’ Ottawa’), (’ Winnipeg’, ’ Manitoba’), (’ Canadians’, ’ Winnipeg’), (’ Winnipeg’, ’ Canada’), (’ RCMP’, ’ Calgary’), (’Toronto’, ’ Manitoba’), (’Toronto’, ’ Ottawa’), (’CBC’, ’ Winnipeg’), (’Canadian’, ’Canada’), (’ Edmonton’, ’Canadian’), (’ RCMP’, ’ Ottawa’), (’ Winnipeg’, ’ipeg’), (’Toronto’, ’Toronto’), (’Canadian’, ’ Calgary’), (’ Ottawa’, ’ Canadians’) Layer 16 Head 2* (’ Australians’, ’Austral’), (’Austral’, ’Australia’), (’Austral’, ’ Canberra’), (’ Canberra’, ’Austral’), (’ Edmonton’, ’ Winnipeg’), (’Austral’, ’Australian’), (’ Edmonton’, ’ Alberta’), (’ Australians’, ’Australia’), (’Austral’, ’ Australians’), (’ovych’, ’Ukraine’), (’ Canad’, ’ Quebec’), (’ Australians’, ’Australian’), (’ Manitoba’, ’ Winnipeg’), (’ Winnipeg’, ’ Manitoba’), (’Canada’, ’Canadian’), (’ Bulgar’, ’Moscow’), (’ Edmonton’, ’ Manitoba’), (’Austral’, ’berra’), (’Australian’, ’Austral’), (’ovych’, ’ Ukrainians’), (’ Canadians’, ’Canada’), (’ Australians’, ’ Canberra’), (’Canadian’, ’Canada’), (’ovych’, ’ Yanukovych’), (’ Trudeau’, ’Canada’), (’ Bulgar’, ’ Dmitry’), (’Austral’, ’ Australia’), (’ Canad’, ’ Mulcair’), (’ Canberra’, ’berra’), (’oglu’, ’Turkish’), (’Canada’, ’udeau’), (’ Oilers’, ’ Edmonton’), (’ Canberra’, ’Australia’), (’ Edmonton’, ’Canada’), (’ Calgary’, ’ Edmonton’), (’ Calgary’, ’ Alberta’), (’ Trudeau’, ’udeau’), (’ Edmonton’, ’ Calgary’), (’ Trudeau’, ’Canadian’), (’ Canberra’, ’Australian’), (’ Canucks’, ’ Vancouver’), (’Australian’, ’Australia’), (’ Fraser’, ’ Vancouver’), (’ Edmonton’, ’Canadian’), (’elaide’, ’Austral’), (’ Braz’, ’Tex’), (’ RCMP’, ’Canada’), (’sov’, ’Moscow’), (’ Bulgar’, ’Russia’), (’Canada’, ’ Canadians’) Layer 21 Head 12* (’ Indones’, ’ Indonesian’), (’ Nguyen’, ’ 
Vietnamese’), (’ Jakarta’, ’ Indonesian’), (’ Indonesia’, ’ Indonesian’), (’oglu’, ’Turkish’), (’ Indones’, ’ Indonesia’), (’ Indones’, ’ Jakarta’), (’ Koreans’, ’ Korean’), (’oglu’, ’ Turkish’), (’ Taiwanese’, ’ Taiwan’), (’ Nguyen’, ’ Thai’), (’Brazil’, ’ Brazilian’), (’ Indonesia’, ’ Indones’), (’ Taiwanese’, ’Tai’), (’oglu’, ’ Istanbul’), (’ Indonesian’, ’ Indones’), (’ Jakarta’, ’ Indones’), (’ Nguyen’, ’ Laos’), (’ Sloven’, ’ Slovenia’), (’ Korean’, ’ Koreans’), (’ Nguyen’, ’ Cambod’), (’zzi’, ’Italy’), (’Tai’, ’ Taiwanese’), (’ Jakarta’, ’ Indonesia’), (’ Indonesian’, ’ Indonesia’), (’ Bulgaria’, ’ Bulgarian’), (’ Icelandic’, ’ Iceland’), (’ Koreans’, ’ Korea’), (’ Brazilian’, ’Brazil’), (’ Bulgar’, ’ Bulgarian’), (’ Malays’, ’ Malaysian’), (’oglu’, ’ Ankara’), (’ Bulgarian’, ’ Bulgaria’), (’ Indones’, ’ Malays’), (’ Tai’, ’ Taiwanese’), (’oglu’, ’Turkey’), (’ Janeiro’, ’Brazil’), (’zzi’, ’Italian’), (’ Malays’, ’ Kuala’), (’ Fuk’, ’Japanese’), (’ Indonesian’, ’ Jakarta’), (’ Taiwan’, ’ Taiwanese’), (’oglu’, ’ Erdogan’), (’ Nguyen’, ’ Viet’), (’ Filipino’, ’ Philippine’), (’ Indonesia’, ’ Jakarta’), (’ Jong’, ’ Koreans’), (’ Duterte’, ’ Filipino’), (’ Azerbai’, ’ Azerbaijan’), (’ Bulgarian’, ’ Bulgar’) C.1.4 BRITISH SPELLING Layer 19 Head 4 (’ Whilst’, ’ realise’), (’ Whilst’, ’ Whilst’), (’ Whilst’, ’ realised’), (’ Whilst’, ’ organise’), (’ Whilst’, ’ recognise’), (’ Whilst’, ’ civilisation’), (’ Whilst’, ’ organisation’), (’ Whilst’, ’ whilst’), (’ Whilst’, ’ organising’), (’ Whilst’, ’ organised’), (’ Whilst’, ’ organis’), (’ Whilst’, ’ util’), (’ Whilst’, ’ apologise’), (’ Whilst’, ’ emphas’), (’ Whilst’, ’ analyse’), (’ Whilst’, ’ organisations’), (’ Whilst’, ’ recognised’), (’ Whilst’, ’ flavours’), (’ Whilst’, ’ colour’), (’ Whilst’, ’colour’), (’ Whilst’, ’ Nasa’), (’ Whilst’, ’ Nato’), (’ Whilst’, ’ analys’), (’ Whilst’, ’ flavour’), (’ Whilst’, ’ colourful’), (’ Whilst’, ’ colours’), (’ organising’, ’ realise’), (’ Whilst’, ’ behavioural’), (’ Whilst’, ’ coloured’), (’ Whilst’, ’ learnt’), (’ Whilst’, ’ favourable’), (’ Whilst’, ’isation’), (’ Whilst’, ’ programmes’), (’ organis’, ’ realise’), (’ Whilst’, ’ authorised’), (’ Whilst’, ’ practise’), (’ Whilst’, ’ criticised’), (’ Whilst’, ’ organisers’), (’ organising’, ’ organise’), (’ Whilst’, ’ analysed’), (’ Whilst’, ’ programme’), (’ Whilst’, ’ behaviours’), (’ Whilst’, ’ humour’), (’ Whilst’, ’isations’), (’ Whilst’, ’ tyres’), (’ Whilst’, ’ aluminium’), (’ organised’, ’ realise’), (’ Whilst’, ’ favour’), (’ Whilst’, ’ ageing’), (’ organis’, ’ organise’) C.1.5 RELATED WORDS Layer 13 Head 8* (’ mirac’, ’ miraculous’), (’ mirac’, ’ miracle’), (’ nuanced’, ’ nuance’), (’Better’, ’ smarter’), (’ equitable’, ’ healthier’), (’ liberating’, ’ liberated’), (’ unaffected’, ’ untouched’), (’ equitable’, ’ unbiased’), (’ inconsistent’, ’failed’), (’ emanc’, ’ liberated’), (’ equitable’, ’ humane’), (’ liberated’, ’ liberating’), (’ incompatible’, ’failed’), (’ mirac’, ’ miracles’), (’ consensual’, ’ peacefully’), (’ uncond’, ’ unconditional’), (’ unexpected’, ’ unexpectedly’), (’ unconditional’, ’ untouched’), (’Better’, ’ healthier’), (’ unexpectedly’, ’ unexpected’), (’ graceful’, ’ peacefully’), (’ emanc’, ’ emancipation’), (’ effortlessly’, ’ seamlessly’), (’ honorable’, ’ peacefully’), (’ unconditional’, ’ uncond’), (’ rubbish’, ’ excuses’), (’ emanc’, ’ liberating’), (’ equitable’, ’ peacefully’), (’ Feather’, ’ gracious’), (’ emancipation’, ’ liberated’), (’ nuanced’, ’ nuances’), (’icable’, ’ avoids’), (’ liberated’, ’ 
freeing’), (’ liberating’, ’ freeing’), (’ inconsistent’, ’ lousy’), (’ lousy’, ’failed’), (’ unconditional’, ’ unaffected’), (’ equitable’, ’ivable’), (’ equitable’, ’Honest’), (’erning’, ’ principled’), (’ survival’, ’surv’), (’ocre’, ’ lackluster’), (’ equitable’, ’ liberating’), (’Bah’, ’Instead’), (’ incompatible’, ’ inappropriate ’), (’ emancipation’, ’ emanc’), (’ unchanged’, ’ unaffected’), (’ peacefully’, ’ peaceful’), (’ equitable’, ’ safer’), (’ unconditional’, ’ uninterrupted ’) Layer 12 Head 14* (’ perished’, ’ died’), (’ perished’, ’ dies’), (’ testify’, ’ testifying’), (’ intervened’, ’ interven’), (’ advises’, ’ advising’), (’ disbanded’, ’ disband’), (’lost’, ’ perished’), (’ died’, ’ perished’), (’ applauded’, ’ applaud’), (’ dictates’, ’ dictate’), (’ prev’, ’ prevailed’), (’ advise’, ’ advising’), (’shed’, ’thood’), (’Reviewed’, ’orsi’), (’ dies’, ’ perished’), (’published’, ’ publishes’), (’ prevailed’, ’ prevail’), (’ died’, ’ dies’), (’ testified’, ’ testifying’), (’ testifying’, ’ testify’), (’ dictates’, ’ governs’), (’ complicit’, ’ complicity’), (’ dictated’, ’ dictate’), (’enough’, ’CHO’), (’ skelet’, ’independence’), (’ Recomm’, ’ prescribe’), (’essential’, ’ perished’), (’noticed’, ’CHO’), (’avorable’, ’ approving’), (’ perish’, ’ perished’), (’ overseeing’, ’ oversee’), (’ skelet’, ’shed’), (’EY’, ’chart’), (’ presiding’, ’ overseeing’), (’ fundament’, ’pees’), (’ sanction’, ’appro’), (’ prevail’, ’ prevailed’), (’ governs’, ’ regulates’), (’tails’, ’shed’), (’ Period’, ’chart’), (’lihood’, ’hower’), (’ prev’, ’ prevail’), (’ aids’, ’helps’), (’ dictated’, ’ dict’), (’ dictated’, ’ dictates’), (’ Dise’, ’itta’), (’REC’, ’CHO’), (’exclusive’, ’ORTS’), (’ Helpful’, ’helps’), (’bart’, ’ciples’) Layer 14 Head 1* (’ misunderstand’, ’ incorrectly’) , (’ Proper’, ’ properly’), (’ inaccur’, ’ incorrectly’), (’ misunderstand’, ’ wrongly’), (’ misinterpret’, ’ incorrectly’), (’ incorrect’, ’ incorrectly’), (’ mistakes’, ’ incorrectly’), (’ misunderstanding’, ’ incorrectly’), (’ proper’, ’ properly’), (’fail’, ’ incorrectly’), (’ faulty’, ’ incorrectly’), (’ misrepresent’, ’ incorrectly’), (’ failing’, ’ fails’), (’ inaccurate’, ’ incorrectly’), (’ errors’, ’ incorrectly’), (’ harmful’, ’ Worse’), (’ misunderstand’, ’ wrong’), (’ misunderstand’, ’ improperly’), (’wrong’, ’ incorrectly’), (’ harmful’, ’ incorrectly’), (’ mistake’, ’ incorrectly’), (’ mis’, ’ incorrectly’), (’fail’, ’ fails’), (’ detrimental’, ’ Worse’), (’ rightful’, ’ properly’), (’ misunderstand’, ’ inappropriately’), (’ harmful’, ’ unnecessarily’), (’ neglect’, ’ unnecessarily’), (’ correctly’, ’ properly’), (’ Worst’, ’ Worse’), (’ failure’, ’ fails’), (’ satisfactory’, ’ adequately’), (’ defective’, ’ incorrectly’), (’ misunderstand’, ’ mistakenly’), (’ harming’, ’ Worse’), (’ mishand’, ’ incorrectly’), (’adequ’, ’ adequately’), (’ misuse’, ’ incorrectly’), (’Failure’, ’ fails’), (’ hurts’, ’ Worse’), (’ misunderstand’, ’wrong’), (’ mistakenly’, ’ incorrectly’), (’ failures’, ’ fails’), (’ adequate’, ’ adequately’), (’ properly’, ’ correctly’), (’ hurting’, ’ Worse’), (’ Proper’, ’ correctly’), (’ fail’, ’ fails’), (’ mistaken’, ’ incorrectly’), (’ harming’, ’ adversely’) Layer 14 Head 13* (’ editors’, ’ editorial’), (’ broadcasters’, ’ broadcasting’) , (’ broadcasting’, ’ broadcasts’), (’ broadcast’, ’ broadcasts’), (’ Broadcasting’, ’ broadcasters’) , (’ editors’, ’ Editorial’), (’ broadcasters’, ’ broadcast’), (’ Broadcasting’, ’ broadcast’), (’ lectures’, ’ lecture’), (’ Broadcast’, ’ 
broadcasting’), (’ broadcasters’, ’ broadcaster’), (’ broadcasters’, ’ broadcasts’), (’ Publishers’, ’ publishing’), (’ broadcasting’, ’ broadcast’), (’ broadcasters’, ’ Broadcasting’) , (’ Publishers’, ’ Publishing’), (’ lecture’, ’ lectures’), (’ Editors’, ’ editorial’), (’ broadcast’, ’ broadcasting’), (’ Broadcasting’, ’ broadcasts’), (’ broadcasting’, ’ broadcasters’) , (’ journalism’, ’ journalistic’), (’reports’, ’Journal’), (’ Broadcast’, ’ Broadcasting’), (’ Publishers’, ’Publisher’), (’azeera’, ’ Broadcasting’), (’Reporting’, ’Journal’), (’ journalistic’, ’ journalism’), (’ Broadcasting’, ’ broadcaster’), (’ broadcasting’, ’ broadcaster’), (’ broadcaster’, ’ broadcasting’), (’ editors’, ’ publication’), (’ journalism’, ’journal’), (’ Journalists’, ’Journal’), (’ documentary’, ’ documentaries’) , (’ filming’, ’ filmed’), (’ publishers’, ’ publishing’), (’ journalism’, ’Journal’), (’ Broadcast’, ’ broadcasts’), (’ broadcast’, ’ broadcasters’), (’ articles’, ’Journal’), (’ reporting’, ’reports’), (’ manuscripts’, ’ manuscript’), (’ publish’, ’ publishing’), (’azeera’, ’ broadcasters’), (’ Publishers’, ’ publication’), (’ Publishers’, ’ publications’), (’ newspapers’, ’ Newsp’), (’ Broadcast’, ’ broadcasters’), (’ Readers’, ’Journal’) C.2 QUERY-KEY MATRICES Layer 22 Head 1 (’ usual’, ’ usual’), (’ occasional’, ’ occasional’), (’ aforementioned’, ’ aforementioned’), (’ general’, ’ usual’), (’ usual’, ’ slightest’), (’agn’, ’ealous’), (’ traditional’, ’ usual’), (’ free’, ’amina’), (’ major’, ’ major’), (’ frequent’, ’ occasional’), (’ generous’, ’ generous’), (’ free’, ’lam’), (’ regular’, ’ usual’), (’ standard’, ’ usual’), (’ main’, ’ usual’), (’ complete’, ’ Finished’), (’ main’, ’liest’), (’ traditional’, ’ traditional’), (’ latest’, ’ aforementioned’), (’ current’, ’ aforementioned’), (’ normal’, ’ usual’), (’ dominant’, ’ dominant’), (’ free’, ’ministic’), (’ brief’, ’ brief’), (’ biggest’, ’liest’), (’usual’, ’ usual’), (’ rash’, ’ rash’), (’ regular’, ’ occasional’), (’ specialized’, ’ specialized’), (’ free’, ’iosis’), (’ free’, ’hero’), (’ specialty’, ’ specialty’), (’ general’, ’iosis’), (’ nearby’, ’ nearby’), (’ best’, ’liest’), (’ officially’, ’ formal’), (’ immediate’, ’mediate’), (’ special’, ’ ultimate’), (’ free’, ’otropic’), (’ rigorous’, ’ comparative’), (’ actual’, ’ slightest’), (’ complete’, ’ comparative’), (’ typical’, ’ usual’), (’ modern’, ’ modern’), (’ best’, ’ smartest’), (’ free’, ’ free’), (’ highest’, ’ widest’), (’ specialist’, ’ specialist’), (’ appropriate’, ’ slightest’), (’ usual’, ’liest’) Layer 0 Head 9 (’59’, ’27’), (’212’, ’39’), (’212’, ’38’), (’217’, ’39’), (’37’, ’27’), (’59’, ’26’), (’54’, ’88’), (’156’, ’39’), (’212’, ’79’), (’59’, ’28’), (’57’, ’27’), (’212’, ’57’), (’156’, ’29’), (’36’, ’27’), (’217’, ’79’), (’59’, ’38’), (’63’, ’27’), (’72’, ’39’), (’57’, ’26’), (’57’, ’34’), (’59’, ’34’), (’156’, ’27’), (’91’, ’27’), (’156’, ’38’), (’63’, ’26’), (’59’, ’25’), (’138’, ’27’), (’217’, ’38’), (’72’, ’27’), (’54’, ’27’), (’36’, ’29’), (’72’, ’26’), (’307’, ’39’), (’37’, ’26’), (’217’, ’57’), (’37’, ’29’), (’54’, ’38’), (’59’, ’29’), (’37’, ’28’), (’307’, ’38’), (’57’, ’29’), (’63’, ’29’), (’71’, ’27’), (’138’, ’78’), (’59’, ’88’), (’89’, ’27’), (’561’, ’79’), (’212’, ’29’), (’183’, ’27’), (’54’, ’29’) Layer 17 Head 6* (’ legally’, ’ legal’), (’ legal’, ’ sentencing’), (’ legal’, ’ arbitration’), (’ boycot’, ’ boycott’), (’ legal’, ’ criminal’), (’ legal’, ’ Judicial’), (’ legal’, ’ rulings’), (’ judicial’, ’ sentencing’), (’ marketing’, ’ 
advertising’), (’ legal’, ’ confidential’), (’ protesting’, ’ protest’), (’ recruited’, ’ recruit’), (’ recruited’, ’ recruits’), (’ judicial’, ’ criminal’), (’ legal’, ’ exemptions’), (’ demographics’, ’ demographic’), (’ boycott’, ’ boycot’), (’ sentencing’, ’ criminal’), (’ recruitment’, ’ recruits’), (’ recruitment’, ’ recruit’), (’ Constitutional’, ’ sentencing’) , (’ Legal’, ’ sentencing’), (’ constitutional’, ’ sentencing’) , (’ legal’, ’ subpoena’), (’ injury’, ’ injuries’), (’ FOIA’, ’ confidential’), (’ legal’, ’ licenses’), (’ donation’, ’ donations’), (’ disclosure’, ’ confidential’), (’ negotiation’, ’ negotiating’), (’ Judicial’, ’ legal’), (’ legally’, ’ criminal’), (’ legally’, ’ confidential’), (’ legal’, ’ jur’), (’ legal’, ’ enforcement’), (’ legal’, ’ lawyers’), (’ legally’, ’ enforcement’), (’ recruitment’, ’ recruiting’), (’ recruiting’, ’ recruit’), (’ criminal’, ’ sentencing’), (’ legal’, ’ attorneys’), (’ negotiations’, ’ negotiating’), (’ legally’, ’ arbitration’), (’ recruited’, ’ recruiting’), (’ legally’, ’ exemptions’), (’ legal’, ’ judicial’), (’ voting’, ’ Vote’), (’ negotiated’, ’ negotiating’), (’ legislative’, ’ veto’), (’ fund
1. What is the main contribution of the paper regarding transformer weights interpretation? 2. What are the strengths and weaknesses of the proposed approach, particularly in its assumptions and applications? 3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? 4. Are there any concerns or limitations regarding the approach's usefulness and reliability in practice? 5. Are there any previous works that have studied similar ideas, and how does the current paper build upon or differ from them?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper This paper presents an approach to interpreting transformer weights in embedding space via projections. The authors build on top of previous observations that the prediction is determined in the early layers of a transformer and do not change much in later layers and use this to justify directly projecting hidden states in every layer into embedding space. Then the embedding projections for different layer types within the transformer architecture are derived along with an approximate projection substituting the transpose of the embedding matrix for its inverse. Then, we see three applications of the embedding view of transformer weights: (1) interpreting weights for pre-trained and fine-tuned transformers, (2) aligning weights of transformers trained with different seeds over the same vocabulary, and (3) creating a new classifier without training that combines a fine-tuned head from one model with a pre-trained body from another by translation weights in embedding space. Strengths And Weaknesses Pros: The authors introduce an empirically validated view of transformer weights in embedding space. The 3 applications demonstrate a convincing level of evidence for thinking about transformers in embedding space instead of feature space. Cons: There are some strong assumptions made for the approximate projections which are used from the empirical studies. In particular, that we can project directly to embedding space, use transpose of the embedding matrix instead of the inverse, and ignore normalization layers. While interesting, the applications do not seem useful in practice (parameter interpretation or alignment) nor reliable enough (zero-shot stitching). The embedding view for the predictions has been studied previously by Geva et al., 2022 as noted by the authors. If the goal is interpretability, it's not too satisfying to get a flat interpretation instead of a sequence of modifications made via a path through the network as we saw with the work by Ehlage et al. 2021. Clarity, Quality, Novelty And Reproducibility Clarity The paper is well written overall. Quality The authors provide empirical support for their approach to analyzing transformers in embedding space but there are still a few missing pieces for the insights to be more robust/reliable and useful. Novelty Viewing the predictions in embedding space is not a new idea but extending it to all the layers of the transformer is a novel concept. Reproducibility The experiments are pretty straightforward but it is unclear whether the attention heads and examples in the appendix are representative or a heavily filtered set.
ICLR
Title Analyzing Transformers in Embedding Space Abstract Understanding Transformer-based models has attracted significant attention, as they lie at the heart of recent technological advances across machine learning. While most interpretability methods rely on running models over inputs, recent work has shown that a zero-pass approach, where parameters are interpreted directly without a forward/backward pass is feasible for some Transformer parameters, and for two-layer attention networks. In this work, we present a theoretical analysis where all parameters of a trained Transformer are interpreted by projecting them into the embedding space, that is, the space of vocabulary items they operate on. We derive a simple theoretical framework to support our arguments and provide ample evidence for its validity. First, an empirical analysis showing that parameters of both pretrained and fine-tuned models can be interpreted in embedding space. Second, we present two applications of our framework: (a) aligning the parameters of different models that share a vocabulary, and (b) constructing a classifier without training by “translating” the parameters of a fine-tuned classifier to parameters of a different model that was only pretrained. Overall, our findings open the door to interpretation methods that, at least in part, abstract away from model specifics and operate in the embedding space only. 1 INTRODUCTION Transformer-based models [Vaswani et al., 2017] currently dominate Natural Language Processing [Devlin et al., 2018; Radford et al., 2019; Zhang et al., 2022] as well as many other fields of machine learning [Dosovitskiy et al., 2020; Chen et al., 2020; Baevski et al., 2020]. Consequently, understanding their inner workings has been a topic of great interest. Typically, work on interpreting Transformers relies on feeding inputs to the model and analyzing the resulting activations [Adi et al., 2016; Shi et al., 2016; Clark et al., 2019]. Thus, interpretation involves an expensive forward, and sometimes also a backward pass, over multiple inputs. Moreover, such interpretation methods are conditioned on the input, and are not guaranteed to generalize to all inputs. In the evolving literature on static interpretation, i.e., without forward or backward passes, Geva et al. [2022b] showed that the value vectors of the Transformer feed-forward module (the second layer of the feed-forward network) can be interpreted by projecting them into the embedding space, i.e., multiplying them by the embedding matrix to obtain a representation over vocabulary items. Elhage et al. [2021] have shown that in a 2-layer attention network, weight matrices can be interpreted in the embedding space as well. In this work, we extend the theoretical analysis and findings of Elhage et al. [2021] and Geva et al. [2022b], and present a zero-pass framework to understand the behaviour of Transformers. Conceretely, we interpret all weights of a pretrained language model (LM) in embedding space, including both keys and values of the feed-forward module as well as all attention parameters. Our theory relies on a simple observation. Since Geva et al. [2022b] have shown that one can project hidden states to the embedding space via the embedding matrix, we can extend this to other parts of the model by projecting to the embedding space and then projecting back by multiplying with a right-inverse of the embedding matrix. Thus, we can recast inner products in the model as inner products in embedding space. 
Viewing inner products in this way, we can interpret such products as interactions between pairs of vocabulary items.1 This applies to (a) interactions between 1We refer to the unique items of the vocabulary as vocabulary items, and to the (possibly duplicate) elements of a tokenized input as tokens. 2. Pack them into a similarity matrix attention queries and keys as well as to (b) interactions between attention value vectors and the parameters that project them at the output of the attention module. Taking this perspective to an extreme, one can view Transformers as operating implicitly in the embedding space. This entails the existence of a single linear space that depends solely on the tokenizer, in which parameters of different Transformers can be compared. Thus, one can use the embedding space to compare and transfer information across different models that share a tokenizer. We provide extensive empirical evidence for the credibility of our proposal. On the interpretation front (Fig. 1, Left), we provide qualitative and quantitative evidence that Transformer parameters can be interpreted in embedding space. We also show that when fine-tuning a pretrained LM on a sentiment analysis task (over movie reviews), projecting changes in parameters into embedding space yields words that characterize sentiment towards movies. Second (Fig. 1, Center), we show that given two distinct instances of BERT pretrained with different random seeds [Sellam et al., 2022], we can align layers of the two instances by casting their weights into the embedding space. We find that indeed layer i of the first instance aligns well to layer i of the second instance, showing the different BERT instances converge to a semantically-similar solution. Last (Fig. 1, Right), we take a model fine-tuned on a sentiment analysis task and “transfer” the learned weights to a different model that was only pretrained by going through the embedding spaces of the two models. We show that in 30% of the cases, this procedure, termed stitching, results in a classifier that reaches an impressive accuracy of 70% on the IMDB benchmark [Maas et al., 2011] without any training. Overall, our findings suggest that analyzing Transformers in embedding space is fruitful for both interpretability and as a tool to relate different models that share a vocabulary, and opens the door to interpretation methods that operate in embedding space only. Our code is available at https: //anonymized. 2 BACKGROUND We now present the main components of the Transformer [Vaswani et al., 2017] relevant to our analysis. We discuss the residual stream view of Transformers, and recapitulate a view of the attention layer parameters as interaction matrices WVO and WQK [Elhage et al., 2021]. Similar to Elhage et al. [2021], we exclude biases and layer normalization from our analysis. 2.1 TRANSFORMER ARCHITECTURE The Transformer consists of a stack of layers, each includes an attention module followed by a Feed-Forward (FF) module. All inputs and outputs are sequences of N vectors of dimensionality d. The Attention Module takes as input a sequence of representations X ∈ RN×d, and each layer L is parameterized by four matrices W (L)Q ,W (L) K ,W (L) V ,W (L) O ∈ Rd×d (we henceforth omit the layer superscript for brevity). The input X is projected to produce queries, keys, and values: Qatt = XWQ,Katt = XWK , Vatt = XWV . Each one of Qatt,Katt, Vatt is split along the columns to H different heads of dimensionality RN× dH , denoted by Qiatt,Kiatt, V iatt respectively. 
We then compute H attention maps: Ai = softmax ( QiattK iT att√ d/H +M ) ∈ RN×N , where M ∈ RN×N is the attention mask. Each attention map is applied to the corresponding value head as AiV iatt, results are concatenated along columns and projected via WO. The input to the module is added via a residual connection, and thus the attention module’s output is: X + Concat [ A1V 1att, . . . , A iV iatt, . . . , A HV Hatt ] WO. (1) The FF Module is a two-layer neural network, applied to each position independently. Following past terminology [Sukhbaatar et al., 2019; Geva et al., 2020], weights of the first layer are called FF keys and weights of the second layer FF values. This is an analogy to attention, as the FF module too can be expressed as: f(QKT)V , where f is the activation function, Q ∈ RN×d is the output of the attention module and the input to the FF module, and K,V ∈ Rdff×d are the weights of the first and second layers of the FF module. Unlike attention, keys and values are learnable parameters. The output of the FF module is added to the output of the attention module to form the output of the layer via a residual connection. The output of the i-th layer is called the i-th hidden state. Embedding Matrix To process sequences of discrete tokens, Transformers use an embedding matrix E ∈ Rd×e that provides a d-dimensional representation to vocabulary items before entering the first Transformer layer. When training Transformers with a language modeling objective, the same embedding matrix E is often used [Press and Wolf, 2016] to take the output of the last Transformer layer and project it back to the vocabulary dimension, i.e., into the embedding space. In this work, we will interpret all components of the Transformer model in the embedding space. 2.2 THE RESIDUAL STREAM We rely on a useful view of the Transformer through its residual connections proposed by Elhage et al. [2021].2 Specifically, each layer takes a hidden state as input and adds information to the hidden state through its residual connection. Under this view, the hidden state is a residual stream passed along the layers, from which information is read, and to which information is written at each layer. Elhage et al. [2021] and Geva et al. [2022b] observed that the residual stream is often barely updated in the last layers, and thus the final prediction is determined in early layers and the hidden state is mostly passed through the later layers. An exciting consequence of the residual stream view is that we can project hidden states in every layer into embedding space by multiplying the hidden state with the embedding matrix E, treating the hidden state as if it were the output of the last layer. Geva et al. [2022a] used this approach to interpret the prediction of Transformer-based language models, and we follow a similar approach. 2.3 WQK AND WVO Following Elhage et al. [2021], we describe the attention module in terms of interaction matrices WQK and WVO which will be later used in our theoretical derivation. The computation of the attention module (§2.1) can be re-interpreted as follows. The attention projection matrices WQ,WK,WV can be split along the column axis to H equal parts denoted by W iQ,W i K,W i V ∈ Rd× d H for 1 ≤ i ≤ H . Similarly, the attention output matrix WO can be split along the row axis into H heads, W iO ∈ Rd/H×d. We define the interaction matrices as W iQK := W i QW iT K ∈ Rd×d, W iVO := W iVW iO ∈ Rd×d. 2Though earlier mentions include nostalgebraist [2020]. 
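As a concrete illustration, the per-head interaction matrices can be assembled directly from a pretrained checkpoint. The sketch below reflects our assumptions about the Hugging Face GPT-2 weight layout (Conv1D weights applied as x @ W), and is not code from the paper.

```python
# Build W_QK and W_VO for a chosen head of a pretrained GPT-2 model.
import torch
from transformers import GPT2LMHeadModel

model = GPT2LMHeadModel.from_pretrained("gpt2-medium")
d, H = model.config.n_embd, model.config.n_head
dh = d // H

def interaction_matrices(layer: int, head: int):
    attn = model.transformer.h[layer].attn
    W = attn.c_attn.weight                      # (d, 3d): [W_Q | W_K | W_V]
    W_q, W_k, W_v = W[:, :d], W[:, d:2 * d], W[:, 2 * d:]
    W_o = attn.c_proj.weight                    # (d, d)
    sl = slice(head * dh, (head + 1) * dh)
    W_qk = W_q[:, sl] @ W_k[:, sl].T            # (d, d) interaction of queries and keys
    W_vo = W_v[:, sl] @ W_o[sl, :]              # (d, d) interaction of values and outputs
    return W_qk, W_vo

with torch.no_grad():
    W_qk, W_vo = interaction_matrices(layer=10, head=3)   # arbitrary head
    print(W_qk.shape, W_vo.shape)
```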
Importantly, W iQK,W i VO are input-independent. Intuitively, WQK encodes the amount of attention between pairs of tokens. Similarly, in W iVO, the matrices WV and WO can be viewed as a transition matrix that determines how attending to certain tokens affects the subsequent hidden state. We can restate the attention equations in terms of the interaction matrices. Recall (Eq. 1) that the output of the i’th head of the attention module is AiV iatt and the final output of the attention module is (without the residual connection): Concat [ A1V 1att, ..., A iV iatt, ..., A HV Hatt ] WO = H∑ i=1 Ai(XW iV)W i O = H∑ i=1 AiXW iVO. (2) Similarly, the attention map Ai at the i’th head in terms of WQK is (softmax is done row-wise): Ai = softmax ( (XW iQ)(XW i K) T√ d/H +M ) = softmax ( X(W iQK)X T√ d/H +M ) . (3) 3 PROJECTING TRANSFORMER PARAMETERS INTO EMBEDDING SPACE In this section, we propose that Transformer parameters can be projected into embedding space for interpretation purposes. Our results extend Elhage et al. [2021] who obtained similar results for a two-layer attention-only network. We empirically support our framework in §4-§5. Given a matrix A ∈ RN×d, we can project it into embedding space by multiplying by the embedding matrix E as  = AE ∈ RN×e. Let E′ be a right-inverse of E, that is, EE′ = I ∈ Rd×d.3 Then we can reconstruct the original matrix with E′ as A = A(EE′) = ÂE′. We will use this simple identity to reinterpret the model’s operation in embedding space. To simplify our analysis, we ignore layer norms and biases, a standard simplification justified in prior work [Elhage et al., 2021]. In interpretation experiments (§4), we do not use an exact right inverse such as the Moore–Penrose pseudo-inverse [Moore, 1920; Bjerhammar, 1951; Penrose, 1955] but instead use the transpose of the embedding matrix E′ = ET. This is since interpretation involves not only projecting using E′ but also applying a top-k operation where we inspect the vocabulary items with the largest logits. We empirically find that the Moore–Penrose pseudo-inverse does not work well for interpretation due to the top-k operation, and provide a justification and comprehensive empirical evidence in Appendix A. Conversely, ET empirically works well, and we conjecture this is due to the training procedure of LMs where E is used to embed discrete tokens into the hidden state dimension and ET is used to predict a distribution over the vocabulary items from the last hidden state. Attention Module Recall that W iVO := W iVW iO ∈ Rd×d is the interaction matrix between attention values and the output projection matrix for attention head i. By definition, the output of each head is: AiXW iVO = A iX̂E′W iVO. Since the output of the attention module is added to the residual stream, we can assume according to the residual stream view that it is meaningful to project it to the embedding space, similar to FF values. Thus, we expect the sequence of N e-dimensional vectors (AiXW iVO)E = A iX̂(E′W iVOE) to be interpretable. Importantly, the role of A i is just to mix the representations of the updated N input vectors. This is similar to the FF module, where FF values (the parameters of the second layer) are projected into embedding space, and FF keys (parameters of the first layer) determine the coefficients for mixing them. Hence, we can assume that the interpretable components are in the term X̂(E′W iVOE). 
Zooming in on this operation, we see that it takes the previous hidden state in the embedding space ($\hat{X}$) and produces an output in the embedding space which will be incorporated into the next hidden state through the residual stream. Thus, $E'W^i_{VO}E$ is a transition matrix that takes a representation in the embedding space and outputs a new representation in the same space. Similarly, the matrix $W^i_{QK}$ can be viewed as a bilinear map (Eq. 3). To interpret it in embedding space, we perform the following operation with $E'$: $X W^i_{QK} X^\top = (XEE')W^i_{QK}(XEE')^\top = (XE)E'W^i_{QK}E'^\top(XE)^\top = \hat{X}(E'W^i_{QK}E'^\top)\hat{X}^\top$. 3$E'$ exists if $d \le e$ and $E$ is full-rank. Therefore, the interaction between tokens at different positions is determined by an $e\times e$ matrix that expresses the interaction between pairs of vocabulary items. FF Module Geva et al. [2022b] showed that FF value vectors $V \in \mathbb{R}^{d_{ff}\times d}$ are meaningful when projected into embedding space, i.e., for a FF value vector $v \in \mathbb{R}^d$, $vE \in \mathbb{R}^e$ is interpretable (see §2.1). In vectorized form, the rows of $VE \in \mathbb{R}^{d_{ff}\times e}$ are interpretable. On the other hand, the keys $K$ of the FF layer are multiplied on the left by the output of the attention module, which are the queries of the FF layer. Denoting the output of the attention module by $Q$, we can write this product as $QK^\top = \hat{Q}E'K^\top = \hat{Q}(KE'^\top)^\top$. Because $Q$ is a hidden state, we assume according to the residual stream view that $\hat{Q}$ is interpretable in embedding space. When multiplying $\hat{Q}$ by $KE'^\top$, we are capturing the interaction in embedding space between each query and key, and thus expect $KE'^\top$ to be interpretable in embedding space as well. Overall, FF keys and values are intimately connected – the $i$-th key controls the coefficient of the $i$-th value, so we expect their interpretation to be related. While not central to this work, we empirically show that key-value pairs in the FF module are similar in embedding space in Appendix B.1. Subheads Another way to interpret the matrices $W^i_{VO}$ and $W^i_{QK}$ is through the subhead view. We use the following identity: $AB = \sum_{j=1}^{b} A_{:,j}B_{j,:}$, which holds for arbitrary matrices $A \in \mathbb{R}^{a\times b}$, $B \in \mathbb{R}^{b\times c}$, where $A_{:,j} \in \mathbb{R}^{a\times 1}$ are the columns of the matrix $A$ and $B_{j,:} \in \mathbb{R}^{1\times c}$ are the rows of the matrix $B$. Thus, we can decompose $W^i_{VO}$ and $W^i_{QK}$ into a sum of $\frac{d}{H}$ rank-1 matrices: $W^i_{VO} = \sum_{j=1}^{d/H} W^{i,j}_V W^{i,j}_O$ and $W^i_{QK} = \sum_{j=1}^{d/H} W^{i,j}_Q W^{i,j\top}_K$, where $W^{i,j}_Q, W^{i,j}_K, W^{i,j}_V \in \mathbb{R}^{d\times 1}$ are columns of $W^i_Q, W^i_K, W^i_V$ respectively, and $W^{i,j}_O \in \mathbb{R}^{1\times d}$ are the rows of $W^i_O$. We call these vectors subheads. This view is useful since it allows us to interpret subheads directly by multiplying them with the embedding matrix $E$. Moreover, it shows a parallel between interaction matrices in the attention module and the FF module. Just like the FF module includes key-value pairs as described above, for a given head, its interaction matrices are a sum of interactions between pairs of subheads (indexed by $j$), which are likely to be related in embedding space. We show this is indeed empirically the case for pairs of subheads in Appendix B.1. We summarize our approach for projecting the different components of the Transformer into embedding space in Table 1. 4 INTERPRETABILITY EXPERIMENTS In this section, we provide empirical evidence for the viability of our approach as a tool for interpreting Transformer parameters. 4.1 PARAMETER INTERPRETATION EXAMPLES We take GPT-2 medium [Radford et al., 2019] and manually analyze its parameters. GPT-2 medium has a total of 384 attention heads (24 layers and 16 heads per layer).
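As a small illustration of the subhead view and of the embedding-space transition matrix $E'W^i_{VO}E$ (with $E' = E^\top$), the following sketch uses random matrices of illustrative size; it is a hedged stand-in for the head-inspection procedure, not an extract from any released codebase. For a real vocabulary of roughly 50K items, one would score the transition matrix in row chunks rather than materializing the full $e\times e$ array.

```python
import numpy as np

d, H, e = 32, 4, 100                             # illustrative sizes
rng = np.random.default_rng(2)
W_V_i = rng.normal(size=(d, d // H))             # value projection of one head
W_O_i = rng.normal(size=(d // H, d))             # output projection of one head
E = rng.normal(size=(d, e))                      # embedding matrix

# Subhead view: W_VO^i is a sum of d/H rank-1 products of subhead vectors.
W_VO_i = W_V_i @ W_O_i
rank1_sum = sum(np.outer(W_V_i[:, j], W_O_i[j, :]) for j in range(d // H))
assert np.allclose(W_VO_i, rank1_sum)

# Embedding-space transition matrix of the head (Table 1): E' W_VO^i E with E' = E^T.
T = E.T @ W_VO_i @ E                             # (e, e) vocabulary-to-vocabulary map
# Top-k (source, target) vocabulary pairs by transition score.
k = 5
flat = np.argsort(T, axis=None)[::-1][:k]
pairs = [np.unravel_index(ix, T.shape) for ix in flat]
print("top pairs (source_id, target_id):", pairs)
```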
We take the embedded transition matrices $E'W^i_{VO}E$ for all heads and examine the top-$k$ pairs of vocabulary items. As there are only 384 heads, we manually choose a few heads and present the top-$k$ pairs in Appendix C.1 ($k = 50$). We observe that different heads capture different types of relations between pairs of vocabulary items, including word parts, gender, geography, orthography, particular part-of-speech tags, and various semantic topics. In Appendix C.2 we perform a similar analysis for $W_{QK}$. Appendix C.3 provides examples of key-value pairs from the FF modules of GPT-2 medium. We show random pairs $(k, v)$ from the set of those pairs such that, when looking at the top-100 vocabulary items for $k$ and $v$, at least 15% overlap. Such pairs account for approximately 5% of all key-value pairs. The examples show how key-value pairs often revolve around similar topics such as media, months, organs, etc. Last, we show we can use embeddings to locate FF values (or keys) related to a particular topic. We take a few vocabulary items related to a certain topic, e.g., [‘cm’, ‘kg’, ‘inches’], average their embeddings,4 and rank all FF values (or keys) based on their dot-product with the average. 4We subtract the average embedding $\mu$ from $E$ before averaging, which improves interpretability. Appendix C.4 shows a few examples of FF values found with this method that are related to programming, measurements, and animals. 4.2 HIDDEN STATE AND PARAMETERS An advantage of zero-pass interpretation is that it does not require running inputs through the model, which is expensive and non-exhaustive. In this section (and this section only), we run a forward pass over inputs and examine if the representations in embedding space of dynamically-computed hidden states are “similar” to the representations of static parameter vectors that are activated. A technical side note: we use GPT-2, which applies layer norm to the Transformer output before projecting it to the embedding space with $E$. Thus, conservatively, layer norm should be considered as part of the projection operation.5 5Layer norm consists of standardizing the mean and variance of the input followed by an affine transformation. The latter part can be easily absorbed into $E$ (while adding a bias term). Empirically, however, we observe that projecting parameters directly without layer norm works well, which simplifies our analysis in §3. An exception is when projecting hidden states in this section, where we apply layer norm before projection to improve performance, similar to Geva et al. [2022a]. Experimental Design We use GPT-2 medium and run it over 60 examples from IMDB [Maas et al., 2011]. This provides us with a dynamically-computed hidden state $h$ for every token and at the output of every layer. For the projection $\hat{h} \in \mathbb{R}^e$ of each such hidden state, we take the projections of the $m$ most active parameter vectors $\{\hat{x}_i\}_{i=1}^{m}$ in the layer that computed $h$ and check if they cover the dominant vocabulary items of $\hat{h}$ in embedding space. Specifically, let top-$k(wE)$ be the $k$ vocabulary items with largest logits in embedding space for a vector $w \in \mathbb{R}^d$. We compute $R_k(\hat{x}_1, \ldots, \hat{x}_m, \hat{h}) = \frac{|\text{top-}k(\hat{h}) \cap \bigcup_{i=1}^{m} \text{top-}k(\hat{x}_i)|}{k}$ to capture whether activated parameter vectors cover the main vocabulary items corresponding to the hidden state.
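A minimal sketch of how the $R_k$ score can be computed is given below; the vectors are random stand-ins for projected parameter vectors and hidden states, and the helper names are ours rather than from any released implementation.

```python
import numpy as np

def top_k_items(vec_e, k):
    """Indices of the k largest-logit vocabulary items of a vector already in embedding space."""
    return set(np.argsort(vec_e)[::-1][:k])

def r_k(param_projs, hidden_proj, k):
    """R_k: fraction of the hidden state's top-k vocabulary items covered by the
    union of the top-k items of the m most active parameter vectors."""
    covered = set()
    for x_hat in param_projs:
        covered |= top_k_items(x_hat, k)
    return len(top_k_items(hidden_proj, k) & covered) / k

# Toy illustration with random vectors (shapes are illustrative).
rng = np.random.default_rng(3)
e, m, k = 100, 5, 10
h_hat = rng.normal(size=e)                       # projected hidden state \hat{h}
x_hats = [rng.normal(size=e) for _ in range(m)]  # projections of m active parameter vectors
print("R_k =", r_k(x_hats, h_hat, k))
```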
We find the $m$ most active parameter vectors separately for FF keys ($K$), FF values ($V$), attention value subheads ($W_V$) (see §3), and attention output subheads ($W_O$), where the activation of each parameter vector is determined by the vector's “coefficient” as follows. For a FF key-value pair $(k, v)$ the coefficient is $\sigma(q^\top k)$, where $q \in \mathbb{R}^d$ is an input to the FF module, and $\sigma$ is the FF nonlinearity. For attention value-output subhead pairs $(v, o)$ the coefficient is $x^\top v$, where $x$ is the input to this component (for attention head $i$, the input is one of the rows of $A^i X$, see Eq. 2). Results and Discussion Figure 2 presents the $R_k$ score averaged across tokens per layer. As a baseline, we compare the $R_k$ of the activated vectors $\{\hat{x}_i\}_{i=1}^{m}$ with the correctly-aligned hidden state $\hat{h}$ at the output of the relevant layer (blue bars) against the $R_k$ when randomly sampling $\hat{h}_{rand}$ from the set of all hidden states (orange bars). We conclude that the representations in embedding space induced by activated parameter vectors mirror, at least to some extent, the representations of the hidden states themselves. Appendix §B.2 shows a variant of this experiment, where we compare activated parameters throughout GPT-2 medium's layers to the last hidden state, which produces the logits used for prediction. 4.3 INTERPRETATION OF FINE-TUNED MODELS We now show that we can interpret the changes a model goes through during fine-tuning through the lens of embedding space. We fine-tune the top-3 layers of the 12-layer GPT-2-base with a sequence classification head on IMDB sentiment analysis (binary classification) and compute the difference between the original parameters and the fine-tuned model. We then project the difference of parameter vectors into embedding space and test if the change is interpretable w.r.t. sentiment analysis. Appendix D shows examples of projected differences randomly sampled from the fine-tuned layers. Frequently, the difference, or its negation, is projected to nouns, adjectives and adverbs that express sentiment for a movie, such as ‘amazing’, ‘masterpiece’, ‘incompetence’, etc. This shows that the differences are indeed projected into vocabulary items that characterize movie reviews' sentiment. Almost all parameter groups present this behavior, except for $V$ and $W_O$, which curiously are the parameters added to the residual stream. 5 ALIGNING MODELS IN EMBEDDING SPACE Assuming Transformers by and large operate in embedding space leads to an exciting possibility – we can relate different models to one another so long as they share a vocabulary and tokenizer. In §5.1, we show that we can align the layers of BERT models trained with different random seeds. In §5.2, we show the embedding space can be leveraged to “stitch” the parameters of a fine-tuned model to a model that was not fine-tuned. 5.1 LAYER ALIGNMENT Experimental Design Taking our approach to the extreme, the embedding space is a universal space, which depends only on the tokenizer, and in which Transformer parameters and hidden states reside. Consequently, we can align parameter vectors in this space and compare them even if they come from different models, as long as they share a vocabulary. To demonstrate this, we use MultiBERT [Sellam et al., 2022], which contains 25 different instantiations of BERT initialized from different random seeds. We take parameters from two MultiBERT seeds and compute the Pearson correlation between their projections to embedding space. For example, let $V_A$, $V_B$ be the FF values of models A and B.
We can project the values into embedding space: $V_A E_A$, $V_B E_B$, where $E_A, E_B$ are the respective embedding matrices, and compute Pearson correlation between projected values. This produces a similarity matrix $\tilde{S} \in \mathbb{R}^{|V_A|\times|V_B|}$, where each entry is the correlation coefficient between projected values from the two models. We bin $\tilde{S}$ by layer pairs and average the absolute value of the scores in each bin (different models might encode the same information in different directions, so we use absolute value) to produce a matrix $S \in \mathbb{R}^{L\times L}$, where $L$ is the number of layers. Specifically, the average (absolute) correlation between vectors that come from layer $\ell_A$ in model A and layer $\ell_B$ in model B is registered in entry $(\ell_A, \ell_B)$ of $S$. Last, to obtain a one-to-one layer alignment, we use the Hungarian algorithm [Kuhn, 1955], which assigns exactly one layer from the first model to a layer from the second model. The algorithm's objective is to maximize, given a similarity matrix $S$, the sum of similarities of the chosen pairs, such that each index in one model is matched with exactly one index in the other. We repeat this for all parameter groups ($W_Q, W_K, W_V, W_O, K$). Results and Discussion Figure 3 (left) shows the resulting alignment. Clearly, parameters from a certain layer in model A tend to align to the same layer in model B across all parameter groups. This suggests that different layers from different models that were trained separately (but with the same training objective and data) serve a similar function. As further evidence, we show that if not projected, the matching appears absolutely random in Figure 3 (right). We show the same results for other seed pairs as well in Appendix B.3. 5.2 ZERO-SHOT STITCHING Model stitching [Lenc and Vedaldi, 2015; Csiszárik et al., 2021; Bansal et al., 2021] is a relatively under-explored feature of neural networks, particularly in NLP. The idea is that different models, sometimes trained on different data and with different architectures, learn representations that can be aligned through a linear transformation, termed stitching. Representations correspond to hidden states, and thus one can learn a transformation matrix from one model's hidden states to an equivalent hidden state in the other model. Here, we show that going through embedding space one can align the hidden states of two models, i.e., stitch, without training. Given two models, we want to find a linear stitching transformation to align their representation spaces. According to our theory, given a hidden state $v \in \mathbb{R}^{d_1}$ from model A, we can project it to the embedding space as $vE_A$, where $E_A$ is its embedding matrix. Then, we can re-project to the feature space of model B with $E_B^{+} \in \mathbb{R}^{e\times d_2}$, where $E_B^{+}$ is the Moore–Penrose pseudo-inverse of the embedding matrix $E_B$.6 6Since we are not interested in interpretation, we use an exact right-inverse and not the transpose. This transformation can be expressed as multiplication with the kernel $K_{AB} := E_A E_B^{+} \in \mathbb{R}^{d_1\times d_2}$. We employ the above approach to take representations of a fine-tuned classifier, A, and stitch them on top of a model B that was only pretrained, to obtain a new classifier based on B. Experimental Design We use the 24-layer GPT-2 medium as model A and the 12-layer GPT-2 base model trained in §4.3 as model B. We fine-tune the last three layers of model B on IMDB, as explained in §4.3. Stitching is simple and is performed as follows. Given the sequence of $N$ hidden states $H^\ell_A \in \mathbb{R}^{N\times d_1}$ at the output of layer $\ell$ of model A ($\ell$ is a hyperparameter), we apply the stitching layer, which multiplies the hidden states with the kernel, computing $H^\ell_A K_{AB}$.
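Before turning to the results, here is a minimal sketch of the stitching kernel just described; dimensions and array names are illustrative assumptions, and in practice the hidden states would come from a forward pass of model A rather than a random draw.

```python
import numpy as np

# Illustrative dimensions: models A and B share a vocabulary of size e but differ in hidden size.
e, d1, d2, N = 200, 48, 32, 6
rng = np.random.default_rng(4)
E_A = rng.normal(size=(d1, e))                   # embedding matrix of model A
E_B = rng.normal(size=(d2, e))                   # embedding matrix of model B

# Stitching kernel K_AB = E_A E_B^+ maps model-A hidden states into model-B's feature space.
K_AB = E_A @ np.linalg.pinv(E_B)                 # shape (d1, d2)

H_A = rng.normal(size=(N, d1))                   # stand-in for hidden states from layer l of model A
H_B = H_A @ K_AB                                 # stitched hidden states, fed to B's fine-tuned top layers
print(H_B.shape)                                 # (N, d2)
```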
This results in hidden states $H_B \in \mathbb{R}^{N\times d_2}$, used as input to the three fine-tuned layers from B. Results and Discussion Stitching produces models with accuracies that are higher than random on the IMDB evaluation set, but not consistently. Figure 4 shows the accuracy of stitched models against the layer index from model A over which stitching is performed. Out of 11 random seeds, three models obtained accuracy that is significantly higher than the baseline 50% accuracy, reaching an accuracy of roughly 70%, when stitching is done over the top layers. 6 RELATED WORK Interpreting Transformers is a broad area of research that has attracted much attention in recent years. A large body of work has focused on analyzing hidden representations, mostly through probing [Adi et al., 2016; Shi et al., 2016; Tenney et al., 2019; Rogers et al., 2020]. Voita et al. [2019a] used statistical tools to analyze the evolution of hidden representations throughout layers. Recently, Mickus et al. [2022] proposed to decompose the hidden representations into the contributions of different Transformer components. Unlike these works, we interpret parameters rather than the hidden representations. Another substantial effort has been to interpret specific network components. Previous work analyzed single neurons [Dalvi et al., 2018; Durrani et al., 2020], attention heads [Clark et al., 2019; Voita et al., 2019b], and feedforward values [Geva et al., 2020; Dai et al., 2021; Elhage et al., 2022]. While these works mostly rely on input-dependent neuron activations, we inspect “static” model parameters, and provide a comprehensive view of all Transformer components. Our work is most related to efforts to interpret specific groups of Transformer parameters. Cammarata et al. [2020] made observations about the interpretability of weights of neural networks. Elhage et al. [2021] analyzed 2-layer attention networks. We extend their analysis to multi-layer pre-trained Transformer models. Geva et al. [2020; 2022a;b] interpreted feedforward values in embedding space. We coalesce these lines of work and offer a unified interpretation framework for Transformers in embedding space. 7 DISCUSSION Our work has a few limitations that we care to highlight. First, it focuses on interpreting models through the vocabulary lens. While we have shown evidence for this, it does not preclude other factors from being involved in the computation process. Second, we used $E' = E^\top$, but future research might find variants of $E$ that improve performance. Last, we assume Transformer components can be projected to the embedding space with a single matrix multiplication, but this might depend on model training, e.g., in GPT-2 it involves a layer norm operation as explained in §4.2. Notwithstanding, we believe the benefits of our work overshadow its limitations. We provide a simple and efficient approach, which equips researchers with new tools to interpret Transformer models and relate them to one another. Apart from Elhage et al. [2021], there has been little work pursuing the embedding space approach, and we “sharpen” the tools they laid down and adjust them to existing pre-trained Transformers. Moreover, our framework allows us to view parameters from different models as residents of the same universal embedding space, where they can be compared in a model-agnostic fashion.
We demonstrate two applications of this observation (model alignment and stitching) and argue future work can yield many additional applications. A RETHINKING INTERPRETATION The process of interpreting a vector $v$ in Geva et al. [2022b] proceeds in two steps: first, the projection of the vector to the embedding space ($vE$); then, we use the list of the tokens that were assigned the largest values in the projected vector, i.e., top-$k(vE)$, as the interpretation of the projected vector. This is reasonable since (a) the most activated coordinates contribute the most when added to the residual stream, and (b) this matches how we eventually decode: we project to the embedding space and consider the top-1 token (or one of the few top tokens, when using beam search). In this work, we interpret inner products and matrix multiplications in the embedding space: given two vectors $x, y \in \mathbb{R}^d$, their inner product $x^\top y$ can be considered in the embedding space by multiplying with $E$ and then by one of its right inverses (e.g., its pseudo-inverse $E^{+}$ [Moore, 1920; Bjerhammar, 1951; Penrose, 1955]): $x^\top y = x^\top E E^{+} y = (x^\top E)(yE^{+\top})^\top$. Assume $xE$ is interpretable in the embedding space, crudely meaning that it represents logits over vocabulary items. We expect $y$, which interacts with $x$, to also be interpretable in the embedding space. Consequently, we would like to take $yE^{+\top}$ to be the projection of $y$. However, this projection does not take into account the subsequent interpretation using top-$k$. The projected vector $yE^{+\top}$ might be harder to interpret in terms of its most activated tokens. To alleviate this problem, we need a different “inverse” matrix $E'$ that works well when considering the top-$k$ operation. Formally, we want an $E'$ with the following “robustness” guarantee: $\text{keep-}k(xE)^\top \text{keep-}k(yE') \approx x^\top y$, where keep-$k(v)$ is equal to $v$ for coordinates whose absolute value is in the top-$k$, and zero elsewhere. This is a stronger notion of inverse – not only is $EE' \approx I$, but even when truncating the vector in the embedding space we can still reconstruct it with $E'$. We claim that $E^\top$ is a decent instantiation of $E'$ and provide some empirical evidence. While a substantive line of work [Ethayarajh, 2019; Gao et al., 2019; Wang et al., 2020; Rudman et al., 2021] has shown that embedding matrices are not isotropic (an isotropic matrix $E$ has to satisfy $EE^\top = \alpha I$ for some scalar $\alpha$), we show that it is isotropic enough to make $E^\top$ a legitimate compromise. We randomly sample 300 vectors drawn from the normal distribution $\mathcal{N}(0, 1)$, and compute for every pair $x, y$ the cosine similarity between $x^\top y$ and $\text{keep-}k(xE)^\top \text{keep-}k(yE')$ for $k = 1000$, and then average over all pairs. We repeat this for $E' \in \{E^{+\top}, E\}$ and obtain a score of 0.10 for $E^{+\top}$ and 0.83 for $E$, showing that $E$ is better when using top-$k$. More globally, we compare $E' \in \{E^{+\top}, E\}$ for $k \in \{10, 50, 100, 200, 300, 500\}$ with three distributions: $x, y$ drawn from the normal $\mathcal{N}(0, 1)$ distribution; $x, y$ chosen randomly from the FF values; and $x, y$ drawn from hidden states along Transformer computations. In Figure 5 (Left) we show the results, where dashed lines represent $E^{+}$ and solid lines represent $E^\top$. For small values of $k$ (used for interpretation), $E^\top$ is superior to $E^{+}$ across all distributions. Interestingly, the hidden state distribution is the only distribution where $E^{+}$ has similar performance to $E^\top$. Curiously, when looking at higher values of $k$ the trend is reversed ($k \in \{512, 1024, 2048, 4096, 10000, 15000, 20000, 30000\}$) – see Figure 5 (Right).
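A small sketch of this robustness check follows; it uses random Gaussian data and reports a simple correlation between exact and truncated inner products across pairs, which is only a stand-in for the averaged similarity score reported above, so the exact numbers will differ.

```python
import numpy as np

def keep_k(v, k):
    """Zero out all but the k largest-magnitude coordinates of v."""
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[::-1][:k]
    out[idx] = v[idx]
    return out

def truncated_product(x, y, P, k):
    """keep-k(xP)^T keep-k(yP): inner product after projecting with P and truncating."""
    return keep_k(x @ P, k) @ keep_k(y @ P, k)

d, e, k, n = 32, 200, 50, 300
rng = np.random.default_rng(5)
E = rng.normal(size=(d, e))
candidates = {"E^+T": np.linalg.pinv(E).T, "E": E}   # the two projections compared in Appendix A

xs = rng.normal(size=(n, d))
ys = rng.normal(size=(n, d))
exact = np.array([x @ y for x, y in zip(xs, ys)])
for name, P in candidates.items():
    approx = np.array([truncated_product(x, y, P, k) for x, y in zip(xs, ys)])
    # Correlation across pairs is scale-invariant, so the global scaling introduced
    # by each projection does not matter for the comparison.
    corr = np.corrcoef(exact, approx)[0, 1]
    print(f"{name}: correlation with exact inner products = {corr:.2f}")
```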
This resolves the apparent tension with findings showing that embedding matrices are not isotropic, as we see that indeed, as $k$ grows, $E^\top$ becomes an increasingly bad approximate right-inverse of the embedding matrix. The only distribution that keeps high performance with $E^\top$ is the hidden state distribution, which is an interesting future direction of investigation. B ADDITIONAL MATERIAL B.1 CORRESPONDING PARAMETER PAIRS ARE RELATED We define the following metric, applied to vectors after projecting them into the embedding space: $\mathrm{Sim}_k(\hat{x}, \hat{y}) = \frac{|\text{top-}k(\hat{x}) \cap \text{top-}k(\hat{y})|}{|\text{top-}k(\hat{x}) \cup \text{top-}k(\hat{y})|}$, where top-$k(v)$ is the set of $k$ top activated indices in the vector $v$ (which correspond to tokens in the embedding space). This metric is the Jaccard index [Jaccard, 1912] applied to the top-$k$ tokens from each vector. In Figure 6, Left, we demonstrate that corresponding FF key and value vectors are more similar (in embedding space) than two random key and value vectors. In Figure 6, Right, we show a similar result for attention value and output vectors. In Figure 6, Bottom, the same analysis is done for attention query and key vectors. This shows that there is a much higher-than-chance relation between corresponding FF keys and values (and the same for attention values and outputs). B.2 FINAL PREDICTION AND PARAMETERS We show that the final prediction of the model is correlated in embedding space with the most activated parameters from each layer. This implies that these objects are germane to the analysis of the final prediction in the embedding space, which in turn suggests that the embedding space is a viable choice for interpreting these vectors. Figure 7 shows that, just like in §4.2, correspondence is better when hidden states are not randomized, suggesting these parameter interpretations have an impact on the final prediction. B.3 PARAMETER ALIGNMENT PLOTS FOR ADDITIONAL MODEL PAIRS Alignment in embedding space of layers of pairs of BERT models trained with different random seeds, for additional model pairs. [Figures: alignment heatmaps over the parameter groups K, V, WK, WQ, WV, WO for seed pairs 1 vs. 2, 2 vs. 3, 3 vs. 4, and 4 vs. 5.] C EXAMPLE CASES C.1 VALUE-OUTPUT MATRICES Below we show value-output pairs from different heads of GPT-2 Medium. For each head, we show the 50 pairs with largest value in the $e \times e$ transition matrix. There are 384 attention heads in GPT-2 medium from which we manually choose a subset.
Throughout the section some lists were marked with asterisks indicating the way this particular list was created: * - pairs of the form (x, x) were excluded from the list C.1.1 LOW LEVEL LANGUAGE MODELING Layer 21 Head 7* (’FN’, ’NF’), (’ Ramos’, ’Ram’), (’ Hughes’, ’Hug’), (’GR’, ’gran’), (’NF’, ’FN’), (’CL’, ’CLA’), (’ McCain’, ’McC’), (’ Marshall’, ’Marsh’), (’Hug’, ’ Hughes’), (’ Tanner’, ’Tan’), (’NH’, ’nih’), (’NR’, ’NRS’), (’Bow’, ’ Bowman’), (’Marsh’, ’ Marshall’), (’ Jacobs’, ’Jac’), (’ Hayes’, ’Hay’), (’Hay’, ’ Hayes’), (’ McCorm’, ’McC’), (’NR’, ’NI’), (’ Dawson’, ’ sidx’), (’Tan’, ’ Tanner’), (’GR’, ’gra’), (’jac’, ’JA’), (’zo’, ’zos’), (’NF’, ’NI’), (’ McCull’, ’McC’), (’Jac’, ’ Jacobs’), (’ Beet’, ’ Beetle’), (’FG’, ’GF’), (’ja’, ’jas’), (’ Wilkinson’, ’Wil’), (’Ram’, ’ Ramos’), (’GR’, ’GRE’), (’FN’, ’ NF’), (’McC’, ’ McCorm’), (’ Scarborough’, ’Scar’), (’Ba’, ’ Baal’), (’FG’, ’FP’), (’FN’, ’FH’), (’Gar’, ’ Garfield’), (’jac’, ’jas’), (’nut’, ’nuts’), (’ Wis’, ’WI’), (’ Vaughan’, ’ Vaughn’), (’PF’, ’FP’), (’RN’, ’RNA’), (’jac’, ’ Jacobs’), (’FN’, ’FM’), (’Kn’, ’ Knox’), (’nic’, ’NI’) Layer 19 Head 13 (guessing the first letter/consonant of the word) (’senal’, ’ R’), # arsenal (’senal’, ’R’), (’vernment’, ’ G’), # government (’ Madness’, ’ M’), (’ Mayhem’, ’ M’), (’nesday’, ’ W’), # wednesday (’vernment’, ’G’), (’ Madness’, ’M’), (’lace’, ’ N’), # necklace (’nesday’, ’W’), (’senal’, ’Rs’), (’vernment’, ’ g’), (’farious’, ’ N’), # nefarious (’eneg’, ’ C’), (’senal’, ’ r’), (’ruary’, ’ F’), # february (’senal’, ’RIC’), (’ondo’, ’ R’), (’ Mandela’, ’ N’), # nelson (’ Mayhem’, ’M’), (’senal’, ’ RD’), (’estine’, ’ C’), (’vernment’, ’Gs’), (’senal’, ’RF’), (’esis’, ’ N’), (’Reviewed’, ’ N’), (’arette’, ’ C’), # cigarette (’rome’, ’ N’), (’theless’, ’ N’), # nonetheless (’lace’, ’N’), (’DEN’, ’ H’), (’ versa’, ’ V’), (’bably’, ’ P’), # probably (’vernment’, ’GF’), (’vernment’, ’g’), (’vernment’, ’GP’), (’ornia’, ’ C’), # california (’ilipp’, ’ F’), (’umbered’, ’ N’), (’arettes’, ’ C’), (’senal’, ’RS’), (’onsense’, ’ N’), (’senal’, ’RD’), (’senal’, ’RAL’), (’uci’, ’ F’), (’ondo’, ’R’), (’senal’, ’ RI’), (’iday’, ’ H’), # holiday (’senal’, ’ Rx’), (’odor’, ’ F’) Layer 20 Head 9 (’ behalf’, ’On’), (’ behalf’, ’ On’), (’ behalf’, ’ on’), (’ periods’, ’during’), (’ bounds’, ’within’), (’ envelope’, ’ inside’), (’door’, ’outside’), (’ envelope’, ’inside’), (’ regime’, ’ Under’), (’ periods’, ’ during’), (’lihood’, ’ LIKE’), (’ occasions’, ’ on’), (’ regime’, ’Under’), (’door’, ’inside’), (’period’, ’during’), (’lihood’, ’Like’), (’ periods’, ’ During’), (’ envelope’, ’Inside’), (’ sake’, ’for’), (’ doors’, ’ inside’), (’ regime’, ’ under’), (’ behalf’, ’ ON’), (’ purposes’, ’for’), (’ occasions’, ’On’), (’ doors’, ’inside’), (’ basis’, ’ on’), (’ regimes’, ’ Under’), (’doors’, ’outside’), (’ Osc’, ’inside’), (’ periods’, ’During’), (’door’, ’ inside’), (’ regime’, ’ UNDER’), (’ regimes’, ’ under’), (’ regimes’, ’Under’), (’doors’, ’inside’), (’zx’, ’inside’), (’ period’, ’during’), (’ascript’, ’inside’), (’door’, ’Inside’), (’ occasions’, ’ On’), (’ysc’, ’BuyableInstoreAndOnline’) , (’ envelope’, ’ Inside’), (’ pauses’, ’during’), (’ regime’, ’under’), (’ occasion’, ’ on’), (’ doors’, ’outside’), (’ banner’, ’ UNDER’), (’ envelope’, ’within’), (’abouts’, ’ here’), (’ duration’, ’during’) Layer 22 Head 5 (named entities, mostly made of two parts) (’enegger’, ’ Schwartz’), (’shire’, ’ Lincoln’), (’xual’, ’Weiss’), (’nery’, ’ Nun’), (’ Qiao’, ’ Huang’), (’schild’, ’ Schwarz’), (’oslov’, ’ 
Czech’), (’ Rica’, ’ Costa’), (’ Qiao’, ’ Qiao’), (’xual’, ’ RW’), (’ Nadu’, ’ Tamil’), (’ Nadu’, ’Tam’), (’shire’, ’ Baldwin’), (’swick’, ’ Hoff’), (’xual’, ’ Weiss’), (’ Takeru’, ’ Yamato’), (’xual’, ’ Grassley’), (’swick’, ’ Schwartz’), (’enegger’, ’ Schiff’), (’enegger’, ’Weiss’), (’xual’, ’RW’), (’shire’, ’ Nottingham’), (’shire’, ’ Barrett’), (’arest’, ’ Buch’), (’ Fei’, ’ Fei’), (’miah’, ’Jere’), (’swick’, ’ Owl’), (’ufact’, ’ Swanson’), (’akuya’, ’ Tanaka’), (’ Sachs’, ’ Feinstein’), (’enegger’, ’ Wagner’), (’otle’, ’Roberts’), (’shire’, ’ Neville’), (’oslov’, ’ Prague’), (’sburg’, ’ Hammond’), (’ ILCS’, ’ Dunham’), (’ Malfoy’, ’ Draco’), (’yip’, ’Billy’), (’iversal’, ’ Monroe’), (’iversal’, ’Murray’), (’Yang’, ’Yang’), (’akuya’, ’ Krishna’), (’schild’, ’ Schwartz’), (’tz’, ’ Rabb’), (’shire’, ’gow’), (’enegger’, ’ Feldman’), (’cair’, ’ Chou’), (’enegger’, ’ Duffy’), (’enegger’, ’Sch’), (’ Jensen’, ’ Jensen’) Layer 22 Head 13 (’ Additionally’, ’ the’), (’ Unfortunately’, ’ the’), (’ Nevertheless’, ’ the’), (’ Sadly’, ’ the’), (’ However’, ’ the’), (’ Furthermore’, ’ the’), (’ Additionally’, ’,’), (’ During’, ’ the’), (’ Moreover’, ’ the’), (’ Whilst’, ’ the’), (’ Since’, ’ the’), (’ Unfortunately’, ’,’), (’ Additionally’, ’-’), (’ Perhaps’, ’ the’), (’ Sadly’, ’,’), (’ Throughout’, ’ the’), (’ Nevertheless’, ’,’), (’ While’, ’ the’), (’ However’, ’,’), (’ Although’, ’ the’), (’ There’, ’ the’), (’ Furthermore’, ’,’), (’ Eventually’, ’ the’), (’ Meanwhile’, ’ the’), (’ Hopefully’, ’ the’), (’ Nevertheless’, ’-’), (’ During’, ’,’), (’ Regardless’, ’ the’), (’ However’, ’-’), (’ Whilst’, ’,’), (’ Additionally’, ’ and’), (’ Moreover’, ’,’), (’ Unfortunately’, ’-’), (’ They’, ’ the’), (’ Sadly’, ’-’), (’ Whereas’, ’ the’), (’ Additionally’, ’ a’), (’ Furthermore’, ’-’), (’ Unlike’, ’ the’), (’ Typically’, ’ the’), (’ Since’, ’,’), (’ Normally’, ’ the’), (’ Perhaps’, ’,’), (’ During’, ’-’), (’ Throughout’, ’,’), (’ While’, ’,’), (’ Nevertheless’, ’ a’), (’ Interestingly’, ’ the’), (’ Unfortunately’, ’ and’), (’ Unfortunately’, ’ a’) C.1.2 GENDER Layer 18 Head 1 (’ Marie’, ’women’), (’ Marie’, ’ actresses’), (’ Anne’, ’women’), (’ Anne’, ’Women’), (’ Marie’, ’woman’), (’ Marie’, ’Women’), (’ Anne’, ’woman’), (’ Marie’, ’Woman’), (’ Anne’, ’ actresses’), (’ Marie’, ’ heroine’), (’Jane’, ’Women’), (’ Anne’, ’ heroine’), (’Jane’, ’women’), (’ actresses’, ’Women’), (’ Anne’, ’Woman’), (’ Esther’, ’Women’), (’ Esther’, ’women’), (’ Marie’, ’girls’), (’ Anne’, ’Mrs’), (’ Marie’, ’ actress’), (’ actresses’, ’women’), (’Jane’, ’Woman’), (’ Marie’, ’ girls’), (’Jane’, ’ actresses’), (’Anne’, ’Woman’), (’ Marie’, ’Girls’), (’Anne’, ’women’), (’ Anne’, ’Girls’), (’ actresses’, ’Woman’), (’ Marie’, ’ Women’), (’ Anne’, ’ Women’), (’ Anne’, ’ girls’), (’ Anne’, ’girl’), (’Anne’, ’Women’), (’Women’, ’Woman’), (’ Anne’, ’girls’), (’Anne’, ’ actresses’), (’ Michelle’, ’women’), (’ Marie’, ’ Actress’), (’ Marie’, ’girl’), (’ Anne’, ’ Feminist’), (’ Marie’, ’ women’), (’ Devi’, ’Women’), (’ Elizabeth’, ’Women’), (’ Anne’, ’ actress’), (’Anne’, ’Mrs’), (’Answer’, ’answered’), (’Anne’, ’woman’), (’maid’, ’Woman’), (’Marie’, ’women’) C.1.3 GEOGRAPHY Layer 16 Head 6* (’ Mumbai’, ’ Chennai’), (’ Mumbai’, ’India’), (’ Chennai’, ’ Mumbai’), (’ Tasmania’, ’ Queensland’), (’ Rahul’, ’India’), (’ Gujar’, ’India’), (’ Bangalore’, ’ Chennai’), (’Scotland’, ’England’), (’ Kerala’, ’ Chennai’), (’ Mumbai’, ’ Delhi’), (’Scotland’, ’Britain’), (’ Mumbai’, ’ Bangalore’), (’India’, ’Pakistan’), (’Ireland’, ’Scotland’), (’ 
Bangalore’, ’ Mumbai’), (’ Chennai’, ’ Bangalore’), (’ Gujar’, ’ Aadhaar’), (’ Maharashtra’, ’ Mumbai’), (’ Gujarat’, ’ Maharashtra’), (’ Gujar’, ’ Gujarat’), (’Australia’, ’Australian’), (’ Gujarat’, ’India’), (’ Gujar’, ’ Rahul’), (’ Mumbai’, ’ Maharashtra’), (’England’, ’Britain’), (’ Chennai’, ’India’), (’ Bombay’, ’ Mumbai’), (’ Kerala’, ’ Tamil’), (’ Mumbai’, ’ Hindi’), (’ Tasman’, ’ Tasmania’), (’India’, ’ Mumbai’), (’ Gujar’, ’ Hindi’), (’ Gujar’, ’ Maharashtra’), (’Austral’, ’ Australians’), (’ Kerala’, ’ Maharashtra’), (’ Bangalore’, ’India’), (’ Kerala’, ’India’), (’ Bombay’, ’India’), (’Austral’, ’Australia’), (’India’, ’ Aadhaar’), (’ Mumbai’, ’ Sharma’), (’Austral’, ’Australian’), (’ Kerala’, ’ Mumbai’), (’England’, ’Scotland’), (’ Gujar’, ’ Mumbai’), (’ Mumbai’, ’ Rahul’), (’ Tasman’, ’ Queensland’), (’ Chennai’, ’ Tamil’), (’ Maharashtra’, ’ Gujarat’), (’ Modi’, ’India’) Layer 18 Head 9 (’ Winnipeg’, ’ Winnipeg’), (’ Edmonton’, ’ Winnipeg’), (’ Winnipeg’, ’ Ottawa’), (’ Calgary’, ’ Winnipeg’), (’ Ottawa’, ’ Winnipeg’), (’ Winnipeg’, ’ Calgary’), (’ Winnipeg’, ’CBC’), (’ Winnipeg’, ’Canada’), (’ Canberra’, ’ Canberra’), (’ RCMP’, ’ Winnipeg’), (’ Ottawa’, ’CBC’), (’ Winnipeg’, ’Canadian’), (’Toronto’, ’ Winnipeg’), (’ Winnipeg’, ’ Canadians’), (’ Edmonton’, ’ Ottawa’), (’ Winnipeg’, ’ RCMP’), (’ Winnipeg’, ’ Edmonton’), (’ Ottawa’, ’Canadian’), (’Canadian’, ’ Winnipeg’), (’Toronto’, ’ Calgary’), (’ Winnipeg’, ’ Quebec’), (’ Winnipeg’, ’ Canad’), (’Toronto’, ’Canadian’), (’ Edmonton’, ’ Edmonton’), (’ Ottawa’, ’ Calgary’), (’ Leafs’, ’ Winnipeg’), (’ Edmonton’, ’ Calgary’), (’ Ottawa’, ’Canada’), (’ Calgary’, ’Canadian’), (’Toronto’, ’Canada’), (’ Calgary’, ’ Calgary’), (’Ott’, ’ Winnipeg’), (’ Winnipeg’, ’ Saskatchewan’), (’ Winnipeg’, ’ Canadian’), (’ Ottawa’, ’ Ottawa’), (’ Calgary’, ’ Ottawa’), (’ Winnipeg’, ’ Manitoba’), (’ Canadians’, ’ Winnipeg’), (’ Winnipeg’, ’ Canada’), (’ RCMP’, ’ Calgary’), (’Toronto’, ’ Manitoba’), (’Toronto’, ’ Ottawa’), (’CBC’, ’ Winnipeg’), (’Canadian’, ’Canada’), (’ Edmonton’, ’Canadian’), (’ RCMP’, ’ Ottawa’), (’ Winnipeg’, ’ipeg’), (’Toronto’, ’Toronto’), (’Canadian’, ’ Calgary’), (’ Ottawa’, ’ Canadians’) Layer 16 Head 2* (’ Australians’, ’Austral’), (’Austral’, ’Australia’), (’Austral’, ’ Canberra’), (’ Canberra’, ’Austral’), (’ Edmonton’, ’ Winnipeg’), (’Austral’, ’Australian’), (’ Edmonton’, ’ Alberta’), (’ Australians’, ’Australia’), (’Austral’, ’ Australians’), (’ovych’, ’Ukraine’), (’ Canad’, ’ Quebec’), (’ Australians’, ’Australian’), (’ Manitoba’, ’ Winnipeg’), (’ Winnipeg’, ’ Manitoba’), (’Canada’, ’Canadian’), (’ Bulgar’, ’Moscow’), (’ Edmonton’, ’ Manitoba’), (’Austral’, ’berra’), (’Australian’, ’Austral’), (’ovych’, ’ Ukrainians’), (’ Canadians’, ’Canada’), (’ Australians’, ’ Canberra’), (’Canadian’, ’Canada’), (’ovych’, ’ Yanukovych’), (’ Trudeau’, ’Canada’), (’ Bulgar’, ’ Dmitry’), (’Austral’, ’ Australia’), (’ Canad’, ’ Mulcair’), (’ Canberra’, ’berra’), (’oglu’, ’Turkish’), (’Canada’, ’udeau’), (’ Oilers’, ’ Edmonton’), (’ Canberra’, ’Australia’), (’ Edmonton’, ’Canada’), (’ Calgary’, ’ Edmonton’), (’ Calgary’, ’ Alberta’), (’ Trudeau’, ’udeau’), (’ Edmonton’, ’ Calgary’), (’ Trudeau’, ’Canadian’), (’ Canberra’, ’Australian’), (’ Canucks’, ’ Vancouver’), (’Australian’, ’Australia’), (’ Fraser’, ’ Vancouver’), (’ Edmonton’, ’Canadian’), (’elaide’, ’Austral’), (’ Braz’, ’Tex’), (’ RCMP’, ’Canada’), (’sov’, ’Moscow’), (’ Bulgar’, ’Russia’), (’Canada’, ’ Canadians’) Layer 21 Head 12* (’ Indones’, ’ Indonesian’), (’ Nguyen’, ’ 
Vietnamese’), (’ Jakarta’, ’ Indonesian’), (’ Indonesia’, ’ Indonesian’), (’oglu’, ’Turkish’), (’ Indones’, ’ Indonesia’), (’ Indones’, ’ Jakarta’), (’ Koreans’, ’ Korean’), (’oglu’, ’ Turkish’), (’ Taiwanese’, ’ Taiwan’), (’ Nguyen’, ’ Thai’), (’Brazil’, ’ Brazilian’), (’ Indonesia’, ’ Indones’), (’ Taiwanese’, ’Tai’), (’oglu’, ’ Istanbul’), (’ Indonesian’, ’ Indones’), (’ Jakarta’, ’ Indones’), (’ Nguyen’, ’ Laos’), (’ Sloven’, ’ Slovenia’), (’ Korean’, ’ Koreans’), (’ Nguyen’, ’ Cambod’), (’zzi’, ’Italy’), (’Tai’, ’ Taiwanese’), (’ Jakarta’, ’ Indonesia’), (’ Indonesian’, ’ Indonesia’), (’ Bulgaria’, ’ Bulgarian’), (’ Icelandic’, ’ Iceland’), (’ Koreans’, ’ Korea’), (’ Brazilian’, ’Brazil’), (’ Bulgar’, ’ Bulgarian’), (’ Malays’, ’ Malaysian’), (’oglu’, ’ Ankara’), (’ Bulgarian’, ’ Bulgaria’), (’ Indones’, ’ Malays’), (’ Tai’, ’ Taiwanese’), (’oglu’, ’Turkey’), (’ Janeiro’, ’Brazil’), (’zzi’, ’Italian’), (’ Malays’, ’ Kuala’), (’ Fuk’, ’Japanese’), (’ Indonesian’, ’ Jakarta’), (’ Taiwan’, ’ Taiwanese’), (’oglu’, ’ Erdogan’), (’ Nguyen’, ’ Viet’), (’ Filipino’, ’ Philippine’), (’ Indonesia’, ’ Jakarta’), (’ Jong’, ’ Koreans’), (’ Duterte’, ’ Filipino’), (’ Azerbai’, ’ Azerbaijan’), (’ Bulgarian’, ’ Bulgar’) C.1.4 BRITISH SPELLING Layer 19 Head 4 (’ Whilst’, ’ realise’), (’ Whilst’, ’ Whilst’), (’ Whilst’, ’ realised’), (’ Whilst’, ’ organise’), (’ Whilst’, ’ recognise’), (’ Whilst’, ’ civilisation’), (’ Whilst’, ’ organisation’), (’ Whilst’, ’ whilst’), (’ Whilst’, ’ organising’), (’ Whilst’, ’ organised’), (’ Whilst’, ’ organis’), (’ Whilst’, ’ util’), (’ Whilst’, ’ apologise’), (’ Whilst’, ’ emphas’), (’ Whilst’, ’ analyse’), (’ Whilst’, ’ organisations’), (’ Whilst’, ’ recognised’), (’ Whilst’, ’ flavours’), (’ Whilst’, ’ colour’), (’ Whilst’, ’colour’), (’ Whilst’, ’ Nasa’), (’ Whilst’, ’ Nato’), (’ Whilst’, ’ analys’), (’ Whilst’, ’ flavour’), (’ Whilst’, ’ colourful’), (’ Whilst’, ’ colours’), (’ organising’, ’ realise’), (’ Whilst’, ’ behavioural’), (’ Whilst’, ’ coloured’), (’ Whilst’, ’ learnt’), (’ Whilst’, ’ favourable’), (’ Whilst’, ’isation’), (’ Whilst’, ’ programmes’), (’ organis’, ’ realise’), (’ Whilst’, ’ authorised’), (’ Whilst’, ’ practise’), (’ Whilst’, ’ criticised’), (’ Whilst’, ’ organisers’), (’ organising’, ’ organise’), (’ Whilst’, ’ analysed’), (’ Whilst’, ’ programme’), (’ Whilst’, ’ behaviours’), (’ Whilst’, ’ humour’), (’ Whilst’, ’isations’), (’ Whilst’, ’ tyres’), (’ Whilst’, ’ aluminium’), (’ organised’, ’ realise’), (’ Whilst’, ’ favour’), (’ Whilst’, ’ ageing’), (’ organis’, ’ organise’) C.1.5 RELATED WORDS Layer 13 Head 8* (’ mirac’, ’ miraculous’), (’ mirac’, ’ miracle’), (’ nuanced’, ’ nuance’), (’Better’, ’ smarter’), (’ equitable’, ’ healthier’), (’ liberating’, ’ liberated’), (’ unaffected’, ’ untouched’), (’ equitable’, ’ unbiased’), (’ inconsistent’, ’failed’), (’ emanc’, ’ liberated’), (’ equitable’, ’ humane’), (’ liberated’, ’ liberating’), (’ incompatible’, ’failed’), (’ mirac’, ’ miracles’), (’ consensual’, ’ peacefully’), (’ uncond’, ’ unconditional’), (’ unexpected’, ’ unexpectedly’), (’ unconditional’, ’ untouched’), (’Better’, ’ healthier’), (’ unexpectedly’, ’ unexpected’), (’ graceful’, ’ peacefully’), (’ emanc’, ’ emancipation’), (’ effortlessly’, ’ seamlessly’), (’ honorable’, ’ peacefully’), (’ unconditional’, ’ uncond’), (’ rubbish’, ’ excuses’), (’ emanc’, ’ liberating’), (’ equitable’, ’ peacefully’), (’ Feather’, ’ gracious’), (’ emancipation’, ’ liberated’), (’ nuanced’, ’ nuances’), (’icable’, ’ avoids’), (’ liberated’, ’ 
freeing’), (’ liberating’, ’ freeing’), (’ inconsistent’, ’ lousy’), (’ lousy’, ’failed’), (’ unconditional’, ’ unaffected’), (’ equitable’, ’ivable’), (’ equitable’, ’Honest’), (’erning’, ’ principled’), (’ survival’, ’surv’), (’ocre’, ’ lackluster’), (’ equitable’, ’ liberating’), (’Bah’, ’Instead’), (’ incompatible’, ’ inappropriate ’), (’ emancipation’, ’ emanc’), (’ unchanged’, ’ unaffected’), (’ peacefully’, ’ peaceful’), (’ equitable’, ’ safer’), (’ unconditional’, ’ uninterrupted ’) Layer 12 Head 14* (’ perished’, ’ died’), (’ perished’, ’ dies’), (’ testify’, ’ testifying’), (’ intervened’, ’ interven’), (’ advises’, ’ advising’), (’ disbanded’, ’ disband’), (’lost’, ’ perished’), (’ died’, ’ perished’), (’ applauded’, ’ applaud’), (’ dictates’, ’ dictate’), (’ prev’, ’ prevailed’), (’ advise’, ’ advising’), (’shed’, ’thood’), (’Reviewed’, ’orsi’), (’ dies’, ’ perished’), (’published’, ’ publishes’), (’ prevailed’, ’ prevail’), (’ died’, ’ dies’), (’ testified’, ’ testifying’), (’ testifying’, ’ testify’), (’ dictates’, ’ governs’), (’ complicit’, ’ complicity’), (’ dictated’, ’ dictate’), (’enough’, ’CHO’), (’ skelet’, ’independence’), (’ Recomm’, ’ prescribe’), (’essential’, ’ perished’), (’noticed’, ’CHO’), (’avorable’, ’ approving’), (’ perish’, ’ perished’), (’ overseeing’, ’ oversee’), (’ skelet’, ’shed’), (’EY’, ’chart’), (’ presiding’, ’ overseeing’), (’ fundament’, ’pees’), (’ sanction’, ’appro’), (’ prevail’, ’ prevailed’), (’ governs’, ’ regulates’), (’tails’, ’shed’), (’ Period’, ’chart’), (’lihood’, ’hower’), (’ prev’, ’ prevail’), (’ aids’, ’helps’), (’ dictated’, ’ dict’), (’ dictated’, ’ dictates’), (’ Dise’, ’itta’), (’REC’, ’CHO’), (’exclusive’, ’ORTS’), (’ Helpful’, ’helps’), (’bart’, ’ciples’) Layer 14 Head 1* (’ misunderstand’, ’ incorrectly’) , (’ Proper’, ’ properly’), (’ inaccur’, ’ incorrectly’), (’ misunderstand’, ’ wrongly’), (’ misinterpret’, ’ incorrectly’), (’ incorrect’, ’ incorrectly’), (’ mistakes’, ’ incorrectly’), (’ misunderstanding’, ’ incorrectly’), (’ proper’, ’ properly’), (’fail’, ’ incorrectly’), (’ faulty’, ’ incorrectly’), (’ misrepresent’, ’ incorrectly’), (’ failing’, ’ fails’), (’ inaccurate’, ’ incorrectly’), (’ errors’, ’ incorrectly’), (’ harmful’, ’ Worse’), (’ misunderstand’, ’ wrong’), (’ misunderstand’, ’ improperly’), (’wrong’, ’ incorrectly’), (’ harmful’, ’ incorrectly’), (’ mistake’, ’ incorrectly’), (’ mis’, ’ incorrectly’), (’fail’, ’ fails’), (’ detrimental’, ’ Worse’), (’ rightful’, ’ properly’), (’ misunderstand’, ’ inappropriately’), (’ harmful’, ’ unnecessarily’), (’ neglect’, ’ unnecessarily’), (’ correctly’, ’ properly’), (’ Worst’, ’ Worse’), (’ failure’, ’ fails’), (’ satisfactory’, ’ adequately’), (’ defective’, ’ incorrectly’), (’ misunderstand’, ’ mistakenly’), (’ harming’, ’ Worse’), (’ mishand’, ’ incorrectly’), (’adequ’, ’ adequately’), (’ misuse’, ’ incorrectly’), (’Failure’, ’ fails’), (’ hurts’, ’ Worse’), (’ misunderstand’, ’wrong’), (’ mistakenly’, ’ incorrectly’), (’ failures’, ’ fails’), (’ adequate’, ’ adequately’), (’ properly’, ’ correctly’), (’ hurting’, ’ Worse’), (’ Proper’, ’ correctly’), (’ fail’, ’ fails’), (’ mistaken’, ’ incorrectly’), (’ harming’, ’ adversely’) Layer 14 Head 13* (’ editors’, ’ editorial’), (’ broadcasters’, ’ broadcasting’) , (’ broadcasting’, ’ broadcasts’), (’ broadcast’, ’ broadcasts’), (’ Broadcasting’, ’ broadcasters’) , (’ editors’, ’ Editorial’), (’ broadcasters’, ’ broadcast’), (’ Broadcasting’, ’ broadcast’), (’ lectures’, ’ lecture’), (’ Broadcast’, ’ 
broadcasting’), (’ broadcasters’, ’ broadcaster’), (’ broadcasters’, ’ broadcasts’), (’ Publishers’, ’ publishing’), (’ broadcasting’, ’ broadcast’), (’ broadcasters’, ’ Broadcasting’) , (’ Publishers’, ’ Publishing’), (’ lecture’, ’ lectures’), (’ Editors’, ’ editorial’), (’ broadcast’, ’ broadcasting’), (’ Broadcasting’, ’ broadcasts’), (’ broadcasting’, ’ broadcasters’) , (’ journalism’, ’ journalistic’), (’reports’, ’Journal’), (’ Broadcast’, ’ Broadcasting’), (’ Publishers’, ’Publisher’), (’azeera’, ’ Broadcasting’), (’Reporting’, ’Journal’), (’ journalistic’, ’ journalism’), (’ Broadcasting’, ’ broadcaster’), (’ broadcasting’, ’ broadcaster’), (’ broadcaster’, ’ broadcasting’), (’ editors’, ’ publication’), (’ journalism’, ’journal’), (’ Journalists’, ’Journal’), (’ documentary’, ’ documentaries’) , (’ filming’, ’ filmed’), (’ publishers’, ’ publishing’), (’ journalism’, ’Journal’), (’ Broadcast’, ’ broadcasts’), (’ broadcast’, ’ broadcasters’), (’ articles’, ’Journal’), (’ reporting’, ’reports’), (’ manuscripts’, ’ manuscript’), (’ publish’, ’ publishing’), (’azeera’, ’ broadcasters’), (’ Publishers’, ’ publication’), (’ Publishers’, ’ publications’), (’ newspapers’, ’ Newsp’), (’ Broadcast’, ’ broadcasters’), (’ Readers’, ’Journal’) C.2 QUERY-KEY MATRICES Layer 22 Head 1 (’ usual’, ’ usual’), (’ occasional’, ’ occasional’), (’ aforementioned’, ’ aforementioned’), (’ general’, ’ usual’), (’ usual’, ’ slightest’), (’agn’, ’ealous’), (’ traditional’, ’ usual’), (’ free’, ’amina’), (’ major’, ’ major’), (’ frequent’, ’ occasional’), (’ generous’, ’ generous’), (’ free’, ’lam’), (’ regular’, ’ usual’), (’ standard’, ’ usual’), (’ main’, ’ usual’), (’ complete’, ’ Finished’), (’ main’, ’liest’), (’ traditional’, ’ traditional’), (’ latest’, ’ aforementioned’), (’ current’, ’ aforementioned’), (’ normal’, ’ usual’), (’ dominant’, ’ dominant’), (’ free’, ’ministic’), (’ brief’, ’ brief’), (’ biggest’, ’liest’), (’usual’, ’ usual’), (’ rash’, ’ rash’), (’ regular’, ’ occasional’), (’ specialized’, ’ specialized’), (’ free’, ’iosis’), (’ free’, ’hero’), (’ specialty’, ’ specialty’), (’ general’, ’iosis’), (’ nearby’, ’ nearby’), (’ best’, ’liest’), (’ officially’, ’ formal’), (’ immediate’, ’mediate’), (’ special’, ’ ultimate’), (’ free’, ’otropic’), (’ rigorous’, ’ comparative’), (’ actual’, ’ slightest’), (’ complete’, ’ comparative’), (’ typical’, ’ usual’), (’ modern’, ’ modern’), (’ best’, ’ smartest’), (’ free’, ’ free’), (’ highest’, ’ widest’), (’ specialist’, ’ specialist’), (’ appropriate’, ’ slightest’), (’ usual’, ’liest’) Layer 0 Head 9 (’59’, ’27’), (’212’, ’39’), (’212’, ’38’), (’217’, ’39’), (’37’, ’27’), (’59’, ’26’), (’54’, ’88’), (’156’, ’39’), (’212’, ’79’), (’59’, ’28’), (’57’, ’27’), (’212’, ’57’), (’156’, ’29’), (’36’, ’27’), (’217’, ’79’), (’59’, ’38’), (’63’, ’27’), (’72’, ’39’), (’57’, ’26’), (’57’, ’34’), (’59’, ’34’), (’156’, ’27’), (’91’, ’27’), (’156’, ’38’), (’63’, ’26’), (’59’, ’25’), (’138’, ’27’), (’217’, ’38’), (’72’, ’27’), (’54’, ’27’), (’36’, ’29’), (’72’, ’26’), (’307’, ’39’), (’37’, ’26’), (’217’, ’57’), (’37’, ’29’), (’54’, ’38’), (’59’, ’29’), (’37’, ’28’), (’307’, ’38’), (’57’, ’29’), (’63’, ’29’), (’71’, ’27’), (’138’, ’78’), (’59’, ’88’), (’89’, ’27’), (’561’, ’79’), (’212’, ’29’), (’183’, ’27’), (’54’, ’29’) Layer 17 Head 6* (’ legally’, ’ legal’), (’ legal’, ’ sentencing’), (’ legal’, ’ arbitration’), (’ boycot’, ’ boycott’), (’ legal’, ’ criminal’), (’ legal’, ’ Judicial’), (’ legal’, ’ rulings’), (’ judicial’, ’ sentencing’), (’ marketing’, ’ 
advertising’), (’ legal’, ’ confidential’), (’ protesting’, ’ protest’), (’ recruited’, ’ recruit’), (’ recruited’, ’ recruits’), (’ judicial’, ’ criminal’), (’ legal’, ’ exemptions’), (’ demographics’, ’ demographic’), (’ boycott’, ’ boycot’), (’ sentencing’, ’ criminal’), (’ recruitment’, ’ recruits’), (’ recruitment’, ’ recruit’), (’ Constitutional’, ’ sentencing’) , (’ Legal’, ’ sentencing’), (’ constitutional’, ’ sentencing’) , (’ legal’, ’ subpoena’), (’ injury’, ’ injuries’), (’ FOIA’, ’ confidential’), (’ legal’, ’ licenses’), (’ donation’, ’ donations’), (’ disclosure’, ’ confidential’), (’ negotiation’, ’ negotiating’), (’ Judicial’, ’ legal’), (’ legally’, ’ criminal’), (’ legally’, ’ confidential’), (’ legal’, ’ jur’), (’ legal’, ’ enforcement’), (’ legal’, ’ lawyers’), (’ legally’, ’ enforcement’), (’ recruitment’, ’ recruiting’), (’ recruiting’, ’ recruit’), (’ criminal’, ’ sentencing’), (’ legal’, ’ attorneys’), (’ negotiations’, ’ negotiating’), (’ legally’, ’ arbitration’), (’ recruited’, ’ recruiting’), (’ legally’, ’ exemptions’), (’ legal’, ’ judicial’), (’ voting’, ’ Vote’), (’ negotiated’, ’ negotiating’), (’ legislative’, ’ veto’), (’ fund
1. What is the focus of the paper regarding transformers in embedding space? 2. What are the strengths and weaknesses of the paper, particularly in terms of its contribution and technical aspects? 3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? 4. Do you have any questions or concerns regarding the paper's analysis, results, or conclusions?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper The paper attempts to analyze transformers in embedding space. In particular, this paper extends the approach in the existing work of Elhage et al. [2021] to project the parameters of a transformer into an embedding space. This is done by multiplying the parameters by the embedding matrix in the transformer. To project the embeddings back to the parameter space, the paper uses the transpose of the embedding matrix to approximate its right inverse. Empirically, it examines the top-k pairs of vocabulary items for the parameters (e.g., heads) and latent space in a GPT-2 model. It also presents some interpretation of the fine-tuning process. Finally, the paper shows that it can align the embedding space of different models and shows zero-shot stitching results. Strengths And Weaknesses Strength The problem is important and interesting. Weakness As mentioned in its main text, the paper's main contribution is to extend the prior work of Elhage et al. [2021] from 2-layer models to multi-layer models. However, it is not clear whether it is technically nontrivial. I did not understand why the parameters in a multi-layer model could not be projected into embedding space. To project the embeddings back to the parameter space, the paper uses the transpose of the embedding matrix as an approximation of its right inverse. How accurate is the approximation? The paper claims that it presents a theoretical analysis. However, I did not see a precise statement and proof of the "theory" in the paper. The results in Sec. 4 and Sec. 5 are not that surprising to me. In fact, it is expected that well-trained transformers learn semantic information in their parameters and that different layers in the model serve different roles, while two models on the same task may have corresponding modules. Similar things have been investigated in other neural networks and shallow transformers. Are there any new insights that we can obtain from the interpretation? Sec. 4 only considers GPT-2 as an example to validate the proposed method. It is better to analyze more models (e.g., with different model sizes or trained on different datasets) to draw a conclusion. Clarity, Quality, Novelty And Reproducibility Clarity: the clarity of the paper should be significantly improved. A standard machine learning paper should treat "theory" more seriously. In particular, please at least present and prove your theory rigorously so that readers can understand what your theory is and check the proof. Quality & Novelty: the paper did not emphasize the technical challenge of extending the prior work of Elhage et al. [2021] from 2-layer models to multi-layer models, and I did not get it either. So the significance and novelty are limited. Reproducibility: it seems reproducible to me.
ICLR
Title Analyzing Transformers in Embedding Space Abstract Understanding Transformer-based models has attracted significant attention, as they lie at the heart of recent technological advances across machine learning. While most interpretability methods rely on running models over inputs, recent work has shown that a zero-pass approach, where parameters are interpreted directly without a forward/backward pass, is feasible for some Transformer parameters, and for two-layer attention networks. In this work, we present a theoretical analysis where all parameters of a trained Transformer are interpreted by projecting them into the embedding space, that is, the space of vocabulary items they operate on. We derive a simple theoretical framework to support our arguments and provide ample evidence for its validity. First, an empirical analysis showing that parameters of both pretrained and fine-tuned models can be interpreted in embedding space. Second, we present two applications of our framework: (a) aligning the parameters of different models that share a vocabulary, and (b) constructing a classifier without training by “translating” the parameters of a fine-tuned classifier to parameters of a different model that was only pretrained. Overall, our findings open the door to interpretation methods that, at least in part, abstract away from model specifics and operate in the embedding space only. 1 INTRODUCTION Transformer-based models [Vaswani et al., 2017] currently dominate Natural Language Processing [Devlin et al., 2018; Radford et al., 2019; Zhang et al., 2022] as well as many other fields of machine learning [Dosovitskiy et al., 2020; Chen et al., 2020; Baevski et al., 2020]. Consequently, understanding their inner workings has been a topic of great interest. Typically, work on interpreting Transformers relies on feeding inputs to the model and analyzing the resulting activations [Adi et al., 2016; Shi et al., 2016; Clark et al., 2019]. Thus, interpretation involves an expensive forward, and sometimes also a backward pass, over multiple inputs. Moreover, such interpretation methods are conditioned on the input, and are not guaranteed to generalize to all inputs. In the evolving literature on static interpretation, i.e., without forward or backward passes, Geva et al. [2022b] showed that the value vectors of the Transformer feed-forward module (the second layer of the feed-forward network) can be interpreted by projecting them into the embedding space, i.e., multiplying them by the embedding matrix to obtain a representation over vocabulary items. Elhage et al. [2021] have shown that in a 2-layer attention network, weight matrices can be interpreted in the embedding space as well. In this work, we extend the theoretical analysis and findings of Elhage et al. [2021] and Geva et al. [2022b], and present a zero-pass framework to understand the behaviour of Transformers. Concretely, we interpret all weights of a pretrained language model (LM) in embedding space, including both keys and values of the feed-forward module as well as all attention parameters. Our theory relies on a simple observation. Since Geva et al. [2022b] have shown that one can project hidden states to the embedding space via the embedding matrix, we can extend this to other parts of the model by projecting to the embedding space and then projecting back by multiplying with a right-inverse of the embedding matrix. Thus, we can recast inner products in the model as inner products in embedding space.
Viewing inner products in this way, we can interpret such products as interactions between pairs of vocabulary items.1 1We refer to the unique items of the vocabulary as vocabulary items, and to the (possibly duplicate) elements of a tokenized input as tokens. This applies to (a) interactions between attention queries and keys, as well as to (b) interactions between attention value vectors and the parameters that project them at the output of the attention module. Taking this perspective to an extreme, one can view Transformers as operating implicitly in the embedding space. This entails the existence of a single linear space that depends solely on the tokenizer, in which parameters of different Transformers can be compared. Thus, one can use the embedding space to compare and transfer information across different models that share a tokenizer. We provide extensive empirical evidence for the credibility of our proposal. On the interpretation front (Fig. 1, Left), we provide qualitative and quantitative evidence that Transformer parameters can be interpreted in embedding space. We also show that when fine-tuning a pretrained LM on a sentiment analysis task (over movie reviews), projecting changes in parameters into embedding space yields words that characterize sentiment towards movies. Second (Fig. 1, Center), we show that given two distinct instances of BERT pretrained with different random seeds [Sellam et al., 2022], we can align layers of the two instances by casting their weights into the embedding space. We find that indeed layer i of the first instance aligns well to layer i of the second instance, showing that the different BERT instances converge to a semantically-similar solution. Last (Fig. 1, Right), we take a model fine-tuned on a sentiment analysis task and “transfer” the learned weights to a different model that was only pretrained, by going through the embedding spaces of the two models. We show that in 30% of the cases, this procedure, termed stitching, results in a classifier that reaches an impressive accuracy of 70% on the IMDB benchmark [Maas et al., 2011] without any training. Overall, our findings suggest that analyzing Transformers in embedding space is fruitful both for interpretability and as a tool to relate different models that share a vocabulary, and opens the door to interpretation methods that operate in embedding space only. Our code is available at https://anonymized. 2 BACKGROUND We now present the main components of the Transformer [Vaswani et al., 2017] relevant to our analysis. We discuss the residual stream view of Transformers, and recapitulate a view of the attention layer parameters as interaction matrices $W_{VO}$ and $W_{QK}$ [Elhage et al., 2021]. Similar to Elhage et al. [2021], we exclude biases and layer normalization from our analysis. 2.1 TRANSFORMER ARCHITECTURE The Transformer consists of a stack of layers, each of which includes an attention module followed by a Feed-Forward (FF) module. All inputs and outputs are sequences of $N$ vectors of dimensionality $d$. The Attention Module takes as input a sequence of representations $X \in \mathbb{R}^{N\times d}$, and each layer $L$ is parameterized by four matrices $W^{(L)}_Q, W^{(L)}_K, W^{(L)}_V, W^{(L)}_O \in \mathbb{R}^{d\times d}$ (we henceforth omit the layer superscript for brevity). The input $X$ is projected to produce queries, keys, and values: $Q_{att} = XW_Q$, $K_{att} = XW_K$, $V_{att} = XW_V$. Each one of $Q_{att}, K_{att}, V_{att}$ is split along the columns into $H$ different heads of dimensionality $\mathbb{R}^{N\times\frac{d}{H}}$, denoted by $Q^i_{att}, K^i_{att}, V^i_{att}$ respectively.
We then compute $H$ attention maps: $A^i = \mathrm{softmax}\!\left(\frac{Q^i_{att}K^{i\top}_{att}}{\sqrt{d/H}} + M\right) \in \mathbb{R}^{N\times N}$, where $M \in \mathbb{R}^{N\times N}$ is the attention mask. Each attention map is applied to the corresponding value head as $A^i V^i_{att}$, results are concatenated along columns and projected via $W_O$. The input to the module is added via a residual connection, and thus the attention module's output is: $X + \mathrm{Concat}\left[A^1 V^1_{att}, \ldots, A^i V^i_{att}, \ldots, A^H V^H_{att}\right] W_O$. (1) The FF Module is a two-layer neural network, applied to each position independently. Following past terminology [Sukhbaatar et al., 2019; Geva et al., 2020], weights of the first layer are called FF keys and weights of the second layer FF values. This is an analogy to attention, as the FF module too can be expressed as $f(QK^\top)V$, where $f$ is the activation function, $Q \in \mathbb{R}^{N\times d}$ is the output of the attention module and the input to the FF module, and $K, V \in \mathbb{R}^{d_{ff}\times d}$ are the weights of the first and second layers of the FF module. Unlike attention, keys and values are learnable parameters. The output of the FF module is added to the output of the attention module to form the output of the layer via a residual connection. The output of the $i$-th layer is called the $i$-th hidden state. Embedding Matrix To process sequences of discrete tokens, Transformers use an embedding matrix $E \in \mathbb{R}^{d\times e}$ that provides a $d$-dimensional representation to vocabulary items before entering the first Transformer layer. When training Transformers with a language modeling objective, the same embedding matrix $E$ is often used [Press and Wolf, 2016] to take the output of the last Transformer layer and project it back to the vocabulary dimension, i.e., into the embedding space. In this work, we will interpret all components of the Transformer model in the embedding space. 2.2 THE RESIDUAL STREAM We rely on a useful view of the Transformer through its residual connections proposed by Elhage et al. [2021].2 Specifically, each layer takes a hidden state as input and adds information to the hidden state through its residual connection. Under this view, the hidden state is a residual stream passed along the layers, from which information is read, and to which information is written at each layer. Elhage et al. [2021] and Geva et al. [2022b] observed that the residual stream is often barely updated in the last layers, and thus the final prediction is determined in early layers and the hidden state is mostly passed through the later layers. An exciting consequence of the residual stream view is that we can project hidden states in every layer into embedding space by multiplying the hidden state with the embedding matrix $E$, treating the hidden state as if it were the output of the last layer. Geva et al. [2022a] used this approach to interpret the prediction of Transformer-based language models, and we follow a similar approach. 2.3 $W_{QK}$ AND $W_{VO}$ Following Elhage et al. [2021], we describe the attention module in terms of interaction matrices $W_{QK}$ and $W_{VO}$ which will be later used in our theoretical derivation. The computation of the attention module (§2.1) can be re-interpreted as follows. The attention projection matrices $W_Q, W_K, W_V$ can be split along the column axis to $H$ equal parts denoted by $W^i_Q, W^i_K, W^i_V \in \mathbb{R}^{d\times\frac{d}{H}}$ for $1 \le i \le H$. Similarly, the attention output matrix $W_O$ can be split along the row axis into $H$ heads, $W^i_O \in \mathbb{R}^{d/H\times d}$. We define the interaction matrices as $W^i_{QK} := W^i_Q W^{i\top}_K \in \mathbb{R}^{d\times d}$ and $W^i_{VO} := W^i_V W^i_O \in \mathbb{R}^{d\times d}$. 2Though earlier mentions include nostalgebraist [2020].
Importantly, W iQK,W i VO are input-independent. Intuitively, WQK encodes the amount of attention between pairs of tokens. Similarly, in W iVO, the matrices WV and WO can be viewed as a transition matrix that determines how attending to certain tokens affects the subsequent hidden state. We can restate the attention equations in terms of the interaction matrices. Recall (Eq. 1) that the output of the i’th head of the attention module is AiV iatt and the final output of the attention module is (without the residual connection): Concat [ A1V 1att, ..., A iV iatt, ..., A HV Hatt ] WO = H∑ i=1 Ai(XW iV)W i O = H∑ i=1 AiXW iVO. (2) Similarly, the attention map Ai at the i’th head in terms of WQK is (softmax is done row-wise): Ai = softmax ( (XW iQ)(XW i K) T√ d/H +M ) = softmax ( X(W iQK)X T√ d/H +M ) . (3) 3 PROJECTING TRANSFORMER PARAMETERS INTO EMBEDDING SPACE In this section, we propose that Transformer parameters can be projected into embedding space for interpretation purposes. Our results extend Elhage et al. [2021] who obtained similar results for a two-layer attention-only network. We empirically support our framework in §4-§5. Given a matrix A ∈ RN×d, we can project it into embedding space by multiplying by the embedding matrix E as  = AE ∈ RN×e. Let E′ be a right-inverse of E, that is, EE′ = I ∈ Rd×d.3 Then we can reconstruct the original matrix with E′ as A = A(EE′) = ÂE′. We will use this simple identity to reinterpret the model’s operation in embedding space. To simplify our analysis, we ignore layer norms and biases, a standard simplification justified in prior work [Elhage et al., 2021]. In interpretation experiments (§4), we do not use an exact right inverse such as the Moore–Penrose pseudo-inverse [Moore, 1920; Bjerhammar, 1951; Penrose, 1955] but instead use the transpose of the embedding matrix E′ = ET. This is since interpretation involves not only projecting using E′ but also applying a top-k operation where we inspect the vocabulary items with the largest logits. We empirically find that the Moore–Penrose pseudo-inverse does not work well for interpretation due to the top-k operation, and provide a justification and comprehensive empirical evidence in Appendix A. Conversely, ET empirically works well, and we conjecture this is due to the training procedure of LMs where E is used to embed discrete tokens into the hidden state dimension and ET is used to predict a distribution over the vocabulary items from the last hidden state. Attention Module Recall that W iVO := W iVW iO ∈ Rd×d is the interaction matrix between attention values and the output projection matrix for attention head i. By definition, the output of each head is: AiXW iVO = A iX̂E′W iVO. Since the output of the attention module is added to the residual stream, we can assume according to the residual stream view that it is meaningful to project it to the embedding space, similar to FF values. Thus, we expect the sequence of N e-dimensional vectors (AiXW iVO)E = A iX̂(E′W iVOE) to be interpretable. Importantly, the role of A i is just to mix the representations of the updated N input vectors. This is similar to the FF module, where FF values (the parameters of the second layer) are projected into embedding space, and FF keys (parameters of the first layer) determine the coefficients for mixing them. Hence, we can assume that the interpretable components are in the term X̂(E′W iVOE). 
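As a concrete illustration, here is a minimal sketch that builds the embedding-space transition matrix E′W_VO E for a single GPT-2 head, with E′ = E^T as described below in §3, and reads off the highest-scoring vocabulary pairs. It assumes the same HuggingFace GPT-2 layout as above; to keep memory modest, the e × e matrix is restricted to a slice of the vocabulary (our analysis considers all pairs), and the layer, head, top-k size, and slice size are arbitrary choices.

```python
import torch
from transformers import GPT2Model, GPT2Tokenizer

model = GPT2Model.from_pretrained("gpt2-medium")
tok = GPT2Tokenizer.from_pretrained("gpt2-medium")

d, H = model.config.n_embd, model.config.n_head
dh = d // H
layer, head, k, n_vocab = 21, 7, 20, 5000             # arbitrary; n_vocab limits memory

with torch.no_grad():
    We = model.wte.weight[:n_vocab]                   # (n_vocab, d); the paper's E is wte.weight.T
    attn = model.h[layer].attn
    W_V = attn.c_attn.weight[:, 2 * d:]               # (d, d) value projection
    W_O = attn.c_proj.weight                          # (d, d) output projection
    sl = slice(head * dh, (head + 1) * dh)
    W_VO = W_V[:, sl] @ W_O[sl, :]                    # (d, d) interaction matrix for this head

    # Embedding-space transition matrix E' W_VO E with E' = E^T, restricted to n_vocab items.
    M = We @ W_VO @ We.T                              # (n_vocab, n_vocab)

    vals, flat_idx = torch.topk(M.flatten(), k)
    pairs = [(tok.convert_ids_to_tokens(int(i) // n_vocab),
              tok.convert_ids_to_tokens(int(i) % n_vocab))
             for i in flat_idx]
print(pairs)
```

With these indices, the resulting pairs can be compared qualitatively against the Layer 21, Head 7 list shown in Appendix C.1.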
Zooming in on this operation, we see that it takes the previous hidden state in the embedding space (X̂) and produces an output in the embedding space which will be incorporated into the next hidden state through the residual stream. Thus, E′W iVOE is a transition matrix that takes a representation the embedding space and outputs a new representation in the same space. Similarly, the matrix W iQK can be viewed as a bilinear map (Eq. 3). To interpret it in embedding space, we perform the following operation with E′: XW iQKX T = (XEE′)W iQK(XEE ′)T = (XE)E′W iQKE ′T(XE)T = X̂(E′W iQKE ′T)X̂T. 3E′ exists if d ≤ e and E is full-rank. Therefore, the interaction between tokens at different positions is determined by an e×e matrix that expresses the interaction between pairs of vocabulary items. FF Module Geva et al. [2022b] showed that FF value vectors V ∈ Rdff×d are meaningful when projected into embedding space, i.e., for a FF value vector v ∈ Rd, vE ∈ Re is interpretable (see §2.1). In vectorized form, the rows of V E ∈ Rdff×e are interpretable. On the other hand, the keys K of the FF layer are multiplied on the left by the output of the attention module, which are the queries of the FF layer. Denoting the output of the attention module by Q, we can write this product as QKT = Q̂E′KT = Q̂(KE′T)T. Because Q is a hidden state, we assume according to the residual stream view that Q̂ is interpretable in embedding space. When multiplying Q̂ by KE′T, we are capturing the interaction in embedding space between each query and key, and thus expect KE′T to be interpretable in embedding space as well. Overall, FF keys and values are intimately connected – the i-th key controls the coefficient of the i-th value, so we expect their interpretation to be related. While not central to this work, we empirically show that key-value pairs in the FF module are similar in embedding space in Appendix B.1. Subheads Another way to interpret the matrices W iVO and W iQK is through the subhead view. We use the following identity: AB = ∑b j=1 A:,jBj,:, which holds for arbitrary matrices A ∈ Ra×b, B ∈ Rb×c, where A:,j ∈ Ra×1 are the columns of the matrix A and Bj,: ∈ R1×c are the rows of the matrix B. Thus, we can decompose W iVO and W i QK into a sum of d H rank-1 matrices: W iVO = d H∑ j=1 W i,jV W i,j O , W i QK = d H∑ j=1 W i,jQ W i,j K T . where W i,jQ ,W i,j K ,W i,j V ∈ Rd×1 are columns of W iQ,W iK,W iV respectively, and W i,j O ∈ R1×d are the rows of W iO. We call these vectors subheads. This view is useful since it allows us to interpret subheads directly by multiplying them with the embedding matrix E. Moreover, it shows a parallel between interaction matrices in the attention module and the FF module. Just like the FF module includes key-value pairs as described above, for a given head, its interaction matrices are a sum of interactions between pairs of subheads (indexed by j), which are likely to be related in embedding space. We show this is indeed empirically the case for pairs of subheads in Appendix B.1. We summarize our approach for projecting the different components of the Transformer into embedding space in Table 1. 4 INTERPRETABILITY EXPERIMENTS In this section, we provide empirical evidence for the viability of our approach as a tool for interpreting Transformer parameters. 4.1 PARAMETER INTERPRETATION EXAMPLES We take GPT-2 medium [Radford et al., 2019] and manually analyze its parameters. GPT-2 medium has a total of 384 attention heads (24 layers and 16 heads per layer). 
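Complementing the examples that follow, the FF-module projection described above amounts to a single product with the embedding matrix. A minimal sketch, again assuming the HuggingFace GPT-2 layout (mlp.c_fc holds the FF keys, mlp.c_proj the FF values); the layer, row index, and top-k size are arbitrary:

```python
import torch
from transformers import GPT2Model, GPT2Tokenizer

model = GPT2Model.from_pretrained("gpt2-medium")
tok = GPT2Tokenizer.from_pretrained("gpt2-medium")
We = model.wte.weight.detach()                 # (e, d); rows are vocabulary embeddings

layer, row, k = 18, 123, 20                    # arbitrary layer, FF index, and top-k size
mlp = model.h[layer].mlp
ff_keys = mlp.c_fc.weight.detach().T           # (d_ff, d): row i is the i-th FF key
ff_values = mlp.c_proj.weight.detach()         # (d_ff, d): row i is the i-th FF value

def top_tokens(vec, k=k):
    """Project a d-dimensional parameter vector into embedding space and list its top-k tokens."""
    logits = We @ vec                           # (e,) logits over vocabulary items
    return tok.convert_ids_to_tokens(torch.topk(logits, k).indices.tolist())

print("FF value:", top_tokens(ff_values[row]))
print("FF key:  ", top_tokens(ff_keys[row]))   # the corresponding key, often topically related
```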
We take the embedded transition matrices E′W iVOE for all heads and examine the top-k pairs of vocabulary items. As there are only 384 heads, we manually choose a few heads and present the top-k pairs in Appendix C.1 (k = 50). We observe that different heads capture different types of relations between pairs of vocabulary items including word parts, heads that focus on gender, geography, orthography, particular part-of-speech tags, and various semantic topics. In Appendix C.2 we perform a similar analysis for WQK. Appendix C.3 provides examples of key-value pairs from the FF modules of GPT-2 medium. We show random pairs (k, v) from the set of those pairs such that when looking at the top-100 vocabulary items for k and v, at least 15% overlap. Such pairs account for approximately 5% of all key-value pairs. The examples show how key-value pairs often revolve around similar topics such as media, months, organs, etc. Last, we show we can use embeddings to locate FF values (or keys) related to a particular topic. We take a few vocabulary items related to a certain topic, e.g., [‘cm’, ‘kg’, ‘inches’], average their embeddings,4 and rank all FF values (or keys) based on their dot-product with the average. Appendix C.4 shows a few examples of FF values found with this method that are related to programming, measurements, and animals. 4.2 HIDDEN STATE AND PARAMETERS An advantage of zero-pass interpretation is that it does not require running inputs through the model which is expensive and non-exhaustive. In this section (and this section only), we run a forward pass over inputs and examine if the representations in embedding space of dynamically-computed hidden states are “similar” to the representations of static parameter vectors that are activated. A technical side note: we use GPT-2, which applies layer norm to the Transformer output before projecting it to the embedding space with E. Thus, conservatively, layer norm should be considered as part of the projection operation.5 Empirically however, we observe that projecting parameters directly without layer norm works well, which simplifies our analysis in §3. An exception is when projecting hidden states in this section, where we apply layer norm before projection to improve performance, similar to Geva et al. [2022a]. Experimental Design We use GPT-2 medium and run it over 60 examples from IMDB [Maas et al., 2011]. This provides us with a dynamically-computed hidden state h for every token and at the output of every layer. For the projection ĥ ∈ Re of each such hidden state, we take the projections of the m most active parameter vectors {x̂i}mi=1 in the layer that computed h and check 4We subtract the average embedding µ from E before averaging, which improves interpretability. 5Layer norm consists of standardizing the mean and variance of the input followed by an affine transforma- tion. The latter part can be easily absorbed into E (while adding a bias term). if they cover the dominant vocabulary items of ĥ in embedding space. Specifically, let top-k(wE) be the k vocabulary items with largest logits in embedding space for a vector w ∈ Rd. We compute: Rk(x̂1, ..., x̂m, ĥ) = |top-k(ĥ) ∩ ⋃m i=1 top-k(x̂i)| k , to capture if activated parameter vectors cover the main vocabulary items corresponding to the hidden state. 
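The Rk score itself is straightforward to compute once all vectors have been projected into embedding space; a minimal self-contained sketch, where the vector contents are random stand-ins used only for illustration:

```python
import torch

def top_k_ids(vec_embed, k):
    """Indices of the k vocabulary items with the largest logits for a projected vector."""
    return set(torch.topk(vec_embed, k).indices.tolist())

def r_k(param_vecs_embed, hidden_embed, k=100):
    """Fraction of the projected hidden state's top-k vocabulary items covered by the
    union of the top-k items of the activated (projected) parameter vectors."""
    covered = set()
    for v in param_vecs_embed:
        covered |= top_k_ids(v, k)
    return len(top_k_ids(hidden_embed, k) & covered) / k

e = 50257                                        # GPT-2 vocabulary size
h_hat = torch.randn(e)                           # stand-in for a projected hidden state
x_hats = [torch.randn(e) for _ in range(10)]     # stand-ins for projected active parameters
print(r_k(x_hats, h_hat, k=100))
```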
We find the m most active parameter vectors separately for FF keys (K), FF values (V ), attention value subheads (WV) (see §3), and attention output subheads (WO), where the activation of each parameter vector is determined by the vector’s “coefficient” as follows. For a FF key-value pair (k, v) the coefficient is σ(qTk), where q ∈ Rd is an input to the FF module, and σ is the FF nonlinearity. For attention value-output subhead pairs (v, o) the coefficient is xTv, where x is the input to this component (for attention head i, the input is one of the rows of AiX , see Eq. 2). Results and Discussion Figure 2 presents the Rk score averaged across tokens per layer. As a baseline, we compare Rk of the activated vectors {x̂i}mi=1 with the correctly-aligned hidden state ĥ at the output of the relevant layer (blue bars) against the the Rk when randomly sampling ĥrand from the set of all hidden states (orange bars). We conclude that the representations in embedding space induced by activated parameter vector mirror, at least to some extent, the representations of the hidden states themselves. Appendix §B.2 shows a variant of this experiment, where we compare activated parameters throughout GPT2-medium’s layers to the last hidden state, which produces the logits used for prediction. 4.3 INTERPRETATION OF FINE-TUNED MODELS We now show that we can interpret the changes a model goes through during fune-tuning through the lens of embedding space. We fine-tune the top-3 layers of the 12-layer GPT-2-base with a sequence classification head on IMDB sentiment analysis (binary classification) and compute the difference between the original parameters and the fine-tuned model. We then project the difference of parameter vectors into embedding space and test if change is interpretable w.r.t sentiment analysis. Appendix D shows examples for projected differences randomly sampled from the fine-tuned layers. Frequently, the difference, or its negation, is projected to nouns, adjectives and adverbs that express sentiment for a movie, such as ‘amazing’, ‘masterpiece’, ‘incompetence’, etc. This shows that the differences are indeed projected into vocabulary items that characterize movie reviews’ sentiment. Almost all parameter groups present this behavior, except for V and WO, which curiously are the parameters added to the residual stream. 5 ALIGNING MODELS IN EMBEDDING SPACE Assuming Transformers by and large operate in embedding space leads to an exciting possibility - we can relate different models to one another so long as they share a vocabulary and tokenizer. In §5.1, we show that we can align the layers of BERT models trained with different random seeds. In §5.2, we show the embedding space can be leveraged to “stitch” the parameters of a fine-tuned model to a model that was not fine-tuned. 5.1 LAYER ALIGNMENT Experimental Design Taking our approach to the extreme, the embedding space is a universal space, which depends only on the tokenizer, and in which Transformer parameters and hidden states reside. Consequently, we can align parameter vectors from different models in this space and compare them even if they come from different models, as long as they share a vocabulary. To demonstrate this, we use MultiBERT [Sellam et al., 2022], which contains 25 different instantiations of BERT initialized from different random seeds. We take parameters from two MultiBERT seeds and compute the Pearson correlation between their projection to embedding space. For example, let VA, VB be the FF values of models A and B. 
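A minimal sketch of the procedure detailed in the next paragraph: it projects a subsample of FF values from two BERT instances into embedding space, bins absolute Pearson correlations by layer pair, and matches layers with the Hungarian algorithm. The MultiBERT checkpoint names and the per-layer subsample size are assumptions made for illustration; the experiment itself uses all value vectors.

```python
import torch
from scipy.optimize import linear_sum_assignment
from transformers import BertModel

def layer_values_in_embedding_space(name, per_layer=50):
    """Project a subsample of FF value vectors of every layer into embedding space."""
    model = BertModel.from_pretrained(name)
    We = model.embeddings.word_embeddings.weight.detach()        # (e, d)
    out = []
    for lyr in model.encoder.layer:
        V = lyr.output.dense.weight.detach().T                   # (d_ff, d): rows are FF values
        out.append(V[:per_layer] @ We.T)                         # (per_layer, e) projected values
    return torch.stack(out)                                      # (L, per_layer, e)

# Checkpoint names below are assumptions (MultiBERT seeds on the HuggingFace hub).
A = layer_values_in_embedding_space("google/multiberts-seed_0")
B = layer_values_in_embedding_space("google/multiberts-seed_1")

def standardize(x):
    return (x - x.mean(-1, keepdim=True)) / x.std(-1, keepdim=True)

L = A.shape[0]
S = torch.zeros(L, L)
for i in range(L):
    for j in range(L):
        corr = standardize(A[i]) @ standardize(B[j]).T / A.shape[-1]  # Pearson per value pair
        S[i, j] = corr.abs().mean()                                    # bin by layer pair

row, col = linear_sum_assignment(-S.numpy())   # Hungarian algorithm, maximizing total similarity
print(list(zip(row.tolist(), col.tolist())))   # expected to be close to the identity matching
```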
We can project the values into embedding space: VAEA, VBEB , where EA, EB are the respective embedding matrices, and compute Pearson correlation between projected values. This produces a similarity matrix S̃ ∈ R|VA|×|VB |, where each entry is the correlation coefficient between projected values from the two models. We bin S̃ by layer pairs and average the absolute value of the scores in each bin (different models might encode the same information in different directions, so we use absolute value) to produce a matrix S ∈ RL×L, where L is the number of layers. Specifically, the average (absolute) correlation between vectors that come from layer ℓA in model A and layer ℓB in Model B is registered in entry (ℓA, ℓB) of S. Last, to obtain a one-to-one layer alignment, we use the Hungarian algorithm [Kuhn, 1955], which assigns exactly one layer from the first model to a layer from the second model. The algorithm’s objective is to maximize, given a similarity matrix S, the sum of similarities of the chosen pairs, such that each index in one model is matched with exactly one index in the other. We repeat this for all parameter groups (WQ,WK,WV,WO,K). Results and Discussion Figure 3 (left) shows the resulting alignment. Clearly, parameters from a certain layer in model A tend to align to the same layer in model B across all parameter groups. This suggests that different layers from different models that were trained separately (but with the same training objective and data) serve a similar function. As further evidence, we show that if not projected, the matching appears absolutely random in Figure §3 (right). We show the same results for other seed pairs as well in Appendix B.3. 5.2 ZERO-SHOT STITCHING Model stitching [Lenc and Vedaldi, 2015; Csiszárik et al., 2021; Bansal et al., 2021] is a relatively under-explored feature of neural networks, particularly in NLP. The idea is that different models, sometimes trained on different data and with different architectures, learn representations that can be aligned through a linear transformation, termed stitching. Representations correspond to hidden states , and thus one can learn a transformation matrix from one model’s hidden states to an equivalent hidden state in the other model. Here, we show that going through embedding space one can align the hidden states of two models, i.e., stitch, without training. Given two models, we want to find a linear stitching transformation to align their representation spaces. According to our theory, given a hidden state v ∈ Rd1 from model A, we can project it to the embedding space as vEA, where EA is its embedding matrix. Then, we can re-project to the feature space of model B, with E+B ∈ Re×d2 , where E + B is the Penrose-Moore pseudo-inverse of the embedding matrix EB .6 This transformation can be expressed as multiplication with the kernel KAB := EAE + B ∈ Rd1×d2 . We employ the above approach to take representations of a fine-tuned classifier, A, and stitch them on top of a model B that was only pretrained, to obtain a new classifier based on B. Experimental Design We use the 24-layer GPT-2 medium as model A and 12-layer GPT-2 base model trained in §4.3 as model B. We fine-tune the last three layers of model B on IMDB, as explained in §4.3. Stitching is simple and is performed as follows. Given the sequence of N hidden states HℓA ∈ RN×d1 at the output of layer ℓ of model A (ℓ is a hyperparameter), we apply the stitching layer, which multiplies the hidden states with the kernel, computing HℓAKAB . 
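A minimal sketch of this kernel computation: it pairs GPT-2 medium as model A with GPT-2 base as model B (which share a tokenizer), builds KAB, and applies it to stand-in hidden states; in the actual experiment HℓA are real hidden states of model A and the result is fed to B's fine-tuned top layers, as described next.

```python
import torch
from transformers import GPT2Model

model_a = GPT2Model.from_pretrained("gpt2-medium")    # model A, d1 = 1024
model_b = GPT2Model.from_pretrained("gpt2")            # model B, d2 = 768 (same tokenizer)

Wa = model_a.wte.weight.detach()                       # (e, d1); the paper's E_A is Wa^T
Wb = model_b.wte.weight.detach()                       # (e, d2); the paper's E_B is Wb^T

# K_AB = E_A E_B^+ : up to embedding space with E_A, back down with the pseudo-inverse of E_B.
# (float64 would give a more accurate pseudo-inverse; float32 suffices for a sketch.)
K_AB = Wa.T @ torch.linalg.pinv(Wb).T                  # (d1, d2)

H_a = torch.randn(4, Wa.shape[1])                      # stand-in for hidden states H_A^l of model A
H_b = H_a @ K_AB                                        # (4, d2): stitched states for model B
print(K_AB.shape, H_b.shape)
```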
This results in hidden states HB ∈ RN×d2 , used as input to the three fine-tuned layers from B. 6Since we are not interested in interpretation we use an exact right-inverse and not the transpose. Results and Discussion Stitching produces models with accuracies that are higher than random on IMDB evaluation set, but not consistently. Figure 4 shows the accuracy of stitched models against the layer index from model A over which stitching is performed. Out of 11 random seeds, three models obtained accuracy that is significantly higher than the baseline 50% accuracy, reaching an accuracy of roughly 70%, when stitching is done over the top layers. 6 RELATED WORK Interpreting Transformer is a broad area of research that has attracted much attention in recent years. A large body of work has focused on analyzing hidden representations, mostly through probing [Adi et al., 2016; Shi et al., 2016; Tenney et al., 2019; Rogers et al., 2020]. Voita et al. [2019a] used statistical tools to analyze the evolution of hidden representations throughout layers. Recently, Mickus et al. [2022] proposed to decompose the hidden representations into the contributions of different Transformer components. Unlike these works, we interpret parameters rather than the hidden representations. Another substantial effort has been to interpret specific network components. Previous work analyzed single neurons [Dalvi et al., 2018; Durrani et al., 2020], attention heads [Clark et al., 2019; Voita et al., 2019b], and feedforward values [Geva et al., 2020; Dai et al., 2021; Elhage et al., 2022]. While these works mostly rely on input-dependent neuron activations, we inspect “static” model parameters, and provide a comprehensive view of all Transformer components. Our work is most related to efforts to interpret specific groups of Transformer parameters. Cammarata et al. [2020] made observations about the interpretability of weights of neural networks. Elhage et al. [2021] analyzed 2-layer attention networks. We extend their analysis to multi-layer pre-trained Transformer models. Geva et al. [2020; 2022a;b] interpreted feedforward values in embedding space. We coalesce these lines of work and offer a unified interpretation framework for Transformers in embedding space. 7 DISCUSSION Our work has a few limitations that we care to highlight. First, it focuses on interpreting models through the vocabulary lens. While we have shown evidence for this, it does not preclude other factors from being involved in the computation process. Second, we used E′ = ET, but future research might find variants of E that improve performance. Last, we assume Transformer components can be projected to the embedding space with a single matrix multiplication, but this might depend on model training, e.g., in GPT-2 it involves a layer norm operation as explained in §4.2. Notwithstanding, we believe the benefits of our work overshadow its limitations. We provide a simple and efficient approach, which equips researchers with new tools to interpret Transformer models and relate them to one another. Apart from Elhage et al. [2021], there has been little work pursuing the embedding space approach, and we “sharpen” the tools they laid down and adjust them to existing pre-trained Transformers. Moreover, our framework allows us to view parameters from different models as residents of the same universal embedding space, where they can be compared in model-agnostic fashion. 
We demonstrate two applications of this observation (model alignment and stitching) and argue future work can yield many additional applications. A RETHINKING INTERPRETATION The process of interpreting a vector v in Geva et al. [2022b] proceeds in two steps: first the projection of the vector to the embedding space (vE); then, we use the list of the tokens that were assigned the largest values in the projected vector, i.e.: top-k(vE), as the interpretation of the projected vector. This is reasonable since (a) the most activated coordinates contribute the most when added to the residual stream, and (b) this matches how we eventually decode: we project to the embedding space and consider the top-1 token (or one of the few top tokens, when using beam search). In this work, we interpret inner products and matrix multiplications in the embedding space: given two vectors x, y ∈ Rd, their inner product xTy can be considered in the embedding space by multiplying with E and then by one of its right inverses (e.g., its pseudo-inverse E+ [Moore, 1920; Bjerhammar, 1951; Penrose, 1955]): xTy = xTEE+y = (xTE)(yE+T)T. Assume xE is interpretable in the embedding space, crudely meaning that it represents logits over vocabulary items. We expect y, which interacts with x, to also be interpretable in the embedding space. Consequently, we would like to take yE+T to be the projection of y. However, this projection does not take into account the subsequent interpretation using top-k. The projected vector yE+T might be harder to interpret in terms of its most activated tokens. To alleviate this problem, we need a different “inverse” matrix E′ that works well when considering the top-k operation. Formally, we want an E′ with the following “robustness” guarantee: keep-k(xE)Tkeep-k(yE′) ≈ xTy, where keep-k(v) is equal to v for coordinates whose absolute value is in the top-k, and zero elsewhere. This is a stronger notion of inverse – not only is EE′ ≈ I , but even when truncating the vector in the embedding space we can still reconstruct it with E′. We claim that ET is a decent instantiation of E′ and provide some empirical evidence. While a substantive line of work [Ethayarajh, 2019; Gao et al., 2019; Wang et al., 2020; Rudman et al., 2021] has shown that embedding matrices are not isotropic (an isotropic matrix E has to satisfy EET = αI for some scalar α), we show that it is isotropic enough to make ET a legitimate compromise. We randomly sample 300 vectors drawn from the normal distribution N (0, 1), and compute for every pair x, y the cosine similarity between xTy and keep-k(xE)Tkeep-k(yE′) for k = 1000, and then average over all pairs. We repeat this for E′ ∈ {E+T, E} and obtain a score of 0.10 for E+T, and 0.83 for E, showing the E is better under when using top-k. More globally, we compare E′ ∈ {E+T, E} for k ∈ {10, 50, 100, 200, 300, 500} with three distributions: - x, y drawn from the normal N (0, 1) distribution - x, y chosen randomly from the FF values - x, y drawn from hidden states along Transformer computations. In Figure 5 (Left) we show the results, where dashed lines represent E+ and solid lines represent ET. For small values of k (used for interpretation), ET is superior to E+ across all distributions. Interestingly, the hidden state distribution is the only distribution where E+ has similar performance to ET. Curiously, when looking at higher values of k the trend is reversed (k = {512, 1024, 2048, 4096, 10000, 15000, 20000, 30000}) - see Figure 5 (Right). 
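A sketch of this robustness check with a random Gaussian stand-in for the embedding matrix; because a random E is nearly isotropic, the absolute numbers will not match the 0.10 vs. 0.83 reported above (which require GPT-2's actual E), and the cosine-similarity aggregation over pairs below is one reasonable reading of the averaging described above.

```python
import numpy as np

rng = np.random.default_rng(0)
d, e, k, n_pairs = 64, 2000, 100, 300
E = rng.normal(size=(d, e))                            # stand-in for the embedding matrix
candidates = {"E": E, "E^{+T}": np.linalg.pinv(E).T}   # candidate E' matrices (both d x e)

def keep_k(v, k):
    """Zero all coordinates except the k with the largest absolute value."""
    out = np.zeros_like(v)
    top = np.argsort(-np.abs(v))[:k]
    out[top] = v[top]
    return out

X = rng.normal(size=(n_pairs, d))
Y = rng.normal(size=(n_pairs, d))
true = np.einsum("ij,ij->i", X, Y)                     # exact inner products x^T y

for name, Ep in candidates.items():
    approx = np.array([keep_k(x @ E, k) @ keep_k(y @ Ep, k) for x, y in zip(X, Y)])
    # cosine similarity between the vector of exact products and the vector of approximations
    cos = true @ approx / (np.linalg.norm(true) * np.linalg.norm(approx))
    print(f"E' = {name}: cosine similarity {cos:.3f}")
```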
This settles the deviation from findings showing embedding matrices are not isotropic, as we see that indeed as k grows, ET becomes an increasingly bad approximate right-inverse of the embedding matrix. The only distribution that keeps high performance with ET is the hidden state distribution, which is an interesting future direction of investigation. B ADDITIONAL MATERIAL B.1 CORRESPONDING PARAMETER PAIRS ARE RELATED We define the following metric applying on vectors after projecting them into the embedding space: Simk(x̂, ŷ) = |top-k(x̂) ∩ top-k(ŷ)| |top-k(x̂) ∪ top-k(ŷ)| where top-k(v) is the set of k top activated indices in the vector v (which correspond to tokens in the embedding space). This metric is the Jaccard index [Jaccard, 1912] applied to the top-k tokens from each vector. In Figure 6, Left, we demonstrate that corresponding FF key and value vectors are more similar (in embedding space) than two random key and value vectors. In Figure 6, Right, we show a similar result for attention value and output vectors. In Figure 6, Bottom, the same analysis in done for attention query and key vectors. This shows that there is a much higher-than-chance relation between corresponding FF keys and values (and the same for attention values and outputs). B.2 FINAL PREDICTION AND PARAMETERS We show that the final prediction of the model is correlated in embedding space with the most activated parameters from each layer. This implies that these objects are germane to the analysis of the final prediction in the embedding space, which in turn suggests that the embedding space is a viable choice for interpreting these vectors. Figure 7 shows that just like §4.2, correspondence is better when hidden states are not randomized, suggesting there parameter interpretations have an impact on the final prediction. B.3 PARAMETER ALIGNMENT PLOTS FOR ADDITIONAL MODEL PAIRS Alignment in embedding space of layers of pairs of BERT models trained with different random seeds for additional model pairs. SEED 1 VS SEED 2 K V WK WQ WV WO K V WK WQ WV WO SEED 2 VS SEED 3 K V WK WQ WV WO K V WK WQ WV WO SEED 3 VS SEED 4 K V WK WQ WV WO K V WK WQ WV WO SEED 4 VS SEED 5 K V WK WQ WV WO K V WK WQ WV WO C EXAMPLE CASES C.1 VALUE-OUTPUT MATRICES Below we show value-output pairs from different heads of GPT-2 Medium. For each head, we show the 50 pairs with largest value in the e × e transition matrix. There are 384 attention heads in GPT-2 medium from which we manually choose a subset. 
Throughout the section some lists were marked with asterisks indicating the way this particular list was created: * - pairs of the form (x, x) were excluded from the list C.1.1 LOW LEVEL LANGUAGE MODELING Layer 21 Head 7* (’FN’, ’NF’), (’ Ramos’, ’Ram’), (’ Hughes’, ’Hug’), (’GR’, ’gran’), (’NF’, ’FN’), (’CL’, ’CLA’), (’ McCain’, ’McC’), (’ Marshall’, ’Marsh’), (’Hug’, ’ Hughes’), (’ Tanner’, ’Tan’), (’NH’, ’nih’), (’NR’, ’NRS’), (’Bow’, ’ Bowman’), (’Marsh’, ’ Marshall’), (’ Jacobs’, ’Jac’), (’ Hayes’, ’Hay’), (’Hay’, ’ Hayes’), (’ McCorm’, ’McC’), (’NR’, ’NI’), (’ Dawson’, ’ sidx’), (’Tan’, ’ Tanner’), (’GR’, ’gra’), (’jac’, ’JA’), (’zo’, ’zos’), (’NF’, ’NI’), (’ McCull’, ’McC’), (’Jac’, ’ Jacobs’), (’ Beet’, ’ Beetle’), (’FG’, ’GF’), (’ja’, ’jas’), (’ Wilkinson’, ’Wil’), (’Ram’, ’ Ramos’), (’GR’, ’GRE’), (’FN’, ’ NF’), (’McC’, ’ McCorm’), (’ Scarborough’, ’Scar’), (’Ba’, ’ Baal’), (’FG’, ’FP’), (’FN’, ’FH’), (’Gar’, ’ Garfield’), (’jac’, ’jas’), (’nut’, ’nuts’), (’ Wis’, ’WI’), (’ Vaughan’, ’ Vaughn’), (’PF’, ’FP’), (’RN’, ’RNA’), (’jac’, ’ Jacobs’), (’FN’, ’FM’), (’Kn’, ’ Knox’), (’nic’, ’NI’) Layer 19 Head 13 (guessing the first letter/consonant of the word) (’senal’, ’ R’), # arsenal (’senal’, ’R’), (’vernment’, ’ G’), # government (’ Madness’, ’ M’), (’ Mayhem’, ’ M’), (’nesday’, ’ W’), # wednesday (’vernment’, ’G’), (’ Madness’, ’M’), (’lace’, ’ N’), # necklace (’nesday’, ’W’), (’senal’, ’Rs’), (’vernment’, ’ g’), (’farious’, ’ N’), # nefarious (’eneg’, ’ C’), (’senal’, ’ r’), (’ruary’, ’ F’), # february (’senal’, ’RIC’), (’ondo’, ’ R’), (’ Mandela’, ’ N’), # nelson (’ Mayhem’, ’M’), (’senal’, ’ RD’), (’estine’, ’ C’), (’vernment’, ’Gs’), (’senal’, ’RF’), (’esis’, ’ N’), (’Reviewed’, ’ N’), (’arette’, ’ C’), # cigarette (’rome’, ’ N’), (’theless’, ’ N’), # nonetheless (’lace’, ’N’), (’DEN’, ’ H’), (’ versa’, ’ V’), (’bably’, ’ P’), # probably (’vernment’, ’GF’), (’vernment’, ’g’), (’vernment’, ’GP’), (’ornia’, ’ C’), # california (’ilipp’, ’ F’), (’umbered’, ’ N’), (’arettes’, ’ C’), (’senal’, ’RS’), (’onsense’, ’ N’), (’senal’, ’RD’), (’senal’, ’RAL’), (’uci’, ’ F’), (’ondo’, ’R’), (’senal’, ’ RI’), (’iday’, ’ H’), # holiday (’senal’, ’ Rx’), (’odor’, ’ F’) Layer 20 Head 9 (’ behalf’, ’On’), (’ behalf’, ’ On’), (’ behalf’, ’ on’), (’ periods’, ’during’), (’ bounds’, ’within’), (’ envelope’, ’ inside’), (’door’, ’outside’), (’ envelope’, ’inside’), (’ regime’, ’ Under’), (’ periods’, ’ during’), (’lihood’, ’ LIKE’), (’ occasions’, ’ on’), (’ regime’, ’Under’), (’door’, ’inside’), (’period’, ’during’), (’lihood’, ’Like’), (’ periods’, ’ During’), (’ envelope’, ’Inside’), (’ sake’, ’for’), (’ doors’, ’ inside’), (’ regime’, ’ under’), (’ behalf’, ’ ON’), (’ purposes’, ’for’), (’ occasions’, ’On’), (’ doors’, ’inside’), (’ basis’, ’ on’), (’ regimes’, ’ Under’), (’doors’, ’outside’), (’ Osc’, ’inside’), (’ periods’, ’During’), (’door’, ’ inside’), (’ regime’, ’ UNDER’), (’ regimes’, ’ under’), (’ regimes’, ’Under’), (’doors’, ’inside’), (’zx’, ’inside’), (’ period’, ’during’), (’ascript’, ’inside’), (’door’, ’Inside’), (’ occasions’, ’ On’), (’ysc’, ’BuyableInstoreAndOnline’) , (’ envelope’, ’ Inside’), (’ pauses’, ’during’), (’ regime’, ’under’), (’ occasion’, ’ on’), (’ doors’, ’outside’), (’ banner’, ’ UNDER’), (’ envelope’, ’within’), (’abouts’, ’ here’), (’ duration’, ’during’) Layer 22 Head 5 (named entities, mostly made of two parts) (’enegger’, ’ Schwartz’), (’shire’, ’ Lincoln’), (’xual’, ’Weiss’), (’nery’, ’ Nun’), (’ Qiao’, ’ Huang’), (’schild’, ’ Schwarz’), (’oslov’, ’ 
Czech’), (’ Rica’, ’ Costa’), (’ Qiao’, ’ Qiao’), (’xual’, ’ RW’), (’ Nadu’, ’ Tamil’), (’ Nadu’, ’Tam’), (’shire’, ’ Baldwin’), (’swick’, ’ Hoff’), (’xual’, ’ Weiss’), (’ Takeru’, ’ Yamato’), (’xual’, ’ Grassley’), (’swick’, ’ Schwartz’), (’enegger’, ’ Schiff’), (’enegger’, ’Weiss’), (’xual’, ’RW’), (’shire’, ’ Nottingham’), (’shire’, ’ Barrett’), (’arest’, ’ Buch’), (’ Fei’, ’ Fei’), (’miah’, ’Jere’), (’swick’, ’ Owl’), (’ufact’, ’ Swanson’), (’akuya’, ’ Tanaka’), (’ Sachs’, ’ Feinstein’), (’enegger’, ’ Wagner’), (’otle’, ’Roberts’), (’shire’, ’ Neville’), (’oslov’, ’ Prague’), (’sburg’, ’ Hammond’), (’ ILCS’, ’ Dunham’), (’ Malfoy’, ’ Draco’), (’yip’, ’Billy’), (’iversal’, ’ Monroe’), (’iversal’, ’Murray’), (’Yang’, ’Yang’), (’akuya’, ’ Krishna’), (’schild’, ’ Schwartz’), (’tz’, ’ Rabb’), (’shire’, ’gow’), (’enegger’, ’ Feldman’), (’cair’, ’ Chou’), (’enegger’, ’ Duffy’), (’enegger’, ’Sch’), (’ Jensen’, ’ Jensen’) Layer 22 Head 13 (’ Additionally’, ’ the’), (’ Unfortunately’, ’ the’), (’ Nevertheless’, ’ the’), (’ Sadly’, ’ the’), (’ However’, ’ the’), (’ Furthermore’, ’ the’), (’ Additionally’, ’,’), (’ During’, ’ the’), (’ Moreover’, ’ the’), (’ Whilst’, ’ the’), (’ Since’, ’ the’), (’ Unfortunately’, ’,’), (’ Additionally’, ’-’), (’ Perhaps’, ’ the’), (’ Sadly’, ’,’), (’ Throughout’, ’ the’), (’ Nevertheless’, ’,’), (’ While’, ’ the’), (’ However’, ’,’), (’ Although’, ’ the’), (’ There’, ’ the’), (’ Furthermore’, ’,’), (’ Eventually’, ’ the’), (’ Meanwhile’, ’ the’), (’ Hopefully’, ’ the’), (’ Nevertheless’, ’-’), (’ During’, ’,’), (’ Regardless’, ’ the’), (’ However’, ’-’), (’ Whilst’, ’,’), (’ Additionally’, ’ and’), (’ Moreover’, ’,’), (’ Unfortunately’, ’-’), (’ They’, ’ the’), (’ Sadly’, ’-’), (’ Whereas’, ’ the’), (’ Additionally’, ’ a’), (’ Furthermore’, ’-’), (’ Unlike’, ’ the’), (’ Typically’, ’ the’), (’ Since’, ’,’), (’ Normally’, ’ the’), (’ Perhaps’, ’,’), (’ During’, ’-’), (’ Throughout’, ’,’), (’ While’, ’,’), (’ Nevertheless’, ’ a’), (’ Interestingly’, ’ the’), (’ Unfortunately’, ’ and’), (’ Unfortunately’, ’ a’) C.1.2 GENDER Layer 18 Head 1 (’ Marie’, ’women’), (’ Marie’, ’ actresses’), (’ Anne’, ’women’), (’ Anne’, ’Women’), (’ Marie’, ’woman’), (’ Marie’, ’Women’), (’ Anne’, ’woman’), (’ Marie’, ’Woman’), (’ Anne’, ’ actresses’), (’ Marie’, ’ heroine’), (’Jane’, ’Women’), (’ Anne’, ’ heroine’), (’Jane’, ’women’), (’ actresses’, ’Women’), (’ Anne’, ’Woman’), (’ Esther’, ’Women’), (’ Esther’, ’women’), (’ Marie’, ’girls’), (’ Anne’, ’Mrs’), (’ Marie’, ’ actress’), (’ actresses’, ’women’), (’Jane’, ’Woman’), (’ Marie’, ’ girls’), (’Jane’, ’ actresses’), (’Anne’, ’Woman’), (’ Marie’, ’Girls’), (’Anne’, ’women’), (’ Anne’, ’Girls’), (’ actresses’, ’Woman’), (’ Marie’, ’ Women’), (’ Anne’, ’ Women’), (’ Anne’, ’ girls’), (’ Anne’, ’girl’), (’Anne’, ’Women’), (’Women’, ’Woman’), (’ Anne’, ’girls’), (’Anne’, ’ actresses’), (’ Michelle’, ’women’), (’ Marie’, ’ Actress’), (’ Marie’, ’girl’), (’ Anne’, ’ Feminist’), (’ Marie’, ’ women’), (’ Devi’, ’Women’), (’ Elizabeth’, ’Women’), (’ Anne’, ’ actress’), (’Anne’, ’Mrs’), (’Answer’, ’answered’), (’Anne’, ’woman’), (’maid’, ’Woman’), (’Marie’, ’women’) C.1.3 GEOGRAPHY Layer 16 Head 6* (’ Mumbai’, ’ Chennai’), (’ Mumbai’, ’India’), (’ Chennai’, ’ Mumbai’), (’ Tasmania’, ’ Queensland’), (’ Rahul’, ’India’), (’ Gujar’, ’India’), (’ Bangalore’, ’ Chennai’), (’Scotland’, ’England’), (’ Kerala’, ’ Chennai’), (’ Mumbai’, ’ Delhi’), (’Scotland’, ’Britain’), (’ Mumbai’, ’ Bangalore’), (’India’, ’Pakistan’), (’Ireland’, ’Scotland’), (’ 
Bangalore’, ’ Mumbai’), (’ Chennai’, ’ Bangalore’), (’ Gujar’, ’ Aadhaar’), (’ Maharashtra’, ’ Mumbai’), (’ Gujarat’, ’ Maharashtra’), (’ Gujar’, ’ Gujarat’), (’Australia’, ’Australian’), (’ Gujarat’, ’India’), (’ Gujar’, ’ Rahul’), (’ Mumbai’, ’ Maharashtra’), (’England’, ’Britain’), (’ Chennai’, ’India’), (’ Bombay’, ’ Mumbai’), (’ Kerala’, ’ Tamil’), (’ Mumbai’, ’ Hindi’), (’ Tasman’, ’ Tasmania’), (’India’, ’ Mumbai’), (’ Gujar’, ’ Hindi’), (’ Gujar’, ’ Maharashtra’), (’Austral’, ’ Australians’), (’ Kerala’, ’ Maharashtra’), (’ Bangalore’, ’India’), (’ Kerala’, ’India’), (’ Bombay’, ’India’), (’Austral’, ’Australia’), (’India’, ’ Aadhaar’), (’ Mumbai’, ’ Sharma’), (’Austral’, ’Australian’), (’ Kerala’, ’ Mumbai’), (’England’, ’Scotland’), (’ Gujar’, ’ Mumbai’), (’ Mumbai’, ’ Rahul’), (’ Tasman’, ’ Queensland’), (’ Chennai’, ’ Tamil’), (’ Maharashtra’, ’ Gujarat’), (’ Modi’, ’India’) Layer 18 Head 9 (’ Winnipeg’, ’ Winnipeg’), (’ Edmonton’, ’ Winnipeg’), (’ Winnipeg’, ’ Ottawa’), (’ Calgary’, ’ Winnipeg’), (’ Ottawa’, ’ Winnipeg’), (’ Winnipeg’, ’ Calgary’), (’ Winnipeg’, ’CBC’), (’ Winnipeg’, ’Canada’), (’ Canberra’, ’ Canberra’), (’ RCMP’, ’ Winnipeg’), (’ Ottawa’, ’CBC’), (’ Winnipeg’, ’Canadian’), (’Toronto’, ’ Winnipeg’), (’ Winnipeg’, ’ Canadians’), (’ Edmonton’, ’ Ottawa’), (’ Winnipeg’, ’ RCMP’), (’ Winnipeg’, ’ Edmonton’), (’ Ottawa’, ’Canadian’), (’Canadian’, ’ Winnipeg’), (’Toronto’, ’ Calgary’), (’ Winnipeg’, ’ Quebec’), (’ Winnipeg’, ’ Canad’), (’Toronto’, ’Canadian’), (’ Edmonton’, ’ Edmonton’), (’ Ottawa’, ’ Calgary’), (’ Leafs’, ’ Winnipeg’), (’ Edmonton’, ’ Calgary’), (’ Ottawa’, ’Canada’), (’ Calgary’, ’Canadian’), (’Toronto’, ’Canada’), (’ Calgary’, ’ Calgary’), (’Ott’, ’ Winnipeg’), (’ Winnipeg’, ’ Saskatchewan’), (’ Winnipeg’, ’ Canadian’), (’ Ottawa’, ’ Ottawa’), (’ Calgary’, ’ Ottawa’), (’ Winnipeg’, ’ Manitoba’), (’ Canadians’, ’ Winnipeg’), (’ Winnipeg’, ’ Canada’), (’ RCMP’, ’ Calgary’), (’Toronto’, ’ Manitoba’), (’Toronto’, ’ Ottawa’), (’CBC’, ’ Winnipeg’), (’Canadian’, ’Canada’), (’ Edmonton’, ’Canadian’), (’ RCMP’, ’ Ottawa’), (’ Winnipeg’, ’ipeg’), (’Toronto’, ’Toronto’), (’Canadian’, ’ Calgary’), (’ Ottawa’, ’ Canadians’) Layer 16 Head 2* (’ Australians’, ’Austral’), (’Austral’, ’Australia’), (’Austral’, ’ Canberra’), (’ Canberra’, ’Austral’), (’ Edmonton’, ’ Winnipeg’), (’Austral’, ’Australian’), (’ Edmonton’, ’ Alberta’), (’ Australians’, ’Australia’), (’Austral’, ’ Australians’), (’ovych’, ’Ukraine’), (’ Canad’, ’ Quebec’), (’ Australians’, ’Australian’), (’ Manitoba’, ’ Winnipeg’), (’ Winnipeg’, ’ Manitoba’), (’Canada’, ’Canadian’), (’ Bulgar’, ’Moscow’), (’ Edmonton’, ’ Manitoba’), (’Austral’, ’berra’), (’Australian’, ’Austral’), (’ovych’, ’ Ukrainians’), (’ Canadians’, ’Canada’), (’ Australians’, ’ Canberra’), (’Canadian’, ’Canada’), (’ovych’, ’ Yanukovych’), (’ Trudeau’, ’Canada’), (’ Bulgar’, ’ Dmitry’), (’Austral’, ’ Australia’), (’ Canad’, ’ Mulcair’), (’ Canberra’, ’berra’), (’oglu’, ’Turkish’), (’Canada’, ’udeau’), (’ Oilers’, ’ Edmonton’), (’ Canberra’, ’Australia’), (’ Edmonton’, ’Canada’), (’ Calgary’, ’ Edmonton’), (’ Calgary’, ’ Alberta’), (’ Trudeau’, ’udeau’), (’ Edmonton’, ’ Calgary’), (’ Trudeau’, ’Canadian’), (’ Canberra’, ’Australian’), (’ Canucks’, ’ Vancouver’), (’Australian’, ’Australia’), (’ Fraser’, ’ Vancouver’), (’ Edmonton’, ’Canadian’), (’elaide’, ’Austral’), (’ Braz’, ’Tex’), (’ RCMP’, ’Canada’), (’sov’, ’Moscow’), (’ Bulgar’, ’Russia’), (’Canada’, ’ Canadians’) Layer 21 Head 12* (’ Indones’, ’ Indonesian’), (’ Nguyen’, ’ 
Vietnamese’), (’ Jakarta’, ’ Indonesian’), (’ Indonesia’, ’ Indonesian’), (’oglu’, ’Turkish’), (’ Indones’, ’ Indonesia’), (’ Indones’, ’ Jakarta’), (’ Koreans’, ’ Korean’), (’oglu’, ’ Turkish’), (’ Taiwanese’, ’ Taiwan’), (’ Nguyen’, ’ Thai’), (’Brazil’, ’ Brazilian’), (’ Indonesia’, ’ Indones’), (’ Taiwanese’, ’Tai’), (’oglu’, ’ Istanbul’), (’ Indonesian’, ’ Indones’), (’ Jakarta’, ’ Indones’), (’ Nguyen’, ’ Laos’), (’ Sloven’, ’ Slovenia’), (’ Korean’, ’ Koreans’), (’ Nguyen’, ’ Cambod’), (’zzi’, ’Italy’), (’Tai’, ’ Taiwanese’), (’ Jakarta’, ’ Indonesia’), (’ Indonesian’, ’ Indonesia’), (’ Bulgaria’, ’ Bulgarian’), (’ Icelandic’, ’ Iceland’), (’ Koreans’, ’ Korea’), (’ Brazilian’, ’Brazil’), (’ Bulgar’, ’ Bulgarian’), (’ Malays’, ’ Malaysian’), (’oglu’, ’ Ankara’), (’ Bulgarian’, ’ Bulgaria’), (’ Indones’, ’ Malays’), (’ Tai’, ’ Taiwanese’), (’oglu’, ’Turkey’), (’ Janeiro’, ’Brazil’), (’zzi’, ’Italian’), (’ Malays’, ’ Kuala’), (’ Fuk’, ’Japanese’), (’ Indonesian’, ’ Jakarta’), (’ Taiwan’, ’ Taiwanese’), (’oglu’, ’ Erdogan’), (’ Nguyen’, ’ Viet’), (’ Filipino’, ’ Philippine’), (’ Indonesia’, ’ Jakarta’), (’ Jong’, ’ Koreans’), (’ Duterte’, ’ Filipino’), (’ Azerbai’, ’ Azerbaijan’), (’ Bulgarian’, ’ Bulgar’) C.1.4 BRITISH SPELLING Layer 19 Head 4 (’ Whilst’, ’ realise’), (’ Whilst’, ’ Whilst’), (’ Whilst’, ’ realised’), (’ Whilst’, ’ organise’), (’ Whilst’, ’ recognise’), (’ Whilst’, ’ civilisation’), (’ Whilst’, ’ organisation’), (’ Whilst’, ’ whilst’), (’ Whilst’, ’ organising’), (’ Whilst’, ’ organised’), (’ Whilst’, ’ organis’), (’ Whilst’, ’ util’), (’ Whilst’, ’ apologise’), (’ Whilst’, ’ emphas’), (’ Whilst’, ’ analyse’), (’ Whilst’, ’ organisations’), (’ Whilst’, ’ recognised’), (’ Whilst’, ’ flavours’), (’ Whilst’, ’ colour’), (’ Whilst’, ’colour’), (’ Whilst’, ’ Nasa’), (’ Whilst’, ’ Nato’), (’ Whilst’, ’ analys’), (’ Whilst’, ’ flavour’), (’ Whilst’, ’ colourful’), (’ Whilst’, ’ colours’), (’ organising’, ’ realise’), (’ Whilst’, ’ behavioural’), (’ Whilst’, ’ coloured’), (’ Whilst’, ’ learnt’), (’ Whilst’, ’ favourable’), (’ Whilst’, ’isation’), (’ Whilst’, ’ programmes’), (’ organis’, ’ realise’), (’ Whilst’, ’ authorised’), (’ Whilst’, ’ practise’), (’ Whilst’, ’ criticised’), (’ Whilst’, ’ organisers’), (’ organising’, ’ organise’), (’ Whilst’, ’ analysed’), (’ Whilst’, ’ programme’), (’ Whilst’, ’ behaviours’), (’ Whilst’, ’ humour’), (’ Whilst’, ’isations’), (’ Whilst’, ’ tyres’), (’ Whilst’, ’ aluminium’), (’ organised’, ’ realise’), (’ Whilst’, ’ favour’), (’ Whilst’, ’ ageing’), (’ organis’, ’ organise’) C.1.5 RELATED WORDS Layer 13 Head 8* (’ mirac’, ’ miraculous’), (’ mirac’, ’ miracle’), (’ nuanced’, ’ nuance’), (’Better’, ’ smarter’), (’ equitable’, ’ healthier’), (’ liberating’, ’ liberated’), (’ unaffected’, ’ untouched’), (’ equitable’, ’ unbiased’), (’ inconsistent’, ’failed’), (’ emanc’, ’ liberated’), (’ equitable’, ’ humane’), (’ liberated’, ’ liberating’), (’ incompatible’, ’failed’), (’ mirac’, ’ miracles’), (’ consensual’, ’ peacefully’), (’ uncond’, ’ unconditional’), (’ unexpected’, ’ unexpectedly’), (’ unconditional’, ’ untouched’), (’Better’, ’ healthier’), (’ unexpectedly’, ’ unexpected’), (’ graceful’, ’ peacefully’), (’ emanc’, ’ emancipation’), (’ effortlessly’, ’ seamlessly’), (’ honorable’, ’ peacefully’), (’ unconditional’, ’ uncond’), (’ rubbish’, ’ excuses’), (’ emanc’, ’ liberating’), (’ equitable’, ’ peacefully’), (’ Feather’, ’ gracious’), (’ emancipation’, ’ liberated’), (’ nuanced’, ’ nuances’), (’icable’, ’ avoids’), (’ liberated’, ’ 
freeing’), (’ liberating’, ’ freeing’), (’ inconsistent’, ’ lousy’), (’ lousy’, ’failed’), (’ unconditional’, ’ unaffected’), (’ equitable’, ’ivable’), (’ equitable’, ’Honest’), (’erning’, ’ principled’), (’ survival’, ’surv’), (’ocre’, ’ lackluster’), (’ equitable’, ’ liberating’), (’Bah’, ’Instead’), (’ incompatible’, ’ inappropriate ’), (’ emancipation’, ’ emanc’), (’ unchanged’, ’ unaffected’), (’ peacefully’, ’ peaceful’), (’ equitable’, ’ safer’), (’ unconditional’, ’ uninterrupted ’) Layer 12 Head 14* (’ perished’, ’ died’), (’ perished’, ’ dies’), (’ testify’, ’ testifying’), (’ intervened’, ’ interven’), (’ advises’, ’ advising’), (’ disbanded’, ’ disband’), (’lost’, ’ perished’), (’ died’, ’ perished’), (’ applauded’, ’ applaud’), (’ dictates’, ’ dictate’), (’ prev’, ’ prevailed’), (’ advise’, ’ advising’), (’shed’, ’thood’), (’Reviewed’, ’orsi’), (’ dies’, ’ perished’), (’published’, ’ publishes’), (’ prevailed’, ’ prevail’), (’ died’, ’ dies’), (’ testified’, ’ testifying’), (’ testifying’, ’ testify’), (’ dictates’, ’ governs’), (’ complicit’, ’ complicity’), (’ dictated’, ’ dictate’), (’enough’, ’CHO’), (’ skelet’, ’independence’), (’ Recomm’, ’ prescribe’), (’essential’, ’ perished’), (’noticed’, ’CHO’), (’avorable’, ’ approving’), (’ perish’, ’ perished’), (’ overseeing’, ’ oversee’), (’ skelet’, ’shed’), (’EY’, ’chart’), (’ presiding’, ’ overseeing’), (’ fundament’, ’pees’), (’ sanction’, ’appro’), (’ prevail’, ’ prevailed’), (’ governs’, ’ regulates’), (’tails’, ’shed’), (’ Period’, ’chart’), (’lihood’, ’hower’), (’ prev’, ’ prevail’), (’ aids’, ’helps’), (’ dictated’, ’ dict’), (’ dictated’, ’ dictates’), (’ Dise’, ’itta’), (’REC’, ’CHO’), (’exclusive’, ’ORTS’), (’ Helpful’, ’helps’), (’bart’, ’ciples’) Layer 14 Head 1* (’ misunderstand’, ’ incorrectly’) , (’ Proper’, ’ properly’), (’ inaccur’, ’ incorrectly’), (’ misunderstand’, ’ wrongly’), (’ misinterpret’, ’ incorrectly’), (’ incorrect’, ’ incorrectly’), (’ mistakes’, ’ incorrectly’), (’ misunderstanding’, ’ incorrectly’), (’ proper’, ’ properly’), (’fail’, ’ incorrectly’), (’ faulty’, ’ incorrectly’), (’ misrepresent’, ’ incorrectly’), (’ failing’, ’ fails’), (’ inaccurate’, ’ incorrectly’), (’ errors’, ’ incorrectly’), (’ harmful’, ’ Worse’), (’ misunderstand’, ’ wrong’), (’ misunderstand’, ’ improperly’), (’wrong’, ’ incorrectly’), (’ harmful’, ’ incorrectly’), (’ mistake’, ’ incorrectly’), (’ mis’, ’ incorrectly’), (’fail’, ’ fails’), (’ detrimental’, ’ Worse’), (’ rightful’, ’ properly’), (’ misunderstand’, ’ inappropriately’), (’ harmful’, ’ unnecessarily’), (’ neglect’, ’ unnecessarily’), (’ correctly’, ’ properly’), (’ Worst’, ’ Worse’), (’ failure’, ’ fails’), (’ satisfactory’, ’ adequately’), (’ defective’, ’ incorrectly’), (’ misunderstand’, ’ mistakenly’), (’ harming’, ’ Worse’), (’ mishand’, ’ incorrectly’), (’adequ’, ’ adequately’), (’ misuse’, ’ incorrectly’), (’Failure’, ’ fails’), (’ hurts’, ’ Worse’), (’ misunderstand’, ’wrong’), (’ mistakenly’, ’ incorrectly’), (’ failures’, ’ fails’), (’ adequate’, ’ adequately’), (’ properly’, ’ correctly’), (’ hurting’, ’ Worse’), (’ Proper’, ’ correctly’), (’ fail’, ’ fails’), (’ mistaken’, ’ incorrectly’), (’ harming’, ’ adversely’) Layer 14 Head 13* (’ editors’, ’ editorial’), (’ broadcasters’, ’ broadcasting’) , (’ broadcasting’, ’ broadcasts’), (’ broadcast’, ’ broadcasts’), (’ Broadcasting’, ’ broadcasters’) , (’ editors’, ’ Editorial’), (’ broadcasters’, ’ broadcast’), (’ Broadcasting’, ’ broadcast’), (’ lectures’, ’ lecture’), (’ Broadcast’, ’ 
broadcasting’), (’ broadcasters’, ’ broadcaster’), (’ broadcasters’, ’ broadcasts’), (’ Publishers’, ’ publishing’), (’ broadcasting’, ’ broadcast’), (’ broadcasters’, ’ Broadcasting’) , (’ Publishers’, ’ Publishing’), (’ lecture’, ’ lectures’), (’ Editors’, ’ editorial’), (’ broadcast’, ’ broadcasting’), (’ Broadcasting’, ’ broadcasts’), (’ broadcasting’, ’ broadcasters’) , (’ journalism’, ’ journalistic’), (’reports’, ’Journal’), (’ Broadcast’, ’ Broadcasting’), (’ Publishers’, ’Publisher’), (’azeera’, ’ Broadcasting’), (’Reporting’, ’Journal’), (’ journalistic’, ’ journalism’), (’ Broadcasting’, ’ broadcaster’), (’ broadcasting’, ’ broadcaster’), (’ broadcaster’, ’ broadcasting’), (’ editors’, ’ publication’), (’ journalism’, ’journal’), (’ Journalists’, ’Journal’), (’ documentary’, ’ documentaries’) , (’ filming’, ’ filmed’), (’ publishers’, ’ publishing’), (’ journalism’, ’Journal’), (’ Broadcast’, ’ broadcasts’), (’ broadcast’, ’ broadcasters’), (’ articles’, ’Journal’), (’ reporting’, ’reports’), (’ manuscripts’, ’ manuscript’), (’ publish’, ’ publishing’), (’azeera’, ’ broadcasters’), (’ Publishers’, ’ publication’), (’ Publishers’, ’ publications’), (’ newspapers’, ’ Newsp’), (’ Broadcast’, ’ broadcasters’), (’ Readers’, ’Journal’) C.2 QUERY-KEY MATRICES Layer 22 Head 1 (’ usual’, ’ usual’), (’ occasional’, ’ occasional’), (’ aforementioned’, ’ aforementioned’), (’ general’, ’ usual’), (’ usual’, ’ slightest’), (’agn’, ’ealous’), (’ traditional’, ’ usual’), (’ free’, ’amina’), (’ major’, ’ major’), (’ frequent’, ’ occasional’), (’ generous’, ’ generous’), (’ free’, ’lam’), (’ regular’, ’ usual’), (’ standard’, ’ usual’), (’ main’, ’ usual’), (’ complete’, ’ Finished’), (’ main’, ’liest’), (’ traditional’, ’ traditional’), (’ latest’, ’ aforementioned’), (’ current’, ’ aforementioned’), (’ normal’, ’ usual’), (’ dominant’, ’ dominant’), (’ free’, ’ministic’), (’ brief’, ’ brief’), (’ biggest’, ’liest’), (’usual’, ’ usual’), (’ rash’, ’ rash’), (’ regular’, ’ occasional’), (’ specialized’, ’ specialized’), (’ free’, ’iosis’), (’ free’, ’hero’), (’ specialty’, ’ specialty’), (’ general’, ’iosis’), (’ nearby’, ’ nearby’), (’ best’, ’liest’), (’ officially’, ’ formal’), (’ immediate’, ’mediate’), (’ special’, ’ ultimate’), (’ free’, ’otropic’), (’ rigorous’, ’ comparative’), (’ actual’, ’ slightest’), (’ complete’, ’ comparative’), (’ typical’, ’ usual’), (’ modern’, ’ modern’), (’ best’, ’ smartest’), (’ free’, ’ free’), (’ highest’, ’ widest’), (’ specialist’, ’ specialist’), (’ appropriate’, ’ slightest’), (’ usual’, ’liest’) Layer 0 Head 9 (’59’, ’27’), (’212’, ’39’), (’212’, ’38’), (’217’, ’39’), (’37’, ’27’), (’59’, ’26’), (’54’, ’88’), (’156’, ’39’), (’212’, ’79’), (’59’, ’28’), (’57’, ’27’), (’212’, ’57’), (’156’, ’29’), (’36’, ’27’), (’217’, ’79’), (’59’, ’38’), (’63’, ’27’), (’72’, ’39’), (’57’, ’26’), (’57’, ’34’), (’59’, ’34’), (’156’, ’27’), (’91’, ’27’), (’156’, ’38’), (’63’, ’26’), (’59’, ’25’), (’138’, ’27’), (’217’, ’38’), (’72’, ’27’), (’54’, ’27’), (’36’, ’29’), (’72’, ’26’), (’307’, ’39’), (’37’, ’26’), (’217’, ’57’), (’37’, ’29’), (’54’, ’38’), (’59’, ’29’), (’37’, ’28’), (’307’, ’38’), (’57’, ’29’), (’63’, ’29’), (’71’, ’27’), (’138’, ’78’), (’59’, ’88’), (’89’, ’27’), (’561’, ’79’), (’212’, ’29’), (’183’, ’27’), (’54’, ’29’) Layer 17 Head 6* (’ legally’, ’ legal’), (’ legal’, ’ sentencing’), (’ legal’, ’ arbitration’), (’ boycot’, ’ boycott’), (’ legal’, ’ criminal’), (’ legal’, ’ Judicial’), (’ legal’, ’ rulings’), (’ judicial’, ’ sentencing’), (’ marketing’, ’ 
advertising’), (’ legal’, ’ confidential’), (’ protesting’, ’ protest’), (’ recruited’, ’ recruit’), (’ recruited’, ’ recruits’), (’ judicial’, ’ criminal’), (’ legal’, ’ exemptions’), (’ demographics’, ’ demographic’), (’ boycott’, ’ boycot’), (’ sentencing’, ’ criminal’), (’ recruitment’, ’ recruits’), (’ recruitment’, ’ recruit’), (’ Constitutional’, ’ sentencing’) , (’ Legal’, ’ sentencing’), (’ constitutional’, ’ sentencing’) , (’ legal’, ’ subpoena’), (’ injury’, ’ injuries’), (’ FOIA’, ’ confidential’), (’ legal’, ’ licenses’), (’ donation’, ’ donations’), (’ disclosure’, ’ confidential’), (’ negotiation’, ’ negotiating’), (’ Judicial’, ’ legal’), (’ legally’, ’ criminal’), (’ legally’, ’ confidential’), (’ legal’, ’ jur’), (’ legal’, ’ enforcement’), (’ legal’, ’ lawyers’), (’ legally’, ’ enforcement’), (’ recruitment’, ’ recruiting’), (’ recruiting’, ’ recruit’), (’ criminal’, ’ sentencing’), (’ legal’, ’ attorneys’), (’ negotiations’, ’ negotiating’), (’ legally’, ’ arbitration’), (’ recruited’, ’ recruiting’), (’ legally’, ’ exemptions’), (’ legal’, ’ judicial’), (’ voting’, ’ Vote’), (’ negotiated’, ’ negotiating’), (’ legislative’, ’ veto’), (’ fund
1. What is the focus and contribution of the paper regarding interpreting Transformer-based models?
2. What are the strengths and weaknesses of the proposed method in terms of its simplicity, ease of application, and the assumptions made?
3. Do you have any concerns or questions about the authors' claims and justifications, particularly regarding the extension of Elhage et al.'s work and the assumptions made about the output of the attention module and the space of output tokens?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content, especially regarding the lack of comparison with other data-independent methods and the need for more insight and evidence supporting the projection of "hidden states" into the "embedding space"?
5. Are there any additional concerns or questions the reviewer has regarding the paper's objectives, such as whether the method can be applied to many other Transformer models trained on different modalities, and whether the authors acknowledged this limitation?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper
This paper proposes a data-independent method to interpret Transformer-based models by projecting them onto the embedding space, i.e., the space of vocabulary items. The authors derive a theoretical framework and provide evidence to support their argument. They present two applications of the framework and conduct experiments on the pretrained GPT-2 model.

Strengths And Weaknesses
Strong points:
- The proposed method for interpreting Transformers is simple and easy to apply to other pretrained language models.
- The paper is well written, with illustrative figures.

Weak points:
Experiments
- The only pretrained model used in the experiments is GPT-2, which is not a diverse selection.
- The authors' justification for excluding layer norm from their analysis in Section 4.2 is not convincing to me, since it is verified only for the case of GPT-2.
- There is no comparison with other data-independent methods for interpreting Transformers.

Method
- The work is an extension of Elhage et al. [2021], adjusting their tools for interpreting self-attention and applying them to pretrained Transformer models. The extension is not significant.
- Several assumptions in the paper need to be verified:
i) The authors state: "When training Transformers with a language modeling objective, the same embedding matrix E is often used [Press and Wolf, 2016] to take the output of the last Transformer layer and project it back to the vocabulary dimension, i.e., into the embedding space." I do not think this is the state-of-the-art practice for training language models with Transformers.
ii) The authors state: "Since the output of the attention module is added to the residual stream, we can assume according to the residual stream view that it is meaningful to project it to the embedding space, similar to FF values." This statement assumes that the output of each Transformer layer is updated only insignificantly, so that the space of the outputs at each intermediate layer remains close to the embedding space. There is no evidence for this statement.
iii) The paper's choice E' = E^T is exact only when E is an orthonormal matrix, which is a strong assumption in my view.
iv) As far as I understand, the proposed method cannot be applied to many other Transformer models trained on different modalities, since those do not have a standard "vocabulary". I failed to see the authors acknowledge this in the paper, despite their claim that "In this work, we present a theoretical analysis where all parameters of a trained Transformer are interpreted by projecting them into the embedding space, that is, the space of vocabulary items they operate on."

Objectives of the paper:
- The authors point out, as a downside of input-dependent interpretation, that the examples (data points) used for interpretation are not representative of the datasets. To some extent, I agree. However, the paper does not discuss whether this downside could be overcome simply by aggregating the interpretation measures statistically over the datasets.
- The authors claim that running inputs through the model is expensive. This reason is not convincing to me, since we have to run the data through the model to obtain predictions anyway.

Additional Concerns and Questions for the Authors:
1. Could the authors please provide more insight and evidence for why projecting "hidden states" into the "embedding space" is meaningful?
2. Please address the assumption "Since the output of the attention module is added to the residual stream, we can assume according to the residual stream view that it is meaningful to project it to the embedding space, similar to FF values."
3. What insights about pretrained Transformer models can be shown only via your method, and not via other data-independent or data-dependent interpretation methods?

References:
[1] N. Elhage, N. Nanda, C. Olsson, T. Henighan, N. Joseph, B. Mann, A. Askell, Y. Bai, A. Chen, T. Conerly, N. DasSarma, D. Drain, D. Ganguli, Z. Hatfield-Dodds, D. Hernandez, A. Jones, J. Kernion, L. Lovitt, K. Ndousse, D. Amodei, T. Brown, J. Clark, J. Kaplan, S. McCandlish, and C. Olah. A mathematical framework for transformer circuits, 2021.

Clarity, Quality, Novelty And Reproducibility
The paper is well written. The quality and novelty of the paper are not high. The details on the experimental settings are useful, but there is no submitted code to reproduce the results.
ICLR
Title Analyzing Transformers in Embedding Space Abstract Understanding Transformer-based models has attracted significant attention, as they lie at the heart of recent technological advances across machine learning. While most interpretability methods rely on running models over inputs, recent work has shown that a zero-pass approach, where parameters are interpreted directly without a forward/backward pass is feasible for some Transformer parameters, and for two-layer attention networks. In this work, we present a theoretical analysis where all parameters of a trained Transformer are interpreted by projecting them into the embedding space, that is, the space of vocabulary items they operate on. We derive a simple theoretical framework to support our arguments and provide ample evidence for its validity. First, an empirical analysis showing that parameters of both pretrained and fine-tuned models can be interpreted in embedding space. Second, we present two applications of our framework: (a) aligning the parameters of different models that share a vocabulary, and (b) constructing a classifier without training by “translating” the parameters of a fine-tuned classifier to parameters of a different model that was only pretrained. Overall, our findings open the door to interpretation methods that, at least in part, abstract away from model specifics and operate in the embedding space only. 1 INTRODUCTION Transformer-based models [Vaswani et al., 2017] currently dominate Natural Language Processing [Devlin et al., 2018; Radford et al., 2019; Zhang et al., 2022] as well as many other fields of machine learning [Dosovitskiy et al., 2020; Chen et al., 2020; Baevski et al., 2020]. Consequently, understanding their inner workings has been a topic of great interest. Typically, work on interpreting Transformers relies on feeding inputs to the model and analyzing the resulting activations [Adi et al., 2016; Shi et al., 2016; Clark et al., 2019]. Thus, interpretation involves an expensive forward, and sometimes also a backward pass, over multiple inputs. Moreover, such interpretation methods are conditioned on the input, and are not guaranteed to generalize to all inputs. In the evolving literature on static interpretation, i.e., without forward or backward passes, Geva et al. [2022b] showed that the value vectors of the Transformer feed-forward module (the second layer of the feed-forward network) can be interpreted by projecting them into the embedding space, i.e., multiplying them by the embedding matrix to obtain a representation over vocabulary items. Elhage et al. [2021] have shown that in a 2-layer attention network, weight matrices can be interpreted in the embedding space as well. In this work, we extend the theoretical analysis and findings of Elhage et al. [2021] and Geva et al. [2022b], and present a zero-pass framework to understand the behaviour of Transformers. Conceretely, we interpret all weights of a pretrained language model (LM) in embedding space, including both keys and values of the feed-forward module as well as all attention parameters. Our theory relies on a simple observation. Since Geva et al. [2022b] have shown that one can project hidden states to the embedding space via the embedding matrix, we can extend this to other parts of the model by projecting to the embedding space and then projecting back by multiplying with a right-inverse of the embedding matrix. Thus, we can recast inner products in the model as inner products in embedding space. 
Viewing inner products in this way, we can interpret such products as interactions between pairs of vocabulary items (we refer to the unique items of the vocabulary as vocabulary items, and to the (possibly duplicate) elements of a tokenized input as tokens). This applies to (a) interactions between attention queries and keys as well as to (b) interactions between attention value vectors and the parameters that project them at the output of the attention module. Taking this perspective to an extreme, one can view Transformers as operating implicitly in the embedding space. This entails the existence of a single linear space that depends solely on the tokenizer, in which parameters of different Transformers can be compared. Thus, one can use the embedding space to compare and transfer information across different models that share a tokenizer. We provide extensive empirical evidence for the credibility of our proposal. On the interpretation front (Fig. 1, Left), we provide qualitative and quantitative evidence that Transformer parameters can be interpreted in embedding space. We also show that when fine-tuning a pretrained LM on a sentiment analysis task (over movie reviews), projecting changes in parameters into embedding space yields words that characterize sentiment towards movies. Second (Fig. 1, Center), we show that given two distinct instances of BERT pretrained with different random seeds [Sellam et al., 2022], we can align layers of the two instances by casting their weights into the embedding space. We find that indeed layer i of the first instance aligns well to layer i of the second instance, showing the different BERT instances converge to a semantically-similar solution. Last (Fig. 1, Right), we take a model fine-tuned on a sentiment analysis task and “transfer” the learned weights to a different model that was only pretrained by going through the embedding spaces of the two models. We show that in 30% of the cases, this procedure, termed stitching, results in a classifier that reaches an impressive accuracy of 70% on the IMDB benchmark [Maas et al., 2011] without any training. Overall, our findings suggest that analyzing Transformers in embedding space is fruitful both for interpretability and as a tool to relate different models that share a vocabulary, and opens the door to interpretation methods that operate in embedding space only. Our code is available at https://anonymized. 2 BACKGROUND We now present the main components of the Transformer [Vaswani et al., 2017] relevant to our analysis. We discuss the residual stream view of Transformers, and recapitulate a view of the attention layer parameters as interaction matrices $W_{VO}$ and $W_{QK}$ [Elhage et al., 2021]. Similar to Elhage et al. [2021], we exclude biases and layer normalization from our analysis. 2.1 TRANSFORMER ARCHITECTURE The Transformer consists of a stack of layers, each of which includes an attention module followed by a Feed-Forward (FF) module. All inputs and outputs are sequences of N vectors of dimensionality d. The Attention Module takes as input a sequence of representations $X \in \mathbb{R}^{N \times d}$, and each layer L is parameterized by four matrices $W_Q^{(L)}, W_K^{(L)}, W_V^{(L)}, W_O^{(L)} \in \mathbb{R}^{d \times d}$ (we henceforth omit the layer superscript for brevity). The input X is projected to produce queries, keys, and values: $Q_{att} = XW_Q$, $K_{att} = XW_K$, $V_{att} = XW_V$. Each one of $Q_{att}, K_{att}, V_{att}$ is split along the columns into H different heads of dimensionality $\mathbb{R}^{N \times \frac{d}{H}}$, denoted by $Q^i_{att}, K^i_{att}, V^i_{att}$ respectively. 
We then compute H attention maps: $A^i = \mathrm{softmax}\!\left(\frac{Q^i_{att} K^{iT}_{att}}{\sqrt{d/H}} + M\right) \in \mathbb{R}^{N \times N}$, where $M \in \mathbb{R}^{N \times N}$ is the attention mask. Each attention map is applied to the corresponding value head as $A^i V^i_{att}$, results are concatenated along columns and projected via $W_O$. The input to the module is added via a residual connection, and thus the attention module’s output is: $X + \mathrm{Concat}\left[A^1 V^1_{att}, \ldots, A^i V^i_{att}, \ldots, A^H V^H_{att}\right] W_O$. (1) The FF Module is a two-layer neural network, applied to each position independently. Following past terminology [Sukhbaatar et al., 2019; Geva et al., 2020], weights of the first layer are called FF keys and weights of the second layer FF values. This is an analogy to attention, as the FF module too can be expressed as $f(QK^T)V$, where f is the activation function, $Q \in \mathbb{R}^{N \times d}$ is the output of the attention module and the input to the FF module, and $K, V \in \mathbb{R}^{d_{ff} \times d}$ are the weights of the first and second layers of the FF module. Unlike attention, keys and values are learnable parameters. The output of the FF module is added to the output of the attention module to form the output of the layer via a residual connection. The output of the i-th layer is called the i-th hidden state. Embedding Matrix To process sequences of discrete tokens, Transformers use an embedding matrix $E \in \mathbb{R}^{d \times e}$ that provides a d-dimensional representation to vocabulary items before entering the first Transformer layer. When training Transformers with a language modeling objective, the same embedding matrix E is often used [Press and Wolf, 2016] to take the output of the last Transformer layer and project it back to the vocabulary dimension, i.e., into the embedding space. In this work, we will interpret all components of the Transformer model in the embedding space. 2.2 THE RESIDUAL STREAM We rely on a useful view of the Transformer through its residual connections proposed by Elhage et al. [2021] (though earlier mentions include nostalgebraist [2020]). Specifically, each layer takes a hidden state as input and adds information to the hidden state through its residual connection. Under this view, the hidden state is a residual stream passed along the layers, from which information is read, and to which information is written at each layer. Elhage et al. [2021] and Geva et al. [2022b] observed that the residual stream is often barely updated in the last layers, and thus the final prediction is determined in early layers and the hidden state is mostly passed through the later layers. An exciting consequence of the residual stream view is that we can project hidden states in every layer into embedding space by multiplying the hidden state with the embedding matrix E, treating the hidden state as if it were the output of the last layer. Geva et al. [2022a] used this approach to interpret the prediction of Transformer-based language models, and we follow a similar approach. 2.3 WQK AND WVO Following Elhage et al. [2021], we describe the attention module in terms of interaction matrices $W_{QK}$ and $W_{VO}$ which will be later used in our theoretical derivation. The computation of the attention module (§2.1) can be re-interpreted as follows. The attention projection matrices $W_Q, W_K, W_V$ can be split along the column axis into H equal parts denoted by $W^i_Q, W^i_K, W^i_V \in \mathbb{R}^{d \times \frac{d}{H}}$ for $1 \le i \le H$. Similarly, the attention output matrix $W_O$ can be split along the row axis into H heads, $W^i_O \in \mathbb{R}^{d/H \times d}$. We define the interaction matrices as $W^i_{QK} := W^i_Q W^{iT}_K \in \mathbb{R}^{d \times d}$ and $W^i_{VO} := W^i_V W^i_O \in \mathbb{R}^{d \times d}$. 
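As a concrete illustration (not the authors' released code), the following minimal sketch extracts per-head $W^i_{QK}$ and $W^i_{VO}$ from a GPT-2 checkpoint. It assumes the Hugging Face implementation, where c_attn packs the query, key, and value projections along its output dimension and Conv1D weights are stored with shape (input dim, output dim); the layer and head indices are illustrative.

```python
# Minimal sketch: per-head interaction matrices for GPT-2 (Hugging Face layout assumed).
import torch
from transformers import GPT2Model

model = GPT2Model.from_pretrained("gpt2-medium")
d = model.config.n_embd            # 1024 for GPT-2 medium
H = model.config.n_head            # 16 heads
dh = d // H                        # 64 dimensions per head

def interaction_matrices(layer: int, head: int):
    attn = model.h[layer].attn
    W = attn.c_attn.weight.detach()            # (d, 3d): [W_Q | W_K | W_V]
    W_Q, W_K, W_V = W[:, :d], W[:, d:2 * d], W[:, 2 * d:]
    W_O = attn.c_proj.weight.detach()          # (d, d)
    s = slice(head * dh, (head + 1) * dh)      # columns of W_Q/W_K/W_V, rows of W_O, for head i
    W_QK = W_Q[:, s] @ W_K[:, s].T             # (d, d), input-independent
    W_VO = W_V[:, s] @ W_O[s, :]               # (d, d), input-independent
    return W_QK, W_VO

W_QK, W_VO = interaction_matrices(layer=21, head=7)
```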
Importantly, $W^i_{QK}$ and $W^i_{VO}$ are input-independent. Intuitively, $W_{QK}$ encodes the amount of attention between pairs of tokens. Similarly, in $W^i_{VO}$, the product of $W_V$ and $W_O$ can be viewed as a transition matrix that determines how attending to certain tokens affects the subsequent hidden state. We can restate the attention equations in terms of the interaction matrices. Recall (Eq. 1) that the output of the i’th head of the attention module is $A^i V^i_{att}$ and the final output of the attention module is (without the residual connection): $\mathrm{Concat}\left[A^1 V^1_{att}, \ldots, A^i V^i_{att}, \ldots, A^H V^H_{att}\right] W_O = \sum_{i=1}^{H} A^i (X W^i_V) W^i_O = \sum_{i=1}^{H} A^i X W^i_{VO}$. (2) Similarly, the attention map $A^i$ at the i’th head in terms of $W_{QK}$ is (softmax is done row-wise): $A^i = \mathrm{softmax}\!\left(\frac{(X W^i_Q)(X W^i_K)^T}{\sqrt{d/H}} + M\right) = \mathrm{softmax}\!\left(\frac{X W^i_{QK} X^T}{\sqrt{d/H}} + M\right)$. (3) 3 PROJECTING TRANSFORMER PARAMETERS INTO EMBEDDING SPACE In this section, we propose that Transformer parameters can be projected into embedding space for interpretation purposes. Our results extend Elhage et al. [2021], who obtained similar results for a two-layer attention-only network. We empirically support our framework in §4-§5. Given a matrix $A \in \mathbb{R}^{N \times d}$, we can project it into embedding space by multiplying by the embedding matrix E as $\hat{A} = AE \in \mathbb{R}^{N \times e}$. Let $E'$ be a right-inverse of E, that is, $EE' = I \in \mathbb{R}^{d \times d}$ (such an $E'$ exists if $d \le e$ and E is full-rank). Then we can reconstruct the original matrix with $E'$ as $A = A(EE') = \hat{A}E'$. We will use this simple identity to reinterpret the model’s operation in embedding space. To simplify our analysis, we ignore layer norms and biases, a standard simplification justified in prior work [Elhage et al., 2021]. In interpretation experiments (§4), we do not use an exact right inverse such as the Moore–Penrose pseudo-inverse [Moore, 1920; Bjerhammar, 1951; Penrose, 1955] but instead use the transpose of the embedding matrix, $E' = E^T$. This is because interpretation involves not only projecting using $E'$ but also applying a top-k operation where we inspect the vocabulary items with the largest logits. We empirically find that the Moore–Penrose pseudo-inverse does not work well for interpretation due to the top-k operation, and provide a justification and comprehensive empirical evidence in Appendix A. Conversely, $E^T$ empirically works well, and we conjecture this is due to the training procedure of LMs, where E is used to embed discrete tokens into the hidden state dimension and $E^T$ is used to predict a distribution over the vocabulary items from the last hidden state. Attention Module Recall that $W^i_{VO} := W^i_V W^i_O \in \mathbb{R}^{d \times d}$ is the interaction matrix between attention values and the output projection matrix for attention head i. By definition, the output of each head is $A^i X W^i_{VO} = A^i \hat{X} E' W^i_{VO}$. Since the output of the attention module is added to the residual stream, we can assume according to the residual stream view that it is meaningful to project it to the embedding space, similar to FF values. Thus, we expect the sequence of N e-dimensional vectors $(A^i X W^i_{VO})E = A^i \hat{X}(E' W^i_{VO} E)$ to be interpretable. Importantly, the role of $A^i$ is just to mix the representations of the updated N input vectors. This is similar to the FF module, where FF values (the parameters of the second layer) are projected into embedding space, and FF keys (parameters of the first layer) determine the coefficients for mixing them. Hence, we can assume that the interpretable components are in the term $\hat{X}(E' W^i_{VO} E)$. 
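Continuing the sketch above, and assuming $E' = E^T$ (with the Hugging Face wte.weight of shape (vocab, d) playing the role of $E^T$), the following hypothetical helper reads top vocabulary pairs from the embedded transition matrix $E^T W^i_{VO} E$ without materializing the full $e \times e$ matrix; the chunk size and top-k value are illustrative choices, not the paper's settings.

```python
# Minimal sketch: top vocabulary pairs of E^T W_VO E, scanned in row chunks.
import torch
from transformers import GPT2Tokenizer

tok = GPT2Tokenizer.from_pretrained("gpt2-medium")
E_T = model.wte.weight.detach()                      # (vocab, d): rows are token embeddings

def top_vocab_pairs(W_VO, k=50, chunk=1024):
    vocab = E_T.shape[0]
    left = E_T @ W_VO                                # (vocab, d) = E^T W_VO
    vals, pairs = [], []
    for start in range(0, vocab, chunk):
        scores = left[start:start + chunk] @ E_T.T   # a slab of rows of E^T W_VO E
        v, idx = scores.flatten().topk(k)
        rows, cols = start + idx // vocab, idx % vocab
        vals.append(v)
        pairs.append(torch.stack([rows, cols], dim=1))
    vals, pairs = torch.cat(vals), torch.cat(pairs)
    best = vals.topk(k).indices                      # global top-k over all chunks
    return [(tok.decode([int(pairs[i, 0])]), tok.decode([int(pairs[i, 1])])) for i in best]

# e.g. top_vocab_pairs(W_VO) for the head extracted in the previous snippet
```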
Zooming in on this operation, we see that it takes the previous hidden state in the embedding space ($\hat{X}$) and produces an output in the embedding space which will be incorporated into the next hidden state through the residual stream. Thus, $E' W^i_{VO} E$ is a transition matrix that takes a representation in the embedding space and outputs a new representation in the same space. Similarly, the matrix $W^i_{QK}$ can be viewed as a bilinear map (Eq. 3). To interpret it in embedding space, we perform the following operation with $E'$: $X W^i_{QK} X^T = (XEE') W^i_{QK} (XEE')^T = (XE) E' W^i_{QK} E'^T (XE)^T = \hat{X}(E' W^i_{QK} E'^T)\hat{X}^T$. Therefore, the interaction between tokens at different positions is determined by an $e \times e$ matrix that expresses the interaction between pairs of vocabulary items. FF Module Geva et al. [2022b] showed that FF value vectors $V \in \mathbb{R}^{d_{ff} \times d}$ are meaningful when projected into embedding space, i.e., for a FF value vector $v \in \mathbb{R}^d$, $vE \in \mathbb{R}^e$ is interpretable (see §2.1). In vectorized form, the rows of $VE \in \mathbb{R}^{d_{ff} \times e}$ are interpretable. On the other hand, the keys K of the FF layer are multiplied on the left by the output of the attention module, which are the queries of the FF layer. Denoting the output of the attention module by Q, we can write this product as $QK^T = \hat{Q} E' K^T = \hat{Q}(K E'^T)^T$. Because Q is a hidden state, we assume according to the residual stream view that $\hat{Q}$ is interpretable in embedding space. When multiplying $\hat{Q}$ by $K E'^T$, we are capturing the interaction in embedding space between each query and key, and thus expect $K E'^T$ to be interpretable in embedding space as well. Overall, FF keys and values are intimately connected: the i-th key controls the coefficient of the i-th value, so we expect their interpretation to be related. While not central to this work, we empirically show that key-value pairs in the FF module are similar in embedding space in Appendix B.1. Subheads Another way to interpret the matrices $W^i_{VO}$ and $W^i_{QK}$ is through the subhead view. We use the identity $AB = \sum_{j=1}^{b} A_{:,j} B_{j,:}$, which holds for arbitrary matrices $A \in \mathbb{R}^{a \times b}$, $B \in \mathbb{R}^{b \times c}$, where $A_{:,j} \in \mathbb{R}^{a \times 1}$ are the columns of the matrix A and $B_{j,:} \in \mathbb{R}^{1 \times c}$ are the rows of the matrix B. Thus, we can decompose $W^i_{VO}$ and $W^i_{QK}$ into a sum of $\frac{d}{H}$ rank-1 matrices: $W^i_{VO} = \sum_{j=1}^{d/H} W^{i,j}_V W^{i,j}_O$ and $W^i_{QK} = \sum_{j=1}^{d/H} W^{i,j}_Q W^{i,j\,T}_K$, where $W^{i,j}_Q, W^{i,j}_K, W^{i,j}_V \in \mathbb{R}^{d \times 1}$ are columns of $W^i_Q, W^i_K, W^i_V$ respectively, and $W^{i,j}_O \in \mathbb{R}^{1 \times d}$ are the rows of $W^i_O$. We call these vectors subheads. This view is useful since it allows us to interpret subheads directly by multiplying them with the embedding matrix E. Moreover, it shows a parallel between interaction matrices in the attention module and the FF module. Just like the FF module includes key-value pairs as described above, for a given head, its interaction matrices are a sum of interactions between pairs of subheads (indexed by j), which are likely to be related in embedding space. We show this is indeed empirically the case for pairs of subheads in Appendix B.1. We summarize our approach for projecting the different components of the Transformer into embedding space in Table 1. 4 INTERPRETABILITY EXPERIMENTS In this section, we provide empirical evidence for the viability of our approach as a tool for interpreting Transformer parameters. 4.1 PARAMETER INTERPRETATION EXAMPLES We take GPT-2 medium [Radford et al., 2019] and manually analyze its parameters. GPT-2 medium has a total of 384 attention heads (24 layers and 16 heads per layer). 
We take the embedded transition matrices $E' W^i_{VO} E$ for all heads and examine the top-k pairs of vocabulary items. As there are only 384 heads, we manually choose a few heads and present the top-k pairs in Appendix C.1 (k = 50). We observe that different heads capture different types of relations between pairs of vocabulary items, including word parts, heads that focus on gender, geography, orthography, particular part-of-speech tags, and various semantic topics. In Appendix C.2 we perform a similar analysis for $W_{QK}$. Appendix C.3 provides examples of key-value pairs from the FF modules of GPT-2 medium. We show random pairs (k, v) from the set of those pairs such that, when looking at the top-100 vocabulary items for k and v, at least 15% overlap. Such pairs account for approximately 5% of all key-value pairs. The examples show how key-value pairs often revolve around similar topics such as media, months, organs, etc. Last, we show we can use embeddings to locate FF values (or keys) related to a particular topic. We take a few vocabulary items related to a certain topic, e.g., [‘cm’, ‘kg’, ‘inches’], average their embeddings (we subtract the average embedding µ from E before averaging, which improves interpretability), and rank all FF values (or keys) based on their dot-product with the average. Appendix C.4 shows a few examples of FF values found with this method that are related to programming, measurements, and animals. 4.2 HIDDEN STATE AND PARAMETERS An advantage of zero-pass interpretation is that it does not require running inputs through the model, which is expensive and non-exhaustive. In this section (and this section only), we run a forward pass over inputs and examine if the representations in embedding space of dynamically-computed hidden states are “similar” to the representations of static parameter vectors that are activated. A technical side note: we use GPT-2, which applies layer norm to the Transformer output before projecting it to the embedding space with E. Thus, conservatively, layer norm should be considered as part of the projection operation (layer norm consists of standardizing the mean and variance of the input followed by an affine transformation; the latter part can be easily absorbed into E while adding a bias term). Empirically, however, we observe that projecting parameters directly without layer norm works well, which simplifies our analysis in §3. An exception is when projecting hidden states in this section, where we apply layer norm before projection to improve performance, similar to Geva et al. [2022a]. Experimental Design We use GPT-2 medium and run it over 60 examples from IMDB [Maas et al., 2011]. This provides us with a dynamically-computed hidden state h for every token and at the output of every layer. For the projection $\hat{h} \in \mathbb{R}^e$ of each such hidden state, we take the projections of the m most active parameter vectors $\{\hat{x}_i\}_{i=1}^{m}$ in the layer that computed h and check if they cover the dominant vocabulary items of $\hat{h}$ in embedding space. Specifically, let top-$k(wE)$ be the k vocabulary items with largest logits in embedding space for a vector $w \in \mathbb{R}^d$. We compute $R_k(\hat{x}_1, \ldots, \hat{x}_m, \hat{h}) = \frac{|\text{top-}k(\hat{h}) \cap \bigcup_{i=1}^{m} \text{top-}k(\hat{x}_i)|}{k}$ to capture whether activated parameter vectors cover the main vocabulary items corresponding to the hidden state. 
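A minimal sketch of the $R_k$ score follows; the helper names are ours, and the inputs are assumed to already be projected into embedding space (each vector holding logits over vocabulary items).

```python
# Minimal sketch: the R_k coverage score between activated parameters and a hidden state.
import torch

def top_k_ids(vec, k):
    return set(vec.topk(k).indices.tolist())

def R_k(projected_params, projected_hidden, k=100):
    # fraction of the hidden state's top-k vocabulary items that are covered by
    # the union of the top-k items of the m most active parameter vectors
    hidden_top = top_k_ids(projected_hidden, k)
    covered = set().union(*(top_k_ids(x, k) for x in projected_params))
    return len(hidden_top & covered) / k
```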
We find the m most active parameter vectors separately for FF keys (K), FF values (V), attention value subheads ($W_V$; see §3), and attention output subheads ($W_O$), where the activation of each parameter vector is determined by the vector’s “coefficient” as follows. For a FF key-value pair (k, v) the coefficient is $\sigma(q^T k)$, where $q \in \mathbb{R}^d$ is an input to the FF module, and $\sigma$ is the FF nonlinearity. For attention value-output subhead pairs (v, o) the coefficient is $x^T v$, where x is the input to this component (for attention head i, the input is one of the rows of $A^i X$, see Eq. 2). Results and Discussion Figure 2 presents the $R_k$ score averaged across tokens per layer. As a baseline, we compare the $R_k$ of the activated vectors $\{\hat{x}_i\}_{i=1}^{m}$ with the correctly-aligned hidden state $\hat{h}$ at the output of the relevant layer (blue bars) against the $R_k$ when randomly sampling $\hat{h}_{rand}$ from the set of all hidden states (orange bars). We conclude that the representations in embedding space induced by activated parameter vectors mirror, at least to some extent, the representations of the hidden states themselves. Appendix B.2 shows a variant of this experiment, where we compare activated parameters throughout GPT-2 medium’s layers to the last hidden state, which produces the logits used for prediction. 4.3 INTERPRETATION OF FINE-TUNED MODELS We now show that we can interpret the changes a model goes through during fine-tuning through the lens of embedding space. We fine-tune the top-3 layers of the 12-layer GPT-2-base with a sequence classification head on IMDB sentiment analysis (binary classification) and compute the difference between the original parameters and the fine-tuned model. We then project the difference of parameter vectors into embedding space and test if the change is interpretable w.r.t. sentiment analysis. Appendix D shows examples of projected differences randomly sampled from the fine-tuned layers. Frequently, the difference, or its negation, is projected to nouns, adjectives and adverbs that express sentiment for a movie, such as ‘amazing’, ‘masterpiece’, ‘incompetence’, etc. This shows that the differences are indeed projected into vocabulary items that characterize movie reviews’ sentiment. Almost all parameter groups present this behavior, except for V and $W_O$, which curiously are the parameters added to the residual stream. 5 ALIGNING MODELS IN EMBEDDING SPACE Assuming Transformers by and large operate in embedding space leads to an exciting possibility: we can relate different models to one another so long as they share a vocabulary and tokenizer. In §5.1, we show that we can align the layers of BERT models trained with different random seeds. In §5.2, we show the embedding space can be leveraged to “stitch” the parameters of a fine-tuned model to a model that was not fine-tuned. 5.1 LAYER ALIGNMENT Experimental Design Taking our approach to the extreme, the embedding space is a universal space, which depends only on the tokenizer, and in which Transformer parameters and hidden states reside. Consequently, we can align parameter vectors from different models in this space and compare them even if they come from different models, as long as they share a vocabulary. To demonstrate this, we use MultiBERT [Sellam et al., 2022], which contains 25 different instantiations of BERT initialized from different random seeds. We take parameters from two MultiBERT seeds and compute the Pearson correlation between their projections to embedding space. For example, let $V_A$, $V_B$ be the FF values of models A and B. 
We can project the values into embedding space, $V_A E_A$ and $V_B E_B$, where $E_A$, $E_B$ are the respective embedding matrices, and compute the Pearson correlation between projected values. This produces a similarity matrix $\tilde{S} \in \mathbb{R}^{|V_A| \times |V_B|}$, where each entry is the correlation coefficient between projected values from the two models. We bin $\tilde{S}$ by layer pairs and average the absolute value of the scores in each bin (different models might encode the same information in different directions, so we use absolute values) to produce a matrix $S \in \mathbb{R}^{L \times L}$, where L is the number of layers. Specifically, the average (absolute) correlation between vectors that come from layer $\ell_A$ in model A and layer $\ell_B$ in model B is registered in entry $(\ell_A, \ell_B)$ of S. Last, to obtain a one-to-one layer alignment, we use the Hungarian algorithm [Kuhn, 1955], which assigns exactly one layer from the first model to a layer from the second model. The algorithm’s objective is to maximize, given a similarity matrix S, the sum of similarities of the chosen pairs, such that each index in one model is matched with exactly one index in the other. We repeat this for all parameter groups ($W_Q, W_K, W_V, W_O, K$). Results and Discussion Figure 3 (left) shows the resulting alignment. Clearly, parameters from a certain layer in model A tend to align to the same layer in model B across all parameter groups. This suggests that different layers from different models that were trained separately (but with the same training objective and data) serve a similar function. As further evidence, we show that if not projected, the matching appears absolutely random in Figure 3 (right). We show the same results for other seed pairs as well in Appendix B.3. 5.2 ZERO-SHOT STITCHING Model stitching [Lenc and Vedaldi, 2015; Csiszárik et al., 2021; Bansal et al., 2021] is a relatively under-explored feature of neural networks, particularly in NLP. The idea is that different models, sometimes trained on different data and with different architectures, learn representations that can be aligned through a linear transformation, termed stitching. Representations correspond to hidden states, and thus one can learn a transformation matrix from one model’s hidden states to an equivalent hidden state in the other model. Here, we show that by going through embedding space one can align the hidden states of two models, i.e., stitch, without training. Given two models, we want to find a linear stitching transformation to align their representation spaces. According to our theory, given a hidden state $v \in \mathbb{R}^{d_1}$ from model A, we can project it to the embedding space as $vE_A$, where $E_A$ is its embedding matrix. Then, we can re-project to the feature space of model B with $E^+_B \in \mathbb{R}^{e \times d_2}$, where $E^+_B$ is the Moore–Penrose pseudo-inverse of the embedding matrix $E_B$ (since we are not interested in interpretation here, we use an exact right-inverse and not the transpose). This transformation can be expressed as multiplication with the kernel $K_{AB} := E_A E^+_B \in \mathbb{R}^{d_1 \times d_2}$. We employ the above approach to take representations of a fine-tuned classifier, A, and stitch them on top of a model B that was only pretrained, to obtain a new classifier based on B. Experimental Design We use the 24-layer GPT-2 medium as model A and the 12-layer GPT-2 base model trained in §4.3 as model B. We fine-tune the last three layers of model B on IMDB, as explained in §4.3. Stitching is simple and is performed as follows. Given the sequence of N hidden states $H^{\ell}_A \in \mathbb{R}^{N \times d_1}$ at the output of layer $\ell$ of model A ($\ell$ is a hyperparameter), we apply the stitching layer, which multiplies the hidden states with the kernel, computing $H^{\ell}_A K_{AB}$. 
This results in hidden states $H_B \in \mathbb{R}^{N \times d_2}$, used as input to the three fine-tuned layers from B. Results and Discussion Stitching produces models with accuracies that are higher than random on the IMDB evaluation set, but not consistently. Figure 4 shows the accuracy of stitched models against the layer index from model A over which stitching is performed. Out of 11 random seeds, three models obtained accuracy that is significantly higher than the baseline 50% accuracy, reaching an accuracy of roughly 70%, when stitching is done over the top layers. 6 RELATED WORK Interpreting Transformers is a broad area of research that has attracted much attention in recent years. A large body of work has focused on analyzing hidden representations, mostly through probing [Adi et al., 2016; Shi et al., 2016; Tenney et al., 2019; Rogers et al., 2020]. Voita et al. [2019a] used statistical tools to analyze the evolution of hidden representations throughout layers. Recently, Mickus et al. [2022] proposed to decompose the hidden representations into the contributions of different Transformer components. Unlike these works, we interpret parameters rather than the hidden representations. Another substantial effort has been to interpret specific network components. Previous work analyzed single neurons [Dalvi et al., 2018; Durrani et al., 2020], attention heads [Clark et al., 2019; Voita et al., 2019b], and feedforward values [Geva et al., 2020; Dai et al., 2021; Elhage et al., 2022]. While these works mostly rely on input-dependent neuron activations, we inspect “static” model parameters, and provide a comprehensive view of all Transformer components. Our work is most related to efforts to interpret specific groups of Transformer parameters. Cammarata et al. [2020] made observations about the interpretability of weights of neural networks. Elhage et al. [2021] analyzed 2-layer attention networks. We extend their analysis to multi-layer pre-trained Transformer models. Geva et al. [2020; 2022a;b] interpreted feedforward values in embedding space. We coalesce these lines of work and offer a unified interpretation framework for Transformers in embedding space. 7 DISCUSSION Our work has a few limitations that we care to highlight. First, it focuses on interpreting models through the vocabulary lens. While we have shown evidence for this, it does not preclude other factors from being involved in the computation process. Second, we used $E' = E^T$, but future research might find variants of E that improve performance. Last, we assume Transformer components can be projected to the embedding space with a single matrix multiplication, but this might depend on model training, e.g., in GPT-2 it involves a layer norm operation as explained in §4.2. Notwithstanding, we believe the benefits of our work overshadow its limitations. We provide a simple and efficient approach, which equips researchers with new tools to interpret Transformer models and relate them to one another. Apart from Elhage et al. [2021], there has been little work pursuing the embedding space approach, and we “sharpen” the tools they laid down and adjust them to existing pre-trained Transformers. Moreover, our framework allows us to view parameters from different models as residents of the same universal embedding space, where they can be compared in a model-agnostic fashion. 
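To make the two applications of §5 concrete, here are two minimal sketches under stated assumptions; they are illustrations, not the paper's released code. The first assumes that the projected FF values of each layer of two same-vocabulary models are already available as arrays of shape (number of values, vocabulary size), uses absolute Pearson correlation, and obtains a one-to-one layer matching with SciPy's Hungarian solver (in practice one would likely subsample value vectors to keep the correlations cheap).

```python
# Minimal sketch: layer alignment between two same-vocabulary models (Section 5.1).
import numpy as np
from scipy.optimize import linear_sum_assignment

def layer_alignment(proj_A, proj_B):
    """proj_A, proj_B: per-layer lists of projected FF values, each of shape (n_values, vocab)."""
    L = len(proj_A)
    S = np.zeros((L, L))
    for i in range(L):
        for j in range(L):
            # average absolute Pearson correlation between all value pairs of the two layers
            A = proj_A[i] - proj_A[i].mean(axis=1, keepdims=True)
            B = proj_B[j] - proj_B[j].mean(axis=1, keepdims=True)
            A /= np.linalg.norm(A, axis=1, keepdims=True)
            B /= np.linalg.norm(B, axis=1, keepdims=True)
            S[i, j] = np.abs(A @ B.T).mean()
    rows, cols = linear_sum_assignment(-S)   # Hungarian matching, maximizing total similarity
    return list(zip(rows.tolist(), cols.tolist())), S
```

The second sketch builds the stitching kernel $K_{AB} = E_A E^+_B$ with an exact pseudo-inverse and applies it to hidden states of model A; the choice of layer and the fine-tuned classification head of model B are outside the scope of this sketch.

```python
# Minimal sketch: zero-shot stitching kernel K_AB = E_A E_B^+ (Section 5.2).
import torch
from transformers import GPT2Model

model_A = GPT2Model.from_pretrained("gpt2-medium")   # d1 = 1024
model_B = GPT2Model.from_pretrained("gpt2")          # d2 = 768

E_A = model_A.wte.weight.detach().T                  # (d1, vocab), the paper's E_A
E_B = model_B.wte.weight.detach().T                  # (d2, vocab), the paper's E_B
K_AB = E_A @ torch.linalg.pinv(E_B)                  # (d1, d2), exact pseudo-inverse, no training

def stitch(hidden_A):
    """Map hidden states of model A, shape (N, d1), into model B's space, shape (N, d2)."""
    return hidden_A @ K_AB
```

The stitched states would then be fed to the fine-tuned top layers of model B to obtain the zero-shot classifier described above.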
We demonstrate two applications of this observation (model alignment and stitching) and argue future work can yield many additional applications. A RETHINKING INTERPRETATION The process of interpreting a vector v in Geva et al. [2022b] proceeds in two steps: first, the projection of the vector to the embedding space (vE); then, we use the list of the tokens that were assigned the largest values in the projected vector, i.e., top-$k(vE)$, as the interpretation of the projected vector. This is reasonable since (a) the most activated coordinates contribute the most when added to the residual stream, and (b) this matches how we eventually decode: we project to the embedding space and consider the top-1 token (or one of the few top tokens, when using beam search). In this work, we interpret inner products and matrix multiplications in the embedding space: given two vectors $x, y \in \mathbb{R}^d$, their inner product $x^T y$ can be considered in the embedding space by multiplying with E and then by one of its right inverses (e.g., its pseudo-inverse $E^+$ [Moore, 1920; Bjerhammar, 1951; Penrose, 1955]): $x^T y = x^T E E^+ y = (x^T E)(y E^{+T})^T$. Assume xE is interpretable in the embedding space, crudely meaning that it represents logits over vocabulary items. We expect y, which interacts with x, to also be interpretable in the embedding space. Consequently, we would like to take $y E^{+T}$ to be the projection of y. However, this projection does not take into account the subsequent interpretation using top-k. The projected vector $y E^{+T}$ might be harder to interpret in terms of its most activated tokens. To alleviate this problem, we need a different “inverse” matrix $E'$ that works well when considering the top-k operation. Formally, we want an $E'$ with the following “robustness” guarantee: $\text{keep-}k(xE)^T \, \text{keep-}k(yE') \approx x^T y$, where keep-$k(v)$ is equal to v for coordinates whose absolute value is in the top-k, and zero elsewhere. This is a stronger notion of inverse: not only is $EE' \approx I$, but even when truncating the vector in the embedding space we can still reconstruct it with $E'$. We claim that $E^T$ is a decent instantiation of $E'$ and provide some empirical evidence. While a substantive line of work [Ethayarajh, 2019; Gao et al., 2019; Wang et al., 2020; Rudman et al., 2021] has shown that embedding matrices are not isotropic (an isotropic matrix E has to satisfy $EE^T = \alpha I$ for some scalar $\alpha$), we show that it is isotropic enough to make $E^T$ a legitimate compromise. We randomly sample 300 vectors drawn from the normal distribution $\mathcal{N}(0, 1)$, compute for every pair x, y the cosine similarity between $x^T y$ and $\text{keep-}k(xE)^T \, \text{keep-}k(yE')$ for k = 1000, and then average over all pairs. We repeat this for $E' \in \{E^{+T}, E\}$ and obtain a score of 0.10 for $E^{+T}$ and 0.83 for E, showing that E is better when using top-k. More globally, we compare $E' \in \{E^{+T}, E\}$ for $k \in \{10, 50, 100, 200, 300, 500\}$ with three distributions: (i) x, y drawn from the normal $\mathcal{N}(0, 1)$ distribution; (ii) x, y chosen randomly from the FF values; (iii) x, y drawn from hidden states along Transformer computations. In Figure 5 (Left) we show the results, where dashed lines represent $E^+$ and solid lines represent $E^T$. For small values of k (used for interpretation), $E^T$ is superior to $E^+$ across all distributions. Interestingly, the hidden state distribution is the only distribution where $E^+$ has similar performance to $E^T$. Curiously, when looking at higher values of k the trend is reversed ($k \in \{512, 1024, 2048, 4096, 10000, 15000, 20000, 30000\}$); see Figure 5 (Right). 
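A minimal sketch of this robustness check follows; since the comparison is described only loosely, one reasonable reading (an assumption on our part) treats the collection of exact inner products and their keep-k approximations over all sampled pairs as two long vectors and measures a single cosine similarity between them, with the candidate projections being $E^{+T}$ and E itself.

```python
# Minimal sketch: keep-k "robustness" comparison between E^{+T} and E (Appendix A).
import torch

def keep_k(v, k):
    out = torch.zeros_like(v)
    idx = v.abs().topk(k).indices
    out[idx] = v[idx]
    return out

def keep_k_agreement(E, E_prime, k=1000, n=300):
    d = E.shape[0]
    xs = torch.randn(n, d)
    proj_E = torch.stack([keep_k(v, k) for v in xs @ E])          # keep-k(xE)
    proj_Ep = torch.stack([keep_k(v, k) for v in xs @ E_prime])   # keep-k(yE')
    iu = torch.triu_indices(n, n, offset=1)                       # all unordered pairs
    exact = (xs @ xs.T)[iu[0], iu[1]]
    approx = (proj_E @ proj_Ep.T)[iu[0], iu[1]]
    return torch.nn.functional.cosine_similarity(exact, approx, dim=0)

# E = model.wte.weight.detach().T gives the paper's (d, vocab) matrix E;
# keep_k_agreement(E, torch.linalg.pinv(E).T) uses E' = E^{+T}, while
# keep_k_agreement(E, E) uses E' = E (i.e., the transpose as right-inverse).
```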
This reconciles our results with findings showing that embedding matrices are not isotropic, as we see that indeed as k grows, $E^T$ becomes an increasingly bad approximate right-inverse of the embedding matrix. The only distribution that keeps high performance with $E^T$ is the hidden state distribution, which is an interesting future direction of investigation. B ADDITIONAL MATERIAL B.1 CORRESPONDING PARAMETER PAIRS ARE RELATED We define the following metric, applied to vectors after projecting them into the embedding space: $\mathrm{Sim}_k(\hat{x}, \hat{y}) = \frac{|\text{top-}k(\hat{x}) \cap \text{top-}k(\hat{y})|}{|\text{top-}k(\hat{x}) \cup \text{top-}k(\hat{y})|}$, where top-$k(v)$ is the set of the k most activated indices in the vector v (which correspond to tokens in the embedding space). This metric is the Jaccard index [Jaccard, 1912] applied to the top-k tokens from each vector. In Figure 6, Left, we demonstrate that corresponding FF key and value vectors are more similar (in embedding space) than two random key and value vectors. In Figure 6, Right, we show a similar result for attention value and output vectors. In Figure 6, Bottom, the same analysis is done for attention query and key vectors. This shows that there is a much higher-than-chance relation between corresponding FF keys and values (and the same for attention values and outputs). B.2 FINAL PREDICTION AND PARAMETERS We show that the final prediction of the model is correlated in embedding space with the most activated parameters from each layer. This implies that these objects are germane to the analysis of the final prediction in the embedding space, which in turn suggests that the embedding space is a viable choice for interpreting these vectors. Figure 7 shows that, just as in §4.2, correspondence is better when hidden states are not randomized, suggesting that these parameter interpretations have an impact on the final prediction. B.3 PARAMETER ALIGNMENT PLOTS FOR ADDITIONAL MODEL PAIRS Alignment in embedding space of layers of pairs of BERT models trained with different random seeds, for additional model pairs. [Figures: layer-alignment plots for seed pairs 1 vs 2, 2 vs 3, 3 vs 4, and 4 vs 5, one panel per parameter group (K, V, WK, WQ, WV, WO).] C EXAMPLE CASES C.1 VALUE-OUTPUT MATRICES Below we show value-output pairs from different heads of GPT-2 Medium. For each head, we show the 50 pairs with the largest values in the $e \times e$ transition matrix. There are 384 attention heads in GPT-2 medium, from which we manually choose a subset. 
Throughout the section some lists were marked with asterisks indicating the way this particular list was created: * - pairs of the form (x, x) were excluded from the list C.1.1 LOW LEVEL LANGUAGE MODELING Layer 21 Head 7* (’FN’, ’NF’), (’ Ramos’, ’Ram’), (’ Hughes’, ’Hug’), (’GR’, ’gran’), (’NF’, ’FN’), (’CL’, ’CLA’), (’ McCain’, ’McC’), (’ Marshall’, ’Marsh’), (’Hug’, ’ Hughes’), (’ Tanner’, ’Tan’), (’NH’, ’nih’), (’NR’, ’NRS’), (’Bow’, ’ Bowman’), (’Marsh’, ’ Marshall’), (’ Jacobs’, ’Jac’), (’ Hayes’, ’Hay’), (’Hay’, ’ Hayes’), (’ McCorm’, ’McC’), (’NR’, ’NI’), (’ Dawson’, ’ sidx’), (’Tan’, ’ Tanner’), (’GR’, ’gra’), (’jac’, ’JA’), (’zo’, ’zos’), (’NF’, ’NI’), (’ McCull’, ’McC’), (’Jac’, ’ Jacobs’), (’ Beet’, ’ Beetle’), (’FG’, ’GF’), (’ja’, ’jas’), (’ Wilkinson’, ’Wil’), (’Ram’, ’ Ramos’), (’GR’, ’GRE’), (’FN’, ’ NF’), (’McC’, ’ McCorm’), (’ Scarborough’, ’Scar’), (’Ba’, ’ Baal’), (’FG’, ’FP’), (’FN’, ’FH’), (’Gar’, ’ Garfield’), (’jac’, ’jas’), (’nut’, ’nuts’), (’ Wis’, ’WI’), (’ Vaughan’, ’ Vaughn’), (’PF’, ’FP’), (’RN’, ’RNA’), (’jac’, ’ Jacobs’), (’FN’, ’FM’), (’Kn’, ’ Knox’), (’nic’, ’NI’) Layer 19 Head 13 (guessing the first letter/consonant of the word) (’senal’, ’ R’), # arsenal (’senal’, ’R’), (’vernment’, ’ G’), # government (’ Madness’, ’ M’), (’ Mayhem’, ’ M’), (’nesday’, ’ W’), # wednesday (’vernment’, ’G’), (’ Madness’, ’M’), (’lace’, ’ N’), # necklace (’nesday’, ’W’), (’senal’, ’Rs’), (’vernment’, ’ g’), (’farious’, ’ N’), # nefarious (’eneg’, ’ C’), (’senal’, ’ r’), (’ruary’, ’ F’), # february (’senal’, ’RIC’), (’ondo’, ’ R’), (’ Mandela’, ’ N’), # nelson (’ Mayhem’, ’M’), (’senal’, ’ RD’), (’estine’, ’ C’), (’vernment’, ’Gs’), (’senal’, ’RF’), (’esis’, ’ N’), (’Reviewed’, ’ N’), (’arette’, ’ C’), # cigarette (’rome’, ’ N’), (’theless’, ’ N’), # nonetheless (’lace’, ’N’), (’DEN’, ’ H’), (’ versa’, ’ V’), (’bably’, ’ P’), # probably (’vernment’, ’GF’), (’vernment’, ’g’), (’vernment’, ’GP’), (’ornia’, ’ C’), # california (’ilipp’, ’ F’), (’umbered’, ’ N’), (’arettes’, ’ C’), (’senal’, ’RS’), (’onsense’, ’ N’), (’senal’, ’RD’), (’senal’, ’RAL’), (’uci’, ’ F’), (’ondo’, ’R’), (’senal’, ’ RI’), (’iday’, ’ H’), # holiday (’senal’, ’ Rx’), (’odor’, ’ F’) Layer 20 Head 9 (’ behalf’, ’On’), (’ behalf’, ’ On’), (’ behalf’, ’ on’), (’ periods’, ’during’), (’ bounds’, ’within’), (’ envelope’, ’ inside’), (’door’, ’outside’), (’ envelope’, ’inside’), (’ regime’, ’ Under’), (’ periods’, ’ during’), (’lihood’, ’ LIKE’), (’ occasions’, ’ on’), (’ regime’, ’Under’), (’door’, ’inside’), (’period’, ’during’), (’lihood’, ’Like’), (’ periods’, ’ During’), (’ envelope’, ’Inside’), (’ sake’, ’for’), (’ doors’, ’ inside’), (’ regime’, ’ under’), (’ behalf’, ’ ON’), (’ purposes’, ’for’), (’ occasions’, ’On’), (’ doors’, ’inside’), (’ basis’, ’ on’), (’ regimes’, ’ Under’), (’doors’, ’outside’), (’ Osc’, ’inside’), (’ periods’, ’During’), (’door’, ’ inside’), (’ regime’, ’ UNDER’), (’ regimes’, ’ under’), (’ regimes’, ’Under’), (’doors’, ’inside’), (’zx’, ’inside’), (’ period’, ’during’), (’ascript’, ’inside’), (’door’, ’Inside’), (’ occasions’, ’ On’), (’ysc’, ’BuyableInstoreAndOnline’) , (’ envelope’, ’ Inside’), (’ pauses’, ’during’), (’ regime’, ’under’), (’ occasion’, ’ on’), (’ doors’, ’outside’), (’ banner’, ’ UNDER’), (’ envelope’, ’within’), (’abouts’, ’ here’), (’ duration’, ’during’) Layer 22 Head 5 (named entities, mostly made of two parts) (’enegger’, ’ Schwartz’), (’shire’, ’ Lincoln’), (’xual’, ’Weiss’), (’nery’, ’ Nun’), (’ Qiao’, ’ Huang’), (’schild’, ’ Schwarz’), (’oslov’, ’ 
Czech’), (’ Rica’, ’ Costa’), (’ Qiao’, ’ Qiao’), (’xual’, ’ RW’), (’ Nadu’, ’ Tamil’), (’ Nadu’, ’Tam’), (’shire’, ’ Baldwin’), (’swick’, ’ Hoff’), (’xual’, ’ Weiss’), (’ Takeru’, ’ Yamato’), (’xual’, ’ Grassley’), (’swick’, ’ Schwartz’), (’enegger’, ’ Schiff’), (’enegger’, ’Weiss’), (’xual’, ’RW’), (’shire’, ’ Nottingham’), (’shire’, ’ Barrett’), (’arest’, ’ Buch’), (’ Fei’, ’ Fei’), (’miah’, ’Jere’), (’swick’, ’ Owl’), (’ufact’, ’ Swanson’), (’akuya’, ’ Tanaka’), (’ Sachs’, ’ Feinstein’), (’enegger’, ’ Wagner’), (’otle’, ’Roberts’), (’shire’, ’ Neville’), (’oslov’, ’ Prague’), (’sburg’, ’ Hammond’), (’ ILCS’, ’ Dunham’), (’ Malfoy’, ’ Draco’), (’yip’, ’Billy’), (’iversal’, ’ Monroe’), (’iversal’, ’Murray’), (’Yang’, ’Yang’), (’akuya’, ’ Krishna’), (’schild’, ’ Schwartz’), (’tz’, ’ Rabb’), (’shire’, ’gow’), (’enegger’, ’ Feldman’), (’cair’, ’ Chou’), (’enegger’, ’ Duffy’), (’enegger’, ’Sch’), (’ Jensen’, ’ Jensen’) Layer 22 Head 13 (’ Additionally’, ’ the’), (’ Unfortunately’, ’ the’), (’ Nevertheless’, ’ the’), (’ Sadly’, ’ the’), (’ However’, ’ the’), (’ Furthermore’, ’ the’), (’ Additionally’, ’,’), (’ During’, ’ the’), (’ Moreover’, ’ the’), (’ Whilst’, ’ the’), (’ Since’, ’ the’), (’ Unfortunately’, ’,’), (’ Additionally’, ’-’), (’ Perhaps’, ’ the’), (’ Sadly’, ’,’), (’ Throughout’, ’ the’), (’ Nevertheless’, ’,’), (’ While’, ’ the’), (’ However’, ’,’), (’ Although’, ’ the’), (’ There’, ’ the’), (’ Furthermore’, ’,’), (’ Eventually’, ’ the’), (’ Meanwhile’, ’ the’), (’ Hopefully’, ’ the’), (’ Nevertheless’, ’-’), (’ During’, ’,’), (’ Regardless’, ’ the’), (’ However’, ’-’), (’ Whilst’, ’,’), (’ Additionally’, ’ and’), (’ Moreover’, ’,’), (’ Unfortunately’, ’-’), (’ They’, ’ the’), (’ Sadly’, ’-’), (’ Whereas’, ’ the’), (’ Additionally’, ’ a’), (’ Furthermore’, ’-’), (’ Unlike’, ’ the’), (’ Typically’, ’ the’), (’ Since’, ’,’), (’ Normally’, ’ the’), (’ Perhaps’, ’,’), (’ During’, ’-’), (’ Throughout’, ’,’), (’ While’, ’,’), (’ Nevertheless’, ’ a’), (’ Interestingly’, ’ the’), (’ Unfortunately’, ’ and’), (’ Unfortunately’, ’ a’) C.1.2 GENDER Layer 18 Head 1 (’ Marie’, ’women’), (’ Marie’, ’ actresses’), (’ Anne’, ’women’), (’ Anne’, ’Women’), (’ Marie’, ’woman’), (’ Marie’, ’Women’), (’ Anne’, ’woman’), (’ Marie’, ’Woman’), (’ Anne’, ’ actresses’), (’ Marie’, ’ heroine’), (’Jane’, ’Women’), (’ Anne’, ’ heroine’), (’Jane’, ’women’), (’ actresses’, ’Women’), (’ Anne’, ’Woman’), (’ Esther’, ’Women’), (’ Esther’, ’women’), (’ Marie’, ’girls’), (’ Anne’, ’Mrs’), (’ Marie’, ’ actress’), (’ actresses’, ’women’), (’Jane’, ’Woman’), (’ Marie’, ’ girls’), (’Jane’, ’ actresses’), (’Anne’, ’Woman’), (’ Marie’, ’Girls’), (’Anne’, ’women’), (’ Anne’, ’Girls’), (’ actresses’, ’Woman’), (’ Marie’, ’ Women’), (’ Anne’, ’ Women’), (’ Anne’, ’ girls’), (’ Anne’, ’girl’), (’Anne’, ’Women’), (’Women’, ’Woman’), (’ Anne’, ’girls’), (’Anne’, ’ actresses’), (’ Michelle’, ’women’), (’ Marie’, ’ Actress’), (’ Marie’, ’girl’), (’ Anne’, ’ Feminist’), (’ Marie’, ’ women’), (’ Devi’, ’Women’), (’ Elizabeth’, ’Women’), (’ Anne’, ’ actress’), (’Anne’, ’Mrs’), (’Answer’, ’answered’), (’Anne’, ’woman’), (’maid’, ’Woman’), (’Marie’, ’women’) C.1.3 GEOGRAPHY Layer 16 Head 6* (’ Mumbai’, ’ Chennai’), (’ Mumbai’, ’India’), (’ Chennai’, ’ Mumbai’), (’ Tasmania’, ’ Queensland’), (’ Rahul’, ’India’), (’ Gujar’, ’India’), (’ Bangalore’, ’ Chennai’), (’Scotland’, ’England’), (’ Kerala’, ’ Chennai’), (’ Mumbai’, ’ Delhi’), (’Scotland’, ’Britain’), (’ Mumbai’, ’ Bangalore’), (’India’, ’Pakistan’), (’Ireland’, ’Scotland’), (’ 
Bangalore’, ’ Mumbai’), (’ Chennai’, ’ Bangalore’), (’ Gujar’, ’ Aadhaar’), (’ Maharashtra’, ’ Mumbai’), (’ Gujarat’, ’ Maharashtra’), (’ Gujar’, ’ Gujarat’), (’Australia’, ’Australian’), (’ Gujarat’, ’India’), (’ Gujar’, ’ Rahul’), (’ Mumbai’, ’ Maharashtra’), (’England’, ’Britain’), (’ Chennai’, ’India’), (’ Bombay’, ’ Mumbai’), (’ Kerala’, ’ Tamil’), (’ Mumbai’, ’ Hindi’), (’ Tasman’, ’ Tasmania’), (’India’, ’ Mumbai’), (’ Gujar’, ’ Hindi’), (’ Gujar’, ’ Maharashtra’), (’Austral’, ’ Australians’), (’ Kerala’, ’ Maharashtra’), (’ Bangalore’, ’India’), (’ Kerala’, ’India’), (’ Bombay’, ’India’), (’Austral’, ’Australia’), (’India’, ’ Aadhaar’), (’ Mumbai’, ’ Sharma’), (’Austral’, ’Australian’), (’ Kerala’, ’ Mumbai’), (’England’, ’Scotland’), (’ Gujar’, ’ Mumbai’), (’ Mumbai’, ’ Rahul’), (’ Tasman’, ’ Queensland’), (’ Chennai’, ’ Tamil’), (’ Maharashtra’, ’ Gujarat’), (’ Modi’, ’India’) Layer 18 Head 9 (’ Winnipeg’, ’ Winnipeg’), (’ Edmonton’, ’ Winnipeg’), (’ Winnipeg’, ’ Ottawa’), (’ Calgary’, ’ Winnipeg’), (’ Ottawa’, ’ Winnipeg’), (’ Winnipeg’, ’ Calgary’), (’ Winnipeg’, ’CBC’), (’ Winnipeg’, ’Canada’), (’ Canberra’, ’ Canberra’), (’ RCMP’, ’ Winnipeg’), (’ Ottawa’, ’CBC’), (’ Winnipeg’, ’Canadian’), (’Toronto’, ’ Winnipeg’), (’ Winnipeg’, ’ Canadians’), (’ Edmonton’, ’ Ottawa’), (’ Winnipeg’, ’ RCMP’), (’ Winnipeg’, ’ Edmonton’), (’ Ottawa’, ’Canadian’), (’Canadian’, ’ Winnipeg’), (’Toronto’, ’ Calgary’), (’ Winnipeg’, ’ Quebec’), (’ Winnipeg’, ’ Canad’), (’Toronto’, ’Canadian’), (’ Edmonton’, ’ Edmonton’), (’ Ottawa’, ’ Calgary’), (’ Leafs’, ’ Winnipeg’), (’ Edmonton’, ’ Calgary’), (’ Ottawa’, ’Canada’), (’ Calgary’, ’Canadian’), (’Toronto’, ’Canada’), (’ Calgary’, ’ Calgary’), (’Ott’, ’ Winnipeg’), (’ Winnipeg’, ’ Saskatchewan’), (’ Winnipeg’, ’ Canadian’), (’ Ottawa’, ’ Ottawa’), (’ Calgary’, ’ Ottawa’), (’ Winnipeg’, ’ Manitoba’), (’ Canadians’, ’ Winnipeg’), (’ Winnipeg’, ’ Canada’), (’ RCMP’, ’ Calgary’), (’Toronto’, ’ Manitoba’), (’Toronto’, ’ Ottawa’), (’CBC’, ’ Winnipeg’), (’Canadian’, ’Canada’), (’ Edmonton’, ’Canadian’), (’ RCMP’, ’ Ottawa’), (’ Winnipeg’, ’ipeg’), (’Toronto’, ’Toronto’), (’Canadian’, ’ Calgary’), (’ Ottawa’, ’ Canadians’) Layer 16 Head 2* (’ Australians’, ’Austral’), (’Austral’, ’Australia’), (’Austral’, ’ Canberra’), (’ Canberra’, ’Austral’), (’ Edmonton’, ’ Winnipeg’), (’Austral’, ’Australian’), (’ Edmonton’, ’ Alberta’), (’ Australians’, ’Australia’), (’Austral’, ’ Australians’), (’ovych’, ’Ukraine’), (’ Canad’, ’ Quebec’), (’ Australians’, ’Australian’), (’ Manitoba’, ’ Winnipeg’), (’ Winnipeg’, ’ Manitoba’), (’Canada’, ’Canadian’), (’ Bulgar’, ’Moscow’), (’ Edmonton’, ’ Manitoba’), (’Austral’, ’berra’), (’Australian’, ’Austral’), (’ovych’, ’ Ukrainians’), (’ Canadians’, ’Canada’), (’ Australians’, ’ Canberra’), (’Canadian’, ’Canada’), (’ovych’, ’ Yanukovych’), (’ Trudeau’, ’Canada’), (’ Bulgar’, ’ Dmitry’), (’Austral’, ’ Australia’), (’ Canad’, ’ Mulcair’), (’ Canberra’, ’berra’), (’oglu’, ’Turkish’), (’Canada’, ’udeau’), (’ Oilers’, ’ Edmonton’), (’ Canberra’, ’Australia’), (’ Edmonton’, ’Canada’), (’ Calgary’, ’ Edmonton’), (’ Calgary’, ’ Alberta’), (’ Trudeau’, ’udeau’), (’ Edmonton’, ’ Calgary’), (’ Trudeau’, ’Canadian’), (’ Canberra’, ’Australian’), (’ Canucks’, ’ Vancouver’), (’Australian’, ’Australia’), (’ Fraser’, ’ Vancouver’), (’ Edmonton’, ’Canadian’), (’elaide’, ’Austral’), (’ Braz’, ’Tex’), (’ RCMP’, ’Canada’), (’sov’, ’Moscow’), (’ Bulgar’, ’Russia’), (’Canada’, ’ Canadians’) Layer 21 Head 12* (’ Indones’, ’ Indonesian’), (’ Nguyen’, ’ 
Vietnamese’), (’ Jakarta’, ’ Indonesian’), (’ Indonesia’, ’ Indonesian’), (’oglu’, ’Turkish’), (’ Indones’, ’ Indonesia’), (’ Indones’, ’ Jakarta’), (’ Koreans’, ’ Korean’), (’oglu’, ’ Turkish’), (’ Taiwanese’, ’ Taiwan’), (’ Nguyen’, ’ Thai’), (’Brazil’, ’ Brazilian’), (’ Indonesia’, ’ Indones’), (’ Taiwanese’, ’Tai’), (’oglu’, ’ Istanbul’), (’ Indonesian’, ’ Indones’), (’ Jakarta’, ’ Indones’), (’ Nguyen’, ’ Laos’), (’ Sloven’, ’ Slovenia’), (’ Korean’, ’ Koreans’), (’ Nguyen’, ’ Cambod’), (’zzi’, ’Italy’), (’Tai’, ’ Taiwanese’), (’ Jakarta’, ’ Indonesia’), (’ Indonesian’, ’ Indonesia’), (’ Bulgaria’, ’ Bulgarian’), (’ Icelandic’, ’ Iceland’), (’ Koreans’, ’ Korea’), (’ Brazilian’, ’Brazil’), (’ Bulgar’, ’ Bulgarian’), (’ Malays’, ’ Malaysian’), (’oglu’, ’ Ankara’), (’ Bulgarian’, ’ Bulgaria’), (’ Indones’, ’ Malays’), (’ Tai’, ’ Taiwanese’), (’oglu’, ’Turkey’), (’ Janeiro’, ’Brazil’), (’zzi’, ’Italian’), (’ Malays’, ’ Kuala’), (’ Fuk’, ’Japanese’), (’ Indonesian’, ’ Jakarta’), (’ Taiwan’, ’ Taiwanese’), (’oglu’, ’ Erdogan’), (’ Nguyen’, ’ Viet’), (’ Filipino’, ’ Philippine’), (’ Indonesia’, ’ Jakarta’), (’ Jong’, ’ Koreans’), (’ Duterte’, ’ Filipino’), (’ Azerbai’, ’ Azerbaijan’), (’ Bulgarian’, ’ Bulgar’) C.1.4 BRITISH SPELLING Layer 19 Head 4 (’ Whilst’, ’ realise’), (’ Whilst’, ’ Whilst’), (’ Whilst’, ’ realised’), (’ Whilst’, ’ organise’), (’ Whilst’, ’ recognise’), (’ Whilst’, ’ civilisation’), (’ Whilst’, ’ organisation’), (’ Whilst’, ’ whilst’), (’ Whilst’, ’ organising’), (’ Whilst’, ’ organised’), (’ Whilst’, ’ organis’), (’ Whilst’, ’ util’), (’ Whilst’, ’ apologise’), (’ Whilst’, ’ emphas’), (’ Whilst’, ’ analyse’), (’ Whilst’, ’ organisations’), (’ Whilst’, ’ recognised’), (’ Whilst’, ’ flavours’), (’ Whilst’, ’ colour’), (’ Whilst’, ’colour’), (’ Whilst’, ’ Nasa’), (’ Whilst’, ’ Nato’), (’ Whilst’, ’ analys’), (’ Whilst’, ’ flavour’), (’ Whilst’, ’ colourful’), (’ Whilst’, ’ colours’), (’ organising’, ’ realise’), (’ Whilst’, ’ behavioural’), (’ Whilst’, ’ coloured’), (’ Whilst’, ’ learnt’), (’ Whilst’, ’ favourable’), (’ Whilst’, ’isation’), (’ Whilst’, ’ programmes’), (’ organis’, ’ realise’), (’ Whilst’, ’ authorised’), (’ Whilst’, ’ practise’), (’ Whilst’, ’ criticised’), (’ Whilst’, ’ organisers’), (’ organising’, ’ organise’), (’ Whilst’, ’ analysed’), (’ Whilst’, ’ programme’), (’ Whilst’, ’ behaviours’), (’ Whilst’, ’ humour’), (’ Whilst’, ’isations’), (’ Whilst’, ’ tyres’), (’ Whilst’, ’ aluminium’), (’ organised’, ’ realise’), (’ Whilst’, ’ favour’), (’ Whilst’, ’ ageing’), (’ organis’, ’ organise’) C.1.5 RELATED WORDS Layer 13 Head 8* (’ mirac’, ’ miraculous’), (’ mirac’, ’ miracle’), (’ nuanced’, ’ nuance’), (’Better’, ’ smarter’), (’ equitable’, ’ healthier’), (’ liberating’, ’ liberated’), (’ unaffected’, ’ untouched’), (’ equitable’, ’ unbiased’), (’ inconsistent’, ’failed’), (’ emanc’, ’ liberated’), (’ equitable’, ’ humane’), (’ liberated’, ’ liberating’), (’ incompatible’, ’failed’), (’ mirac’, ’ miracles’), (’ consensual’, ’ peacefully’), (’ uncond’, ’ unconditional’), (’ unexpected’, ’ unexpectedly’), (’ unconditional’, ’ untouched’), (’Better’, ’ healthier’), (’ unexpectedly’, ’ unexpected’), (’ graceful’, ’ peacefully’), (’ emanc’, ’ emancipation’), (’ effortlessly’, ’ seamlessly’), (’ honorable’, ’ peacefully’), (’ unconditional’, ’ uncond’), (’ rubbish’, ’ excuses’), (’ emanc’, ’ liberating’), (’ equitable’, ’ peacefully’), (’ Feather’, ’ gracious’), (’ emancipation’, ’ liberated’), (’ nuanced’, ’ nuances’), (’icable’, ’ avoids’), (’ liberated’, ’ 
freeing’), (’ liberating’, ’ freeing’), (’ inconsistent’, ’ lousy’), (’ lousy’, ’failed’), (’ unconditional’, ’ unaffected’), (’ equitable’, ’ivable’), (’ equitable’, ’Honest’), (’erning’, ’ principled’), (’ survival’, ’surv’), (’ocre’, ’ lackluster’), (’ equitable’, ’ liberating’), (’Bah’, ’Instead’), (’ incompatible’, ’ inappropriate ’), (’ emancipation’, ’ emanc’), (’ unchanged’, ’ unaffected’), (’ peacefully’, ’ peaceful’), (’ equitable’, ’ safer’), (’ unconditional’, ’ uninterrupted ’) Layer 12 Head 14* (’ perished’, ’ died’), (’ perished’, ’ dies’), (’ testify’, ’ testifying’), (’ intervened’, ’ interven’), (’ advises’, ’ advising’), (’ disbanded’, ’ disband’), (’lost’, ’ perished’), (’ died’, ’ perished’), (’ applauded’, ’ applaud’), (’ dictates’, ’ dictate’), (’ prev’, ’ prevailed’), (’ advise’, ’ advising’), (’shed’, ’thood’), (’Reviewed’, ’orsi’), (’ dies’, ’ perished’), (’published’, ’ publishes’), (’ prevailed’, ’ prevail’), (’ died’, ’ dies’), (’ testified’, ’ testifying’), (’ testifying’, ’ testify’), (’ dictates’, ’ governs’), (’ complicit’, ’ complicity’), (’ dictated’, ’ dictate’), (’enough’, ’CHO’), (’ skelet’, ’independence’), (’ Recomm’, ’ prescribe’), (’essential’, ’ perished’), (’noticed’, ’CHO’), (’avorable’, ’ approving’), (’ perish’, ’ perished’), (’ overseeing’, ’ oversee’), (’ skelet’, ’shed’), (’EY’, ’chart’), (’ presiding’, ’ overseeing’), (’ fundament’, ’pees’), (’ sanction’, ’appro’), (’ prevail’, ’ prevailed’), (’ governs’, ’ regulates’), (’tails’, ’shed’), (’ Period’, ’chart’), (’lihood’, ’hower’), (’ prev’, ’ prevail’), (’ aids’, ’helps’), (’ dictated’, ’ dict’), (’ dictated’, ’ dictates’), (’ Dise’, ’itta’), (’REC’, ’CHO’), (’exclusive’, ’ORTS’), (’ Helpful’, ’helps’), (’bart’, ’ciples’) Layer 14 Head 1* (’ misunderstand’, ’ incorrectly’) , (’ Proper’, ’ properly’), (’ inaccur’, ’ incorrectly’), (’ misunderstand’, ’ wrongly’), (’ misinterpret’, ’ incorrectly’), (’ incorrect’, ’ incorrectly’), (’ mistakes’, ’ incorrectly’), (’ misunderstanding’, ’ incorrectly’), (’ proper’, ’ properly’), (’fail’, ’ incorrectly’), (’ faulty’, ’ incorrectly’), (’ misrepresent’, ’ incorrectly’), (’ failing’, ’ fails’), (’ inaccurate’, ’ incorrectly’), (’ errors’, ’ incorrectly’), (’ harmful’, ’ Worse’), (’ misunderstand’, ’ wrong’), (’ misunderstand’, ’ improperly’), (’wrong’, ’ incorrectly’), (’ harmful’, ’ incorrectly’), (’ mistake’, ’ incorrectly’), (’ mis’, ’ incorrectly’), (’fail’, ’ fails’), (’ detrimental’, ’ Worse’), (’ rightful’, ’ properly’), (’ misunderstand’, ’ inappropriately’), (’ harmful’, ’ unnecessarily’), (’ neglect’, ’ unnecessarily’), (’ correctly’, ’ properly’), (’ Worst’, ’ Worse’), (’ failure’, ’ fails’), (’ satisfactory’, ’ adequately’), (’ defective’, ’ incorrectly’), (’ misunderstand’, ’ mistakenly’), (’ harming’, ’ Worse’), (’ mishand’, ’ incorrectly’), (’adequ’, ’ adequately’), (’ misuse’, ’ incorrectly’), (’Failure’, ’ fails’), (’ hurts’, ’ Worse’), (’ misunderstand’, ’wrong’), (’ mistakenly’, ’ incorrectly’), (’ failures’, ’ fails’), (’ adequate’, ’ adequately’), (’ properly’, ’ correctly’), (’ hurting’, ’ Worse’), (’ Proper’, ’ correctly’), (’ fail’, ’ fails’), (’ mistaken’, ’ incorrectly’), (’ harming’, ’ adversely’) Layer 14 Head 13* (’ editors’, ’ editorial’), (’ broadcasters’, ’ broadcasting’) , (’ broadcasting’, ’ broadcasts’), (’ broadcast’, ’ broadcasts’), (’ Broadcasting’, ’ broadcasters’) , (’ editors’, ’ Editorial’), (’ broadcasters’, ’ broadcast’), (’ Broadcasting’, ’ broadcast’), (’ lectures’, ’ lecture’), (’ Broadcast’, ’ 
broadcasting’), (’ broadcasters’, ’ broadcaster’), (’ broadcasters’, ’ broadcasts’), (’ Publishers’, ’ publishing’), (’ broadcasting’, ’ broadcast’), (’ broadcasters’, ’ Broadcasting’) , (’ Publishers’, ’ Publishing’), (’ lecture’, ’ lectures’), (’ Editors’, ’ editorial’), (’ broadcast’, ’ broadcasting’), (’ Broadcasting’, ’ broadcasts’), (’ broadcasting’, ’ broadcasters’) , (’ journalism’, ’ journalistic’), (’reports’, ’Journal’), (’ Broadcast’, ’ Broadcasting’), (’ Publishers’, ’Publisher’), (’azeera’, ’ Broadcasting’), (’Reporting’, ’Journal’), (’ journalistic’, ’ journalism’), (’ Broadcasting’, ’ broadcaster’), (’ broadcasting’, ’ broadcaster’), (’ broadcaster’, ’ broadcasting’), (’ editors’, ’ publication’), (’ journalism’, ’journal’), (’ Journalists’, ’Journal’), (’ documentary’, ’ documentaries’) , (’ filming’, ’ filmed’), (’ publishers’, ’ publishing’), (’ journalism’, ’Journal’), (’ Broadcast’, ’ broadcasts’), (’ broadcast’, ’ broadcasters’), (’ articles’, ’Journal’), (’ reporting’, ’reports’), (’ manuscripts’, ’ manuscript’), (’ publish’, ’ publishing’), (’azeera’, ’ broadcasters’), (’ Publishers’, ’ publication’), (’ Publishers’, ’ publications’), (’ newspapers’, ’ Newsp’), (’ Broadcast’, ’ broadcasters’), (’ Readers’, ’Journal’) C.2 QUERY-KEY MATRICES Layer 22 Head 1 (’ usual’, ’ usual’), (’ occasional’, ’ occasional’), (’ aforementioned’, ’ aforementioned’), (’ general’, ’ usual’), (’ usual’, ’ slightest’), (’agn’, ’ealous’), (’ traditional’, ’ usual’), (’ free’, ’amina’), (’ major’, ’ major’), (’ frequent’, ’ occasional’), (’ generous’, ’ generous’), (’ free’, ’lam’), (’ regular’, ’ usual’), (’ standard’, ’ usual’), (’ main’, ’ usual’), (’ complete’, ’ Finished’), (’ main’, ’liest’), (’ traditional’, ’ traditional’), (’ latest’, ’ aforementioned’), (’ current’, ’ aforementioned’), (’ normal’, ’ usual’), (’ dominant’, ’ dominant’), (’ free’, ’ministic’), (’ brief’, ’ brief’), (’ biggest’, ’liest’), (’usual’, ’ usual’), (’ rash’, ’ rash’), (’ regular’, ’ occasional’), (’ specialized’, ’ specialized’), (’ free’, ’iosis’), (’ free’, ’hero’), (’ specialty’, ’ specialty’), (’ general’, ’iosis’), (’ nearby’, ’ nearby’), (’ best’, ’liest’), (’ officially’, ’ formal’), (’ immediate’, ’mediate’), (’ special’, ’ ultimate’), (’ free’, ’otropic’), (’ rigorous’, ’ comparative’), (’ actual’, ’ slightest’), (’ complete’, ’ comparative’), (’ typical’, ’ usual’), (’ modern’, ’ modern’), (’ best’, ’ smartest’), (’ free’, ’ free’), (’ highest’, ’ widest’), (’ specialist’, ’ specialist’), (’ appropriate’, ’ slightest’), (’ usual’, ’liest’) Layer 0 Head 9 (’59’, ’27’), (’212’, ’39’), (’212’, ’38’), (’217’, ’39’), (’37’, ’27’), (’59’, ’26’), (’54’, ’88’), (’156’, ’39’), (’212’, ’79’), (’59’, ’28’), (’57’, ’27’), (’212’, ’57’), (’156’, ’29’), (’36’, ’27’), (’217’, ’79’), (’59’, ’38’), (’63’, ’27’), (’72’, ’39’), (’57’, ’26’), (’57’, ’34’), (’59’, ’34’), (’156’, ’27’), (’91’, ’27’), (’156’, ’38’), (’63’, ’26’), (’59’, ’25’), (’138’, ’27’), (’217’, ’38’), (’72’, ’27’), (’54’, ’27’), (’36’, ’29’), (’72’, ’26’), (’307’, ’39’), (’37’, ’26’), (’217’, ’57’), (’37’, ’29’), (’54’, ’38’), (’59’, ’29’), (’37’, ’28’), (’307’, ’38’), (’57’, ’29’), (’63’, ’29’), (’71’, ’27’), (’138’, ’78’), (’59’, ’88’), (’89’, ’27’), (’561’, ’79’), (’212’, ’29’), (’183’, ’27’), (’54’, ’29’) Layer 17 Head 6* (’ legally’, ’ legal’), (’ legal’, ’ sentencing’), (’ legal’, ’ arbitration’), (’ boycot’, ’ boycott’), (’ legal’, ’ criminal’), (’ legal’, ’ Judicial’), (’ legal’, ’ rulings’), (’ judicial’, ’ sentencing’), (’ marketing’, ’ 
advertising’), (’ legal’, ’ confidential’), (’ protesting’, ’ protest’), (’ recruited’, ’ recruit’), (’ recruited’, ’ recruits’), (’ judicial’, ’ criminal’), (’ legal’, ’ exemptions’), (’ demographics’, ’ demographic’), (’ boycott’, ’ boycot’), (’ sentencing’, ’ criminal’), (’ recruitment’, ’ recruits’), (’ recruitment’, ’ recruit’), (’ Constitutional’, ’ sentencing’) , (’ Legal’, ’ sentencing’), (’ constitutional’, ’ sentencing’) , (’ legal’, ’ subpoena’), (’ injury’, ’ injuries’), (’ FOIA’, ’ confidential’), (’ legal’, ’ licenses’), (’ donation’, ’ donations’), (’ disclosure’, ’ confidential’), (’ negotiation’, ’ negotiating’), (’ Judicial’, ’ legal’), (’ legally’, ’ criminal’), (’ legally’, ’ confidential’), (’ legal’, ’ jur’), (’ legal’, ’ enforcement’), (’ legal’, ’ lawyers’), (’ legally’, ’ enforcement’), (’ recruitment’, ’ recruiting’), (’ recruiting’, ’ recruit’), (’ criminal’, ’ sentencing’), (’ legal’, ’ attorneys’), (’ negotiations’, ’ negotiating’), (’ legally’, ’ arbitration’), (’ recruited’, ’ recruiting’), (’ legally’, ’ exemptions’), (’ legal’, ’ judicial’), (’ voting’, ’ Vote’), (’ negotiated’, ’ negotiating’), (’ legislative’, ’ veto’), (’ fund
1. What is the focus of the paper regarding transformer models? 2. What are the strengths and weaknesses of the proposed method for analyzing transformer models? 3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? 4. Are there any questions or concerns regarding the paper?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper This paper proposes a method to study the structure of transformer models by projecting the parameters into the space of the input embeddings. They show that the attention matrices can be described by interaction and transition matrices in embedding space which define how a hidden state (defined in terms of combinations of input embeddings) is transformed by that layer by transforming between embeddings. They use this analysis method to analyze features such as which types of words each attention head specializes in, how the parameters change during finetuning, and how the features learned by ensembles of models with the same parameters/settings connect to each other. Strengths And Weaknesses This paper proposes a very interesting mechanism for analyzing transformer-based language models such as BERT and GPT, and providing an interpretable framework for what these models learn. Importantly, the proposed framework is also input-independent, and can be used to analyze the parameters only without having to pass large numbers of possible inputs through the model to analyze the hidden states. Deep neural models are often criticized for their lack of interpretability, and the contributions of this paper provide useful tools for analyzing one of the most common classes of models currently in use. The examples shown are interesting and relevant, and demonstrate that this method can provide valuable insight into what each component of the model learns. While I think this paper is very interesting and well-supported, I do think there are a few areas in which it could be improved. First, the proposed analysis method is mostly used to analyze relatively simple features of the layers or hidden states - the top-k most attended to words and the top-k words each of those words is transformed into. The analysis focuses largely on identifying e.g. which types of words each attention head attends most to, or what types of words tend to become the focus after finetuning. This analysis doesn't shed much light on the higher level structure and logic learned by the transformer, and thus there is a limit to its utility. This is, admittedly, a much more difficult problem, however, and I still believe the proposed framework is interesting and useful. My second criticism would be that the paper primarily focuses on the proposed theoretical framework and examples of how it can be used to make these models more interpretable, and provides little in the way of concrete applications. It would be nice to see some more examples of how this theoretical framework can be used to achieve practical empirical results - see the experiments in Geva et al. for toxic language suppression or early exiting, for example. The one concrete application that this paper does discuss is 'stitching' - performing a zero-shot mapping from the latent space of a given model A at a particular intermediate layer to the latent space of model B at some other intermediate layer. While this idea is very interesting, it is discussed only briefly and the results shown are not particularly convincing. If this section is to be retained, it should be expanded, justifications should be added for why and how this can be used, and more work should be put into improving the results. Clarity, Quality, Novelty And Reproducibility The work is clear, well-written and easy to follow. 
My one criticism would be that most of the actual results are contained within the appendices - but this is perhaps unavoidable to some extent, to avoid cluttering the main paper too much. The work draws very heavily on the works of Elhage et al. (2021) and Geva et al. (2022), but it does add significant and novel extensions to the analysis in those works.
ICLR
Title Pareto Set Learning for Neural Multi-Objective Combinatorial Optimization Abstract Multiobjective combinatorial optimization (MOCO) problems can be found in many real-world applications. However, exactly solving these problems would be very challenging, particularly when they are NP-hard. Many handcrafted heuristic methods have been proposed to tackle different MOCO problems over the past decades. In this work, we generalize the idea of neural combinatorial optimization, and develop a learning-based approach to approximate the whole Pareto set for a given MOCO problem without further search procedure. We propose a single preference-conditioned model to directly generate approximate Pareto solutions for any trade-off preference, and design an efficient multiobjective reinforcement learning algorithm to train this model. Our proposed method can be treated as a learning-based extension for the widely-used decomposition-based multiobjective evolutionary algorithm (MOEA/D). It uses a single model to accommodate all the possible preferences, whereas other methods use a finite number of solutions to approximate the Pareto set. Experimental results show that our proposed method significantly outperforms some other methods on the multiobjective traveling salesman problem, multiobjective vehicle routing problem, and multiobjective knapsack problem in terms of solution quality, speed, and model efficiency. 1 INTRODUCTION Many real-world applications can be modeled as multiobjective combinatorial optimization (MOCO) problems (Ehrgott & Gandibleux, 2000). Examples include the multiobjective traveling salesman problem (MOTSP) (Lust & Teghem, 2010a), the multiobjective vehicle routing problem (MOVRP) (Jozefowiez et al., 2008) and the multiobjective knapsack problem (MOKP) (Bazgan et al., 2009). These problems have multiple objectives to optimize, and no single solution can optimize all the objectives at the same time. Instead, there is a set of Pareto optimal solutions with different trade-offs among the objectives. It is very challenging to find all the exact Pareto optimal solutions for a MOCO problem. Actually, finding one single Pareto optimal solution can be NP-hard for many problems (Ehrgott & Gandibleux, 2000), and the number of Pareto solutions could be exponentially large with regard to the problem size (Ehrgott, 2005; Herzel et al., 2021). The decision-maker’s preference among different objectives is usually unknown in advance, making it very difficult to reduce the problem into a single-objective one. Over the past several decades, many methods have been developed to find an approximate Pareto set for different MOCO problems within a reasonable computational time. These methods often need carefully handcrafted and specialized heuristics for each problem. It can be very labor-intensive in practice. In many real-world applications, practitioners need to solve many different instances for the same particular problem, where the instances can be easily obtained or generated (Bengio et al., 2020). It is desirable to learn the patterns behind these problem instances explicitly or implicitly to design efficient algorithms (Cappart et al., 2021a). Machine learning techniques can be naturally used for this purpose. Some learning-based methods have been recently proposed for solving single-objective combinatorial optimization problems (Bengio et al., 2020; Vesselinova et al., 2020; Mazyavkina et al., 2021; Cappart et al., 2021a). 
In this work, we extend the learning-based method to solve MOCO problems in a principled way as shown in Figure 1. Our main contributions include: • We propose a novel neural multiobjective combinatorial optimization method to approximate the whole Pareto set via a single preference-conditioned model. It allows decision makers to obtain any preferred trade-off solution without any search effort. • We develop an efficient end-to-end reinforcement learning algorithm to train the single model for all different preferences simultaneously, and a simple yet powerful active adaption method to handle out-of-distribution problem instances. • We conduct comprehensive experiments on MOTSP, MOVRP and MOKP with different settings. The results show that our proposed method can successfully approximate the Pareto sets for different problems in an efficient way. It also significantly outperforms other methods in terms of solution quality, speed, and model efficiency. 2 BACKGROUND AND RELATED WORK Multiobjective Combinatorial Optimization (MOCO). MOCO has been attracting growing research efforts from different communities over the past several decades (Sawaragi et al., 1985; Wallenius et al., 2008; Herzel et al., 2021). There are two main approaches to tackle MOCO problems: exact methods and approximation methods (Ehrgott, 2005). Exact methods could be prohibitively costly when, as often happens, the MOCO problem is NP-hard and the problem size is very large (Florios & Mavrotas, 2014). For this reason, many heuristics (Jaszkiewicz, 2002; Zhang & Li, 2007; Ehrgott & Gandibleux, 2008) and approximation methods (Papadimitriou & Yannakakis, 2000; Herzel et al., 2021) have been developed to find a manageable number of approximate Pareto solutions with a reasonable computational budget. However, these methods usually depend on carefully handcrafted designs for each specific problem (Ehrgott & Gandibleux, 2000), and the required effort is often nontrivial in real-world applications. Machine Learning for Combinatorial Optimization. As summarized in Bengio et al. (2020), there are three main learning-based approaches for combinatorial optimization: learning to configure algorithms (Kruber et al., 2017; Bonami et al., 2018), learning alongside the algorithms (Lodi & Zarpellon, 2017; Gasse et al., 2019; Chen & Tian, 2019), and learning to directly predict the solutions (Nowak et al., 2018; Emami & Ranka, 2018; Larsen et al., 2018). Neural combinatorial optimization (NCO) belongs to the last category, where the model directly produces a good solution for a given problem instance. Vinyals et al. (2015) proposed a pointer network to sequentially construct a solution for the TSP problem. Bello et al. (2017) made a critical improvement by using reinforcement learning to train the model, eliminating the impractical requirement of collecting optimal solutions for NP-hard problems. Several other improvements to the model structure and training procedure have been proposed in the past few years (Nazari et al., 2018; Deudon et al., 2018; Kool et al., 2019; Veličković & Blundell, 2021), especially with graph neural networks (GNNs) (Dai et al., 2017; Li et al., 2018; Joshi et al., 2019; Dwivedi et al., 2020; Drori et al., 2020).
Recent efforts have been made on more efficient learning strategies (Kwon et al., 2020; Karalias & Loukas, 2020; Lisicki et al., 2020; Geisler et al., 2022), learning-based graph search (Cappart et al., 2021b; Kool et al., 2021; Fu et al., 2021; Xin et al., 2021; Hudson et al., 2022), and iterative improvement methods (Wu et al., 2021; Ma et al., 2021; Li et al., 2021). Neural MOCO. Most of the existing learning-based methods are for single-objective combinatorial problems. Recently, a few attempts have been made to solve MOCO problems (Li et al., 2020; Wu et al., 2020; Zhang et al., 2021a;b). These methods adopt the MOEA/D framework (Zhang & Li, 2007) to decompose a MOCO problem into a number of single-objective subproblems, and then build a set of models to solve each subproblem separately. However, since the number of Pareto solutions would be exponentially large (Ehrgott, 2005), the required number of models would be huge for finding the whole Pareto set. In this work, we propose a single preference-conditioned model for solving MOCO problems, with which the decision makers can easily obtain any trade-off solutions. The proposed single neural MOCO solver could be much easier to use in a real-world system (Veličković & Blundell, 2021) than those using a large set of different models. 3 PROBLEM FORMULATION 3.1 MULTIOBJECTIVE COMBINATORIAL OPTIMIZATION A multiobjective combinatorial optimization (MOCO) problem can be defined as follows: min_{x∈X} F(x) = (f_1(x), f_2(x), . . . , f_m(x)), (1) where X is a discrete search space, and F(x) = (f_1(x), . . . , f_m(x)) is an m-objective vector. Since the individual objectives conflict with each other, no single solution can optimize all of them at the same time. Therefore, practitioners are interested in Pareto optimal solutions, defined as follows. Definition 1 (Pareto Dominance). Let x_a, x_b ∈ X; x_a is said to dominate x_b (x_a ≺ x_b) if and only if f_i(x_a) ≤ f_i(x_b) for all i ∈ {1, . . . ,m} and f_j(x_a) < f_j(x_b) for some j ∈ {1, . . . ,m}. Definition 2 (Pareto Optimality). A solution x^* ∈ X is a Pareto optimal solution if there does not exist x̂ ∈ X such that x̂ ≺ x^*. The set of all Pareto optimal solutions is called the Pareto set, and the image of the Pareto set in the objective space is called the Pareto front. Each Pareto solution represents an optimal trade-off among the objectives, and it is impossible to further improve one of the objectives without deteriorating any other objective. 3.2 DECOMPOSITION AND PREFERENCE-BASED SCALARIZATION Decomposition is a mainstream strategy for solving multiobjective optimization problems (Zhang & Li, 2007). It decomposes a multiobjective problem into a number of subproblems, each of which can be a single-objective or multiobjective optimization problem. MOEA/D (Zhang & Li, 2007) and its variants (Trivedi et al., 2016) solve these subproblems in a collaborative manner and generate a finite set of Pareto solutions to approximate the Pareto front. The most widely used way to construct a single-objective subproblem is preference-based scalarization (Ehrgott, 2005; Miettinen, 2012). For an m-objective optimization problem, a preference vector for the objective functions can be defined as λ ∈ R^m that satisfies λ_i ≥ 0 and ∑_{i=1}^{m} λ_i = 1. Weighted-Sum Aggregation is the simplest approach. It defines the aggregation function to minimize in the subproblem associated with λ as g^{ws}(x|λ) = ∑_{i=1}^{m} λ_i f_i(x). (2) However, this approach can only find solutions on the convex hull of the Pareto front (Ehrgott, 2005).
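For concreteness, a minimal NumPy sketch of the Pareto dominance check in Definition 1 and the weighted-sum aggregation in Eq. (2) is given below; the function names and the example objective vectors are purely illustrative and are not part of the original formulation.

```python
import numpy as np

def dominates(f_a, f_b):
    """Return True if objective vector f_a Pareto-dominates f_b (Definition 1)."""
    f_a, f_b = np.asarray(f_a, dtype=float), np.asarray(f_b, dtype=float)
    return bool(np.all(f_a <= f_b) and np.any(f_a < f_b))

def weighted_sum(f_x, pref):
    """Weighted-sum aggregation g^ws(x|lambda) of Eq. (2) for a single objective vector."""
    pref = np.asarray(pref, dtype=float)
    assert np.all(pref >= 0) and np.isclose(pref.sum(), 1.0)
    return float(np.dot(pref, f_x))

# Illustrative two-objective example.
f_a, f_b = np.array([1.0, 2.0]), np.array([1.5, 2.5])
print(dominates(f_a, f_b))                 # True: f_a is no worse in all objectives and better in at least one
print(weighted_sum(f_a, pref=[0.3, 0.7]))  # 0.3*1.0 + 0.7*2.0 = 1.7
```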
Weighted-Tchebycheff (Weighted-TCH) Aggregation is an alternative approach to minimize: gtch(x|λ) = max 1≤i≤m {λi|fi(x)− z∗i |}, (3) where z∗i < minx∈X fi(x) is an ideal value for fi(x). Any Pareto optimal solution could be an optimal solution of problem (3) with a specific (but unknown) preference λ (Choo & Atkins, 1983). 3.3 CURRENT DRAWBACKS AND OUR METHOD Drawbacks of Existing Methods. For many MOCO problems, the size of the Pareto set would be exponentially large with respect to the input size (e.g., nodes in MOTSP). It is computationally impractical for existing methods to find the whole Pareto set (Herzel et al., 2021). For this reason, all of the existing heuristic-based and learning-based methods are to find a small subset of approximate Pareto solutions. Decision makers can only select solutions from this small set, which often does not contain their preferred solutions. In addition, scalarization may also produce a complicated single objective subproblem. For example, the Tchebycheff scalarized subproblem of MOTSP is not a classic TSP, and thus cannot be solved by the highly specialized TSP solvers such as LKH (Helsgaun, 2000) or Concorde (Applegate et al., 2007). Our Method. Instead of finding a set of finite solutions, we propose a novel way to approximate the whole Pareto set using a single model. With our proposed model, decision makers can easily obtain any solution from the approximate Pareto set to satisfy their preferred trade-offs in real time as shown in Figure 2. This is a clear advantage to support interactive decision making. In addition, our proposed reinforcement learning based method can use a scalarization method to combine multiobjective rewards, and does not need to consider the problem-specific condition. In this paper, we mainly consider learning the whole Pareto front. It is possible to incorporate decision-maker’s preferences on specific regions for model building and inference as discussed in Appendix D.6. We believe our proposed method is a new principled way to solve multiobjective combinatorial optimization problems. 4 THE PROPOSED MODEL: PREFERENCE-CONDITIONED NEURAL MOCO 4.1 PREFERENCE-CONDITIONED SOLUTION CONSTRUCTION Decomposition and scalarization link preferences to their corresponding Pareto solutions. This work builds a preference-conditioned model to accommodate all the preferences. We use the MOTSP as an example to explain our model design. In an MOTSP instance s, a fully connected graph of n nodes (cities) with m distance metrics on each edge is given. A feasible solution is a tour that visits each city exactly once and returns to the starting city. The i-th objective to minimize is the tour length (total cost) based on the i-th distance metric. A tour can be represented as π = (π1, · · · , πt, · · · , πn), πt ∈ {1, · · · , n}, a permutation of all the nodes defining the order in which n cities is visited. Our model defines a preference-conditioned stochastic policy pθ(λ)(π|s) parameterized by θ(λ) to construct a valid solution in sequence: pθ(λ)(π|s) = ∏n t=1 pθ(λ)(πt|s,π1:t−1). (4) The goal is to learn an optimal preference-conditioned policy pθ(λ)(π|s) to construct tours with the lowest scalarized costs for each preference λ. 4.2 THE PROPOSED MODEL We propose to use an Attention Model (AM) (Kool et al., 2019) as our basic encoder-decoder model as shown in Figure 3. 
For the MOCO problems considered in this work, a preference-agnostic encoder is able to encode problem instances into embeddings (e.g., embeddings for all cities) used in the preference-conditioned decoder. In our model, only the decoder's parameters θ_decoder(λ) are conditioned on the preference λ: θ(λ) = [θ_encoder, θ_decoder(λ)]. (5) Preference-agnostic Encoder. The encoder takes a problem instance s (e.g., an MOTSP instance with n cities) as its input, and outputs a set of d-dimensional node embeddings {h_1, · · · , h_n} for each city. For a given instance, the same embeddings can be used for different preferences. Hence we only need a single forward pass for the dense encoder. We use the attention-based encoder as in Kool et al. (2019) for all preferences. Preference-based Attention Decoder. The decoder has the same structure as in the attention-based model (Kool et al., 2019), but with parameters θ_decoder(λ) = [W_Q(λ), W_K(λ), W_V(λ), W_MHA(λ)] conditioned on the preference λ. It takes the node embeddings for all cities as input, and sequentially selects the next node π_t with probability p_θ(λ)(π_t|s, π_{1:t−1}). At time step t, the decoder first constructs a context embedding ĥ_(C) = [h_{π_1}, h_{π_{t−1}}] W_Q(λ) from the first selected node h_{π_1} and the last selected node h_{π_{t−1}}. The matrix W_Q(λ) ∈ R^{2d×d} projects the concatenated 2d-dimensional vector to a d-dimensional vector. Then we further aggregate the context embedding via a Multi-Head Attention (MHA) (Vaswani et al., 2017) with the embeddings for all cities {h_1, · · · , h_n}: h_(C) = MHA(Q = ĥ_(C), K = {h_1, · · · , h_n} W_K(λ), V = {h_1, · · · , h_n} W_V(λ)) W_MHA(λ), (6) where Q, K, V are the query, key and value for MHA, respectively. W_MHA(λ) represents the MHA parameters. The context embedding h_(C) contains all information for the instance and the current partial tour at step t. We can calculate the logit for selecting each city with its embedding h_j: logit_j = C · tanh(h_(C)^T h_j / √d) if j ≠ π_{t′} for all t′ < t, and logit_j = −∞ otherwise. (7) All already visited cities are masked with −∞ and will not be selected as the next city. The logits of the remaining cities are clipped into [−C, C] (C = 10) as in the AM model (Kool et al., 2019). The probability of choosing the j-th city at time step t can be calculated as p_θ(λ)(π_t = j|s, π_{1:t−1}) = e^{logit_j} / ∑_k e^{logit_k}. With this probability, the decoder can construct a feasible tour. One remaining design issue is how to generate the preference-conditioned parameters θ_decoder(λ). Multiplicative interactions (Jayakumar et al., 2020) and hypernetworks (Schmidhuber, 1992; Ha et al., 2017) provide a powerful and efficient way for conditional computation, which is widely used for transfer learning (von Oswald et al., 2020; Ehret et al., 2021; Lin et al., 2020; Navon et al., 2021). We use a simple MLP hypernetwork θ_decoder(λ) = MLP(λ|ψ) to generate the decoder parameters conditioned on the preference. The details of our proposed model can be found in Appendix B.
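To make the preference-conditioned computation concrete, the following PyTorch-style sketch shows one way a small MLP hypernetwork could map a preference vector to a flat vector of decoder parameters, together with masked and clipped compatibility logits in the spirit of Eq. (7). The class names, layer sizes, and tensor shapes are illustrative assumptions rather than the exact configuration used here.

```python
import torch
import torch.nn as nn

class DecoderHypernet(nn.Module):
    """Maps an m-dimensional preference vector to a flat vector of decoder parameters."""
    def __init__(self, m, hidden_dim, n_decoder_params):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(m, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, n_decoder_params),
        )

    def forward(self, pref):           # pref: (m,)
        return self.mlp(pref)          # flat theta_decoder(lambda), to be reshaped into W_Q, W_K, ...

def masked_clipped_logits(context, node_emb, visited, clip=10.0):
    """Compatibility scores in the spirit of Eq. (7): tanh clipping plus a -inf mask on visited nodes."""
    d = node_emb.size(-1)
    scores = clip * torch.tanh(node_emb @ context / d ** 0.5)   # (n,)
    return scores.masked_fill(visited, float('-inf'))

# Toy usage with 20 cities and 128-dimensional embeddings (all values random).
hypernet = DecoderHypernet(m=2, hidden_dim=128, n_decoder_params=1024)
flat_params = hypernet(torch.tensor([0.3, 0.7]))
logits = masked_clipped_logits(torch.randn(128), torch.randn(20, 128),
                               visited=torch.zeros(20, dtype=torch.bool))
probs = torch.softmax(logits, dim=-1)      # selection probabilities over the 20 cities
```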
Algorithm 1 Neural MOCO Training 1: Input: preference distribution Λ, instances distribution S, number of training steps T , number of preferences per iteration K, batch size B, number of tours N 2: Initialize the model parameters θ 3: for t = 1 to T do 4: λk ∼ SamplePreference(Λ) ∀k ∈ {1, · · · ,K} 5: si ∼ SampleInstance(S) ∀i ∈ {1, · · · , B} 6: πjki ∼ SampleTour(pθ(λk)(·|si)) ∀k, i ∀j ∈ {1, · · · , N} 7: b(si|λk)← 1N ∑N j=1 L(π j ki|λk, si) ∀k ∈ {1, · · · ,K} ∀i ∈ {1, · · · , B} 8: ∇J (θ)← 1KBN ∑K k=1 ∑B i=1 ∑N j=1[(L(π j ki|λk, si)− b(si|λk))∇θ(λk) log pθ(λk)(π j ki|si)] 9: θ ← ADAM(θ,∇J (θ)) 10: end for 11: Output: The model parameter θ Instance Augmentation for MOCO. Our proposed model only has a small extra computational and memory overhead to the original single-objective AM solver. We keep our model as simple as possible, making it easy for our approach to use other models and other improvements developed for single-objective NCO. These properties are crucially important for generalizing the NCO to multiobjective problems. In this work, we simply extend the instance augmentation method (Kwon et al., 2020) to MOCO. The details can be found in Appendix B.1. 5 PREFERENCE-CONDITIONED MULTIOBJECTIVE POLICY OPTIMIZATION 5.1 COST FUNCTION Our proposed node selection strategy guarantees that the model can always generate feasible solutions. In this section, we develop an efficient multiobjective policy optimization method to train the model for all the preferences simultaneously. For an MOTSP problem, the objective functions are a vector of m different costs (i.e. lengths) for a tour L(π) = [L1(π), · · · , Lm(π)]. We can define a weighted-Tchebycheff scalarized cost for each preference λ: L(π|λ) = max 1≤i≤m {λi|Li(π)− (z∗i − ε)|}, (8) where z∗i is an ideal cost for the i-th objective. For a given instance s, our goal is to minimize the expected cost for all preferences: J (θ|s) = Eλ∼Λ,π∼pθ(λ)(·|s)L(π|λ), (9) where Λ is the uniform distribution over all valid preferences. To train the model, we repeatedly sample different instances s ∼ S at each iteration. We define the training loss as J (θ) = Es∼SJ (θ|s). 5.2 MULTIOBJECTIVE REINFORCE For a given instance s and a specific preference λ, we use the REINFORCE (Williams, 1992) to estimate the gradient for the preference-conditioned scalar cost: ∇J (θ|λ, s) = Eπ∼pθ(λ)(·|s)[(L(π|λ, s)− b(s|λ))∇θ(λ) log pθ(λ)(π|s)], (10) where b(s|λ) is the baseline of expected cost to reduce the gradient variance. This gradient can be estimated by Monte Carlo sampling. At each update step, we randomly sample K preference {λ1, · · · , λK} ∼ Λ, B instances {s1, · · · , sB} ∼ S, and N different tour {π1i , · · · ,πNi } ∼ pθ(λk)(·|si) for each λk-si combination. The approximated gradient is: ∇J (θ) ≈ 1 KBN K∑ k=1 B∑ i=1 N∑ j=1 [(L(πji |λk, si)− b(si|λk))∇θ(λk) log pθ(λk)(π j i |si)]. (11) We use the shared baseline bshared(si|λk) = 1N ∑N j=1 L(π j ki|λk, si) over N sampled tours for each λk − si combination. The starting node for each tour πjki is chosen in random to force diverse rollouts as proposed in (Kwon et al., 2020). The algorithm is shown in Algorithm 1. 5.3 ACTIVE ADAPTION We also propose a simple yet powerful active adaption approach to further adjust the whole model to approximate the Pareto front for a given test instance in Appendix B.3. The proposed method does not depend on specific instance distribution S, and is suitable for out-of-distribution adaption. 6 EXPERIMENTS Problems and Model Setting. 
We consider MOTSP (Lust & Teghem, 2010a), MOCVRP (Lacomme et al., 2006) and MOKP (Bazgan et al., 2009) in our experimental studies, and use the same model settings for all problems with different task-specific input sizes and mask methods. The main policy model encoder is the Attention Model (Kool et al., 2019) and the hypernetwork is an MLP. We randomly generate 100,000 problem instances on the fly for each epoch, and train the model for 200 epochs. The optimizer is ADAM with learning rate η = 10^−4 and weight decay 10^−6. We train our models on a single RTX 2080-Ti GPU, and it costs about 10 minutes per epoch on MOTSP100. We give detailed model settings, problem formulations, and more experimental results in Appendices B, C and D. The source code can be found at https://github.com/Xi-L/PMOCO. Baseline. We call our proposed preference-conditioned multiobjective combinatorial optimization method P-MOCO. We compare it with three widely-used evolutionary algorithm frameworks for MOCO: MOGLS (Jaszkiewicz, 2002) is a multiobjective genetic local search algorithm, NSGAII (Deb et al., 2002) is a Pareto dominance-based multiobjective genetic algorithm, and MOEA/D (Zhang & Li, 2007) is a decomposition-based multiobjective evolutionary algorithm. All these algorithm frameworks need problem-specific heuristics to generate and search feasible solutions for different problems. We also compare P-MOCO with two other learning-based methods: DRL-MOA (Li et al., 2020) decomposes a MOCO problem with different preferences and builds a Pointer Network (Vinyals et al., 2015; Bello et al., 2017) to solve each subproblem, and AM-MOCO is a multi-model variant of our proposed model, which builds an Attention Model (Kool et al., 2019) for each subproblem. The weight-sum scalarizations of MOTSP and MOKP are their respective single-objective counterparts. Therefore, we also compare our method with the approach that uses state-of-the-art single-objective solvers for each weight-sum subproblem. Model Information for the learning-based methods is shown in Table 1. Our model supports flexible preference assignment and has only 1.1% of the total parameters of the multi-model counterpart. Inference and Metrics. We report the results and run time for solving 200 random test instances for each problem, with typically 101 to 105 different trade-off solutions, and up to 10,011 solutions for our proposed method. In most cases, we report our model's zero-shot generalization performance without any search or fine-tuning. We use the hypervolume indicator (Zitzler et al., 2003) to measure the performance of each method. For a set P ⊂ R^m in the objective space, we can find a reference point r^* that is dominated by all solutions in P, and define the hypervolume HV(P) as the volume of the set: S = {r ∈ R^m | ∃ y ∈ P such that y ≺ r ≺ r^*}, (12) where HV(P) = Vol(S). In general, the larger the hypervolume, the better the solution set tends to be. The ground truth Pareto set always has the largest hypervolume. We report the normalized hypervolume values in [0, 1] with respect to the same r^* for all the methods, and also the ratios of hypervolume difference to our method. A Wilcoxon rank-sum test with a significance level of 1% is conducted to compare the results for each experiment. More details can be found in Appendix D.1. 6.1 RESULTS AND ANALYSIS MOTSP. The results on two- and three-objective MOTSP are shown in Table 2 and Table 3, respectively.
MOGLS, NSGAII and MOEA/D all use the 2-opt local search heuristic (Jaszkiewicz, 2002) to search for promising solutions. We also include two weight-sum scalarization baselines with the state-of-the-art LKH solver (Helsgaun, 2000; Tinós et al., 2018) and Google OR tools (Perron & Furnon, 2019). For the bi-objective problems, our proposed method with a single model has similar performance to AM-MOCO on all problems. It achieves the best performance with instance augmentation, which significantly outperforms the other methods but is beaten by the LKH solver. For the three-objective problems, our method can further improve its performance by generating many more trade-off solutions within a reasonable amount of time, which other methods cannot do. As shown in Figure 2 and Figure 5, our method can successfully learn the mapping from preferences to the corresponding solutions, and can generate a good prediction of the whole Pareto front. Decision makers can easily obtain any trade-off solutions they prefer. This flexibility could be desirable in many real-world applications. More discussion on the connection between the preference and Pareto solution for the three-objective TSP can be found in Appendices D.5, D.6 and D.7. MOCVRP. In this problem, each node has a demand, and we need to construct multiple return routes for a vehicle with a fixed capacity from the same depot to handle all demands. The objectives we consider are to minimize the total tour length for all routes and also the tour length of the longest route (the makespan in scheduling) (Lacomme et al., 2006). All the non-learning algorithm frameworks use the problem-specific constructive heuristics and local search method proposed in Lacomme et al. (2006) to search for feasible non-dominated solutions. The results in Table 2 show that our method significantly outperforms the non-learning heuristics in terms of both solution quality and running time. It also outperforms AM-MOCO with 100 individual models, which could be due to the asymmetric objective scales. We provide further analysis in Appendix D.4. MOKP. The multiobjective 0-1 knapsack problem can be found in many real-world applications (Bazgan et al., 2009). We consider the uni-dimensional problem, where each item has multiple values and one weight. The goal is to select a subset of items to maximize all obtained values with a weight constraint. The non-learning methods use binary coding with a greedy transformation heuristic to maintain feasibility (Ishibuchi et al., 2014). We also include weight-sum scalarization baselines with dynamic programming (DP) and a strong greedy search based on the value-weight ratio. According to the results in Table 2, our method has the best performance on all problems. The DP method is also outperformed by our method since the weight-sum scalarization can only find the convex hull of the Pareto front. The Tchebycheff scalarization of MOKP is not a KP problem, whereas our method can flexibly use the Tchebycheff scalarization on the reward function. We also report the results on the 10-objective MOKP100 and the generalization performance to problems with 500 items in Appendix D.8. Out-of-Distribution Problems and Active Adaption. We also validate the generalization performance of our method on 6 out-of-distribution (OOD) MOTSP problems from Fonseca et al. (2006). Their ground truth Pareto fronts can be obtained by exhaustive search. The results are shown in Appendix D.2 due to the page limit.
With active adaption, our method can achieve good performance (1%-1.5% HV gap to the ground truth Pareto fronts) on these OOD problems. 7 CONCLUSION AND FUTURE WORK Conclusion. We have proposed a novel preference-conditioned method to approximate the whole Pareto front for MOCO problems using a single model. It allows decision makers to directly obtain any trade-off solutions without any search procedure. Experiments on different problems have shown that our proposed method significantly outperforms other methods in terms of performance, speed and model efficiency. We believe the proposed method is a principled way to solve MOCO problems. Future Work. In a sense, our method can be regarded as a learning version of the decomposition-based algorithm (MOEA/D (Zhang & Li, 2007)) dealing with all the possible trade-off preferences. Instead of maintaining a finite set of solutions as in other MOEA/D variants (Trivedi et al., 2016), we build a single learning-based model to solve the subproblems for all the preferences simultaneously in a collaborative manner. We believe the single-model-for-all-preferences approach is a promising alternative to the current default finite-population-based methods, and it could be an important research direction for multiobjective optimization. Our method can be further improved with other advanced models and efficient multiobjective training procedures. In the future, we will study fundamental issues of multiobjective optimization (e.g., the convergence vs. diversity and exploitation vs. exploration trade-offs) for Pareto set learning methods. Limitation. It is very difficult to give a convergence guarantee for learning-based MOCO, where each preference-based subproblem could already be NP-hard, and the number of Pareto solutions is exponentially large with respect to the input size. See the detailed discussion in Appendix A. ACKNOWLEDGMENTS We thank Prof. Hisao Ishibuchi for his valuable comments on an earlier version of this work. This work was supported by the Hong Kong General Research Fund (11208121, CityU-9043148). A PARETO SET LEARNING AND APPROXIMATION ANALYSIS A.1 PARETO SET LEARNING AND CONVERGENCE GUARANTEE In this work, we have proposed a novel neural combinatorial optimization (NCO) method to approximate the whole Pareto set for MOCO problems with a single model. The proposed learning-based MOCO solver can directly generate arbitrary trade-off solutions without extra optimization. We believe it is a principled way to solve MOCO problems. However, the lack of an exact optimality guarantee is a limitation of the proposed method, which is also the case for previous work on single-objective neural combinatorial optimization (Vinyals et al., 2015; Bello et al., 2017; Kool et al., 2019). This limitation is mainly due to the fact that many single-objective combinatorial optimization (CO) problems are NP-hard, and the size of the Pareto set for a MOCO problem would be exponentially huge, which makes it very difficult to solve the problems exactly (Ehrgott, 2005; Herzel et al., 2021). In addition, the training of the parameterized policy (neural network model) cannot be guaranteed to fit all training problems perfectly. The generalization ability to problem instances with different patterns (out-of-distribution generalization) is another critical issue that makes it difficult to give an exact optimality guarantee for the proposed learning-based algorithm.
On the other hand, our proposed model is an efficient mapping from the preferences to the corresponding approximate set of the Pareto optimal solutions. It provides a flexible way for decision makers to obtain an approximate solution with their preferred trade-off directly. The experimental results also show that our proposed method can generate good approximate Pareto sets for three different MOCO problems. In the next subsection, we provide a thorough discussion on the approximation ability of our proposed method. A.2 APPROXIMATION ANALYSIS For a MOCO problem, the number of Pareto solutions could be exponentially large with respect to its input size, which makes the problem intractable (Ehrgott, 2005; Herzel et al., 2021). The preference-based scalarization methods and decomposition methods (Choo & Atkins, 1983; Zhang & Li, 2007) we used provides a principled way to link the Pareto solutions with preference, allowing us to tackle the problem in a systematic manner. In this work, we propose to approximately solve the scalarized subproblem with all preferences via a single model. We first briefly review the weighted scalarization method and its Pareto optimality guarantee as discussed in the main paper. Then we provide further discussion on the approximation analysis. Our proposed method decomposes a MOCO problem into preference-based subproblems with the weighted-Tchebycheff scalarization (Weighted-TCH): min x∈X gtch(x|λ) = min x∈X max 1≤i≤m {λi|fi(x)− (z∗i − ε)|}, (13) where z∗i is the ideal value for objective fi(x) (e.g., the lower bound), and u ∗ i = z ∗ i − ε is a utopia value with small positive component ε. The preference vector λ ∈ Rm satisfies λi ≥ 0 and ∑m i=1 λi = 1, where λi is the preference for the i-th objective. This approach has a desirable property: Lemma 1 (Choo & Atkins (1983)). A feasible solution x ∈ X is Pareto optimal if and only if there is a weight vector λ > 0 such that x is an optimal solution to the problem (13). According to Lemma 1, we can obtain any Pareto solution by solving the Weighted-TCH subproblem with a specific weight. However, the weight for each Pareto solution depends on its objective values, which are not known in advance (Sawaragi et al., 1985; Ehrgott, 2005). The decision-maker still needs to solve multiple subproblems with different preferences to find a desirable solution. To find the whole Pareto set, it needs to solve an exponentially huge number of subproblems. Given a problem instance s, our proposed model provides a single mapping function xλ = h(λ) from any preference λ to its corresponding solution xλ, which is constructed by the preferencebased policy pθ(λ)(x|s). In the ideal case, if all generated solutions xλ are the optimal solutions x∗λ of problem (13) with preference λ, according to Lemma 1, our proposed model can generate the whole Pareto set (all Pareto optimal solutions) for the original MOCO problem. In practice, we are interested in the proposed method’s approximation ability. We find that its performance strongly depends on the approximation ability of the parameterized policy (neural network model) on the single-objective scalarized subproblem. We first give an informal claim on our method’s approximation ability, then provide detailed explanations and discussions. (Informal) Claim 1. If the proposed method can approximately solve the subproblem (13) with any preference λ, it can generate a good approximation to the whole Pareto set for the MOCO problem. 
To support this claim, we follow the traditional ε-Pareto approximation method for MOCO problems (Papadimitriou & Yannakakis, 2000; Herzel et al., 2021). First, an ε-Pareto domination relation between two individual solutions can be defined as: Definition 3 (ε-Pareto Domination). For a MOCO problem and an ε > 0, let x_a, x_b ∈ X; x_a is said to ε-dominate x_b (x_a ≺_ε x_b) if f_i(x_a) ≤ (1 + ε)f_i(x_b) for all i ∈ {1, · · · ,m}. This definition is a natural generalization of the (1 + ε)-approximation for single-objective optimization. With this concept, an ε-approximate Pareto set (Papadimitriou & Yannakakis, 2000) can be defined as: Definition 4 (ε-Approximate Pareto Set). For an ε > 0, a set P_ε ⊂ X is an ε-approximate Pareto set if, for any feasible solution x ∈ X, there exists a solution x′ ∈ P_ε such that x′ ≺_ε x. In other words, all feasible solutions of the MOCO problem can be almost dominated by some solutions in P_ε (Papadimitriou & Yannakakis, 2000). When the Pareto set is intractable and hard to find, an ε-approximate Pareto set would be a reasonable goal to achieve in practice. Each MOCO problem has a unique Pareto set, but can have different ε-approximate Pareto sets. The ability of our proposed method to find an ε-approximate Pareto set strongly depends on its performance on each single-objective preference-based subproblem. Theorem 1. Let x^*_λ denote the optimal solution of problem (13) with preference λ. If the proposed method can generate an approximate solution x_λ ≺_ε x^*_λ for any preference λ, it is able to generate an ε-approximate Pareto set P_ε for the MOCO problem. Proof. Let P be the Pareto set for a MOCO problem. For any x_Pareto ∈ P, according to Lemma 1, there is a weight vector λ > 0 such that x = x^*_λ is the optimal solution of subproblem (13) with a specific preference λ. Therefore, our proposed method can generate an approximate solution x_λ ≺_ε x^*_λ = x_Pareto. By generating approximate solutions for all x_Pareto ∈ P, our proposed method is able to generate an ε-approximate Pareto set P_ε for the MOCO problem. A.3 LIMITATION Strong Assumption on (Approximately) Solving all Subproblems: The approximation guarantee in Theorem 1 heavily depends on the ability to (approximately) solve each weighted subproblem. Due to the NP-hardness, it is indeed non-trivial to give a convergence guarantee for generating ε-dominating solutions for any preference with a small enough ε. This limitation also applies to other end-to-end learning-based (e.g., neural combinatorial optimization) and heuristic-based methods. We are aware that some efforts have been made to combine learning-based methods with dynamic programming to achieve asymptotically optimal solutions for specific single-objective problems in recent works (Cappart et al., 2021b; Kool et al., 2021). These methods provide a controllable trade-off between the solution quality and the computational cost for solving NP-hard problems. However, their generalization to the multi-objective problem is not straightforward, since the scalarized subproblem for each preference is not necessarily the same as its single-objective counterpart. For example, a Tchebycheff scalarized MOTSP is not a single-objective TSP, as discussed at the end of Section 3.2. In addition, according to Bengio et al. (2020), these methods belong to the class of learning alongside the algorithms, while our proposed approach is learning to directly produce the solutions (neural combinatorial optimization).
Therefore, the idea for learning enhanced multiobjective combinatorial algorithm could be an important research topic in future, but out of the scope for the current work. Dense Approximation for the Whole Pareto Set: Another concern would be the required number of solutions in the ε-approximate Pareto set Pε. If the required number is exponential to the input size, the approximation itself is also intractable. In their seminal work, Papadimitriou & Yannakakis (2000) establish a promising result: Theorem 2 (Papadimitriou & Yannakakis (2000)). For any multiobjective optimization problem and any ε, there is an ε-approximate Pareto set Pε of which the size is polynomial in the number of solutions and 1ε (but exponential in the number of objectives). However, the existence of such a set still does not mean that it can be easily found (Papadimitriou & Yannakakis, 2000; Herzel et al., 2021). The computability (whether Pε can be constructed in polynomial time) would be hard to justify for a real-world problem. For a new unseen problem instance in practice, our proposed method might still need to generate an exponentially large number of solutions to construct an ε-approximate Pareto set Pε. It is also unclear how to properly select a set of preferences in advance. Many research efforts have been made on developing approximation methods for solving MOCO problems in the past decades (Herzel et al., 2021; Hansen, 1980; Papadimitriou & Yannakakis, 2000; Vassilvitskii & Yannakakis, 2005; Koltun & Papadimitriou, 2005; Bazgan et al., 2017). In future work, it is important to better leverage the current advanced approximation strategies to design more efficient preference-based methods. In the learning-based optimization scenario we consider, it is also possible to learn the suitable approximation method and/or preference distribution directly from the data (problem instances). B DETAILS ON THE PROPOSED MODEL B.1 MODEL SETTING We use the same model for all MOCO problems while tuning the input size and mask method for each problem. Table 4 shows the number of parameters of a standard single-objective attention model (Kool et al., 2019) and our proposed preference-based multiobjective attention model. Our model supports flexible preference assignment at the inference time with a small overhead, while the other neural MOCO methods all require training multiple AM models for different preferences. We build the single-preference attention models as well as our model following the implementation in Kwon et al. (2020). Attention Encoder. The encoder we use is the standard attention encoder as in Kool et al. (2019), and it is shared by all preferences. The encoder has 6 attention layers, and 128-dimensional node embedding for the input nodes. Each attention layer has a multi-head attention (MHA) with eight 16-dimensional heads, and a fully connected layer (FC) with one 512-dimension hidden sublayer. The encoder also includes skip-connection and batch normalization for each attention layer. We use the same model for all MOCO problems (MOTSP, MOCVRP, MOKP) but with different input dimensions for each problem, which will be introduced in the next section. Preference-Conditioned Decoder. The decoder’s main model structure is the same as the AM decoder (Kool et al., 2019). It has one multi-head attention layer with eight 16-dimensional heads similar to the encoder, but without skip-connection and batch normalization. 
The decoder uses a single 128-dimensional attention head to calculate the probabilities of selecting different nodes at each step. Different problems have different masking methods for probability calculation. We use a simple MLP model to generate the preference-conditioned parameters for the decoder. For all MOCO problems, the MLP model has two 128-dimensional hidden layers with ReLu activation. The input is an m-dimensional preference vector λ which satisfies λi ≥ 0 and ∑m i=1 λi = 1, where m is the number of objectives and λi is the preference for the i-th objective. We adopt the parameter compression approach in Ha et al. (2017) to control the model size. The MLP model first generates a hidden embedding e(λ) = MLP(λ|ψ), then maps the hidden embedding to the decoder parameters via linear projection θdecoder = We(λ) + b. The learnable parameters are ψ for the MLP model MLP(λ|ψ) and the parameter matrices W and b for the decoder. Training Procedure. For all problems, we train our proposed model for 200 epochs, with 100, 000 problem instances randomly generated on the fly at each epoch. At each iteration step, we need to sample K preferences, B problem instances, and N tours to calculate the policy gradient. We set K×B = 64 to make the batch of 64 instances for training a single AM model, and letN equal to the problem size (e.g., the number of nodes) as in Kwon et al. (2020). We find the model performance is equally good for setting K = 1, 2 and 4, and keep using K = 1 for all problems. In other words, we randomly generate a preference λ that satisfies λi ≥ 0 and ∑m i=1 λi = 1 at each training step. For the AM-MOCO baseline, we adapt the transfer training approach in Li et al. (2020) to train multiple AM models for different preferences. We first train a single AM model with a single preference on one objective from scratch with 200 epochs, then transfer its parameter to the model for neighbor subproblem with similar preference, and fine-tune the new model with 5 epochs. With sequentially transfer and fine-tune, we can obtain a set of trained models for different preferences. In most experiments, we set the number of preferences as 101. Therefore, we need to build 101 AM models with total 700 training epochs. Instance Augmentation for MOCO. Due to the design choice of minimal essential change (e.g., the preference-conditioned decoder), our method can also enjoy the current improvements that were originally proposed for the single objective NCO. Here, we generalize the instance augmentation method proposed in Kwon et al. (2020) to the MOCO version. The key idea of instance augmentation for NCO is to find multiple efficient transformations for the original problem such that they share the same optimal solution. Then, we can use an NCO method to solve all problems and select the best solution among all obtained (potentially different) solutions. In this way, we have a more robust result similar to the test-time augmentation for computer vision (Szegedy et al., 2016). For the single-objective euclidean TSP and CVRP, there is a set of straightforward transformations, which simply flips or rotates the coordinate for all the 2D locations in a problem instance (Kwon et al., 2020). For a location (x, y), there is eight different transformation, namely, {(x, y), (y, x), (x, 1−y), (y, 1−x), (1−x, y), (1−y, x), (1−x, 1−y), (1−y, 1−x)}. For an m-objective euclidean MOTSP problem, the concrete location representations are independent for each objective. 
Therefore, we can independently apply different transformations for each objective. Consider the above eight different transformations for each objective, we can have 8m different problem transformations for an MOTSP instance. We have fixed 8 transformations for MOCVRP since it only has one 2D coordinate, and no transformation for MOKP. The details for each problem can be found in the next section. B.2 TRAINING EFFICIENCY We use the same amount of samples to train our proposed preference-based model as the other single-objective solvers need (Kool et al., 2019; Kwon et al., 2020). Indeed, our proposed model requires significantly fewer samples and training epochs, compared to the other MOCO methods that need to build multiple models for different preferences. We compare our model’s performance on one of the objective (e.g., with preference (1, 0)) with the other SOTA single-objective solver and learning-based solver, the results are shown in Table 5. The results of Concorde/LKH/OR Tools are from Kwon et al. (2020), and we run the learning-based solver by ourselves. We report the average performance over 10, 000 test instances. AM is the single-objective solver (one model in AM-MOCO), P-MOCO (single preference) is our proposed model but only training on a single fixed preference (1, 0), and P-MOCO (all preferences) is our proposed model with the reported result on the preference (1, 0). With the same amount of training samples, our model has similar single-objective performance with learning-based single-objective solver, while it can additionally approximate the whole Pareto front. The learning-based solver’s performance can be further improved by sampling or active search. These results indicate that we can use a single encoder to efficiently learn a shared representation for all trade-offs among different objectives, and there is a positive knowledge transfer among preferences during the learning procedure. In addition, it also confirms the assumption that similar preferences should have similar corresponding (approximate) Pareto solutions for the multiobjective problems we consider in this paper. These findings could be useful to design more powerful learning-based models for MOCO in the future. B.3 ACTIVE ADAPTION After end-to-end training, our proposed method can directly generate different trade-off solutions to a given problem without further search procedure. However, similar to single-objective neural combinatorial optimization, this approach could still have a gap to the Pareto front, especially for problems out of the training distribution S (e.g., with different sizes and patterns) (Lisicki et al., 2020). Iterative search methods, such as sampling and beam search, can further improve the performance for a single solution or single preference (Veličković & Blundell, 2021). However, these approaches can not find a better approximation to the whole Pareto set for a MOCO problem. 
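The active adaption in Algorithm 2 below reuses the same preference-conditioned REINFORCE update as Algorithm 1, applied to a single test instance. A minimal PyTorch-style sketch of one such update with the weighted-Tchebycheff cost of Eq. (8) and the shared baseline is given below; the tensors standing in for sampled tours, their costs, and their log-probabilities are placeholders rather than outputs of the actual model.

```python
import torch

def tch_cost(tour_costs, pref, ideal, eps=0.1):
    """Weighted-Tchebycheff cost of Eq. (8). tour_costs: (N, m) objective values of N sampled tours."""
    return ((tour_costs - (ideal - eps)).abs() * pref).max(dim=-1).values   # (N,)

def reinforce_loss(tour_costs, log_probs, pref, ideal):
    """REINFORCE with the shared baseline: the mean scalarized cost over the N tours of one instance."""
    cost = tch_cost(tour_costs, pref, ideal)            # (N,)
    baseline = cost.mean()                              # b(s|lambda), shared over the N tours
    advantage = (cost - baseline).detach()
    return (advantage * log_probs).mean()               # minimizing this shifts probability mass to low-cost tours

# Placeholder tensors standing in for one sampled preference and N = 8 tours of a 2-objective instance.
tour_costs = torch.rand(8, 2) * 10                      # L(pi) for each sampled tour
log_probs = torch.randn(8, requires_grad=True)          # sum of log-probabilities of each tour
loss = reinforce_loss(tour_costs, log_probs,
                      pref=torch.tensor([0.4, 0.6]), ideal=torch.zeros(2))
loss.backward()                                         # the gradient flows into the policy through log_probs
```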
Algorithm 2 Neural MOCO Active Adaption 1: Input: model parameter θ, instance s, preference distribution Λ, number of adaption steps T, number of preferences per iteration K, number of tours N 2: for t = 1 to T do 3: λ_k ∼ SamplePreference(Λ) ∀k ∈ {1, · · · ,K} 4: π^j_k ∼ SampleTour(p_θ(λ_k)(·|s)) ∀k ∈ {1, · · · ,K} ∀j ∈ {1, · · · , N} 5: b(s|λ_k) ← (1/N) ∑_{j=1}^{N} L(π^j_k|λ_k, s) ∀k ∈ {1, · · · ,K} 6: ∇J(θ) ← (1/KN) ∑_{k=1}^{K} ∑_{j=1}^{N} [(L(π^j_k|λ_k, s) − b(s|λ_k)) ∇_{θ(λ_k)} log p_θ(λ_k)(π^j_k|s)] 7: θ ← ADAM(θ, ∇J(θ)) 8: end for 9: Output: The model parameter θ We propose a simple yet powerful active adaption approach as shown in Algorithm 2. It iteratively adapts the model parameter θ(λ) to a given instance s (or a batch of instances) with all preferences from the distribution Λ, rather than searching for a specific solution. This method is similar to the active search in Bello et al. (2017), which actively refines the single-objective model for efficient candidate solution search. Our approach focuses on adapting the whole model for a better Pareto front approximation. Since this method is distribution-agnostic (it does not depend on a specific instance distribution S), it is suitable for out-of-distribution adaption. C DETAILS OF THE MOCO PROBLEMS This section introduces the detailed problem formulations for the MOTSP, MOCVRP and MOKP used in this work. We also provide the model configuration (e.g., input size, masks) for each problem. C.1 MOTSP We consider the Euclidean multiobjective traveling salesman problem (Euclidean MOTSP), which is widely used in the MOCO community (Lust & Teghem, 2010b; Florios & Mavrotas, 2014). Its single-objective counterpart, the 2D Euclidean TSP, has also been studied in single-objective neural combinatorial optimization (NCO) (Vinyals et al., 2015; Bello et al., 2017; Kool et al., 2019). A general m-objective MOTSP instance s with n nodes has m n×n cost matrices {C^i = (c^i_jk), i = 1, · · · ,m} for m different costs. The problem is to find a tour (cyclic permutation π) to minimize all the costs: min L(π|s) = min(L_1(π|s), L_2(π|s), · · · , L_m(π|s)), where L_i(π|s) = c^i_{π(n)π(1)} + ∑_{j=1}^{n−1} c^i_{π(j)π(j+1)}. (14) In a Euclidean MOTSP, the cost information is stored in the nodes rather than the edges. The j-th node has a 2m-dimensional vector [x^1_j, x^2_j, · · · , x^m_j], where x^i_j ∈ R^2 is a 2D coordinate for the i-th objective. The i-th cost c^i_{jk} = ||x^i_j − x^i_k||_2 is the Euclidean distance for moving from node j to k. If we only have one objective (m = 1), the problem reduces to the single-objective 2D Euclidean TSP: min_π L_1(π|s) = ||x_{π(n)} − x_{π(1)}||_2 + ∑_{j=1}^{n−1} ||x_{π(j)} − x_{π(j+1)}||_2. (15) The single-objective TSP is already NP-hard, and so is the MOTSP. In addition, the Pareto set of the MOTSP has an exponential cardinality with respect to its input size (e.g., the number of nodes), so it is intractable even for the 2-objective case (Ehrgott & Gandibleux, 2003). Problem Instance. Similar to the previous work on single-objective NCO (Lust & Teghem, 2010b; Florios & Mavrotas, 2014), we randomly sample all n nodes with uniform distribution on the 2m-dimensional unit hypercube (e.g., [0, 1]^{2m}) for all problem instances. Model Details. In the m-objective MOTSP, each node has a 2m-dimensional vector to store all cost information, so the input size is 2m for the encoder. To calculate the probability of selecting the next node, the decoder masks all already visited nodes as unavailable. We have a valid tour when all nodes are selected (we assume the end node connects back to the start node).
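As an illustration of the MOTSP cost structure above, a minimal NumPy sketch that evaluates the m tour lengths of Eq. (14) from the 2m-dimensional node representation is given below; the array shapes and the random instance are illustrative only.

```python
import numpy as np

def motsp_costs(coords, tour):
    """coords: (n, m, 2) node coordinates, one 2D point per objective; tour: a permutation of range(n).
    Returns the m tour lengths L_1(pi|s), ..., L_m(pi|s) of Eq. (14)."""
    ordered = coords[tour]                                     # nodes in visiting order, shape (n, m, 2)
    nxt = np.roll(ordered, shift=-1, axis=0)                   # successor of each node; closes the cycle
    return np.linalg.norm(ordered - nxt, axis=-1).sum(axis=0)  # (m,) Euclidean leg lengths summed per objective

rng = np.random.default_rng(0)
coords = rng.random((20, 2, 2))        # a random bi-objective instance with 20 cities in the unit hypercube
tour = rng.permutation(20)
print(motsp_costs(coords, tour))       # two tour lengths, one per objective
```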
C.2 MOCVRP The vehicle routing problem (VRP) is a classical generalization of TSP, which has been studied for several decades. This work studies the capacitated vehicle routing problem (CVRP). In this problem, in addition to the location, each node (city) has a demand δi needed to be satisfied. There is an extra depot node and a vehicle with a fixed capacity D > δi,∀i to handle all the demands. The vehicle will always start from the depot node, then goes to different cities to satisfy multiple demands ∑ δi ≤ D , and turns back to the depot node. A solution to this problem is a set of routes that satisfies the demands for all cities. In the multiobjective problem, we consider two objectives to optimize. The first one is the total tour length as in the single-objective CVRP, and the other one is the tour length for the longest route (which is also called makespan in scheduling theory). This problem has been studied in the MOCO community (Lacomme et al., 2006). Problem Instance. Similar to the TSP problem, the location of n nodes are uniformly sampled from the unit square. For the demand, similar to the previous work on the single-objective counterpart (Kool et al., 2019; Kwon et al., 2020), we randomly sample discrete δi from the set {1, · · · , 9}. For problem with size n = 20, 50, 100, we set the capacity as D20 = 30, D50 = 40 and D100 = 50, respectively. Without loss of generality, we normalize the demands δ̂i = δiD and capacity D̂ = DD = 1 as in the previous work (Kool et al., 2019; Kwon et al., 2020). Split delivery is not allowed in this problem. Model Details. In the MOCVRP, the depot node has a 2-dimensional location vector, and the other nodes all have 3-dimensional vectors to store their locations and demands. We use different parameter matrices to project the nodes into the input embedding with the same dimension dh = 128. For node selection, the model records the current capacity of the vehicle and the rest demands for all nodes. If a node has been already visited or has demand larger than the vehicle’s current capacity, it will be masked as unavailable for the vehicle to visit. If no node is available to visit, the vehicle will go back to the depot. Once all nodes have 0 demands, the node selection is finished and we have a valid solution to the problem. C.3 MOKP Knapsack problem (KP) is also a widely studied combinatorial optimization problem. In this work, we consider the 0-1 multiobjective knapsack problem (MOKP) with m objectives and n items: max f(x) = max(f1(x), f2(x), · · · , fm(x)), where fi(x) = ∑n j=1 v i jxj , subject to ∑n j=1 wjxj ≤W, xj ∈ {0, 1}, (16) where each item has a weight wj and m different values {vij , i = 1, · · · ,m}. The problem (e.g., knapsack) has a maximum weight capacity W , and the goal is to select a set of items within the weight capacity to maximize the sum values for each objective. To make this problem nontrivial, we further assume all values vij ,∀i, j, weights wj∀j and the total capacity are non-negative real value. The total weight of all items is larger than the capacity ∑ wi > W , while each single weight is smaller than the capacity wi < W, ∀i = 1, · · · , n. The single-objective knapsack problem is NP-hard, so does the MOKP problem (Ehrgott & Gandibleux, 2003). Problem Instance. We randomly generate the values and weight for each item both uniformly in [0, 1]. We consider problems with n = 50, 100, 200 nodes, and the weight capacities are W50 = 12.5,W100 =W200 = 25 as in the previous work (Bello et al., 2017; Kwon et al., 2020). 
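To complement the MOKP formulation above, a minimal NumPy sketch of evaluating a candidate 0-1 selection (the m objective values of Eq. (16) and the capacity check) is given below; the variable names and the random instance are illustrative only.

```python
import numpy as np

def mokp_evaluate(x, values, weights, capacity):
    """x: binary selection vector (n,); values: (n, m) item values; weights: (n,) item weights.
    Returns the m objective values of Eq. (16) and whether the weight constraint is satisfied."""
    x = np.asarray(x, dtype=float)
    objectives = values.T @ x                  # (m,) total selected value per objective
    feasible = float(weights @ x) <= capacity
    return objectives, feasible

rng = np.random.default_rng(0)
values, weights = rng.random((100, 2)), rng.random(100)   # a random bi-objective instance with 100 items
x = (rng.random(100) < 0.2).astype(int)                   # select roughly 20% of the items
print(mokp_evaluate(x, values, weights, capacity=25.0))
```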
Model Details. In an m-objective MOKP, each item has m values and 1 weight, so the input dimension is 3 for the encoder. For node selection at each step, we mask all already selected nodes and nodes with weights larger than the remained capacity as unavailable. We terminate the selection when all nodes are labeled as unavailable. D ADDITIONAL EXPERIMENTAL RESULTS D.1 HYPERVOLUME INDICATOR To solve a MOCO problem, the result for each method is a set of approximate Pareto solutions. Since the ground truth Pareto set is usually unknown, we use the hypervolume (HV) indicator (Zitzler et al., 2007) to numerically compare the performance for each method. The hypervolume indicator is widely used in the MOCO community for algorithm comparison. The hypervolume of a set is the volume in the objective space it dominates. For a set P ⊂ Rm in the objective space, we can find a reference point r∗ that dominated by all solutions in P , and define the hypervolume HV(P ) as the volume of the set: S = {r ∈ Rm | ∃y ∈ P such that y ≺ r ≺ r∗}, (17) where HV(P ) = Vol(S). An illustration example is shown in Figure 4. The grey area is the set S dominated by the solutions in set P = {p1, p2, p3, p4} with the reference point r∗. In this 2- dimensional case, the hypervolume HV(P ) is the size of the grey area. The hypervolume indicator has two important advantages for measuring the approximate set quality with respect to Pareto optimality (Zitzler et al., 2007). First, if an approximate set A dominates another approximate setB, it will have a strictly better hypervolume HV(A) > HV(B). In addition, if an approximate set C contains all Pareto optimal solutions, it is guaranteed to have the maximum hypervolume value. In comparison, an approximate set has better performance if it has a larger hypervolume. With different objective scales, the hypervolume value will vary significantly among different problems. We report the normalized hypervolume values Ĥ(P ) = HV(P )/ ∏m i r ∗ i for all methods and also their performance gaps to our method. For each experiment, all methods share the same reference point r∗, which contains the largest value achieved for each objective. Since all problems we consider have positive objective values, we have 0 ≤ Ĥ(P ) ≤ 1 for all solution sets. The ground truth Pareto set P ∗ usually has Ĥ(P ∗) < 1, unless the zero vector 0 ∈ Rm is feasible and in the Pareto set. D.2 OUT-OF-DISTRIBUTION PROBLEM WITH EXACT PARETO FRONT We conduct experiments on 6 two-objective MOTSP100 instance (L1-L6) in Florios & Mavrotas (2014) of which the exact Pareto fronts are available. In these problems, the objective functions have different ranges, and the cities are not uniformly located, so they are out of our method’s training distribution. The results can be found in the Table 6. In addition to hypervolume, we also report the Inverted Generational Distance (IGD) (Fonseca et al., 2006) to measure the average Euclidean distance between the set of approximated Pareto solutions to the exact Pareto front. A smaller IGD value means the approximated set is closer to the exact Pareto front. According to the results, our method, with the instance augmentation and/or active search (10 min budget), can have a good performance on these out-of-distribution (OOD) instances with a 1%− 1.5% hypervolume gap. The proposed method also significantly outperforms the weight-sum OR tools baseline. There is still a gap to the strong weight-sum LKH baseline. 
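Returning to the hypervolume indicator defined in Appendix D.1, a minimal NumPy sketch of the two-objective (minimization) case is given below, using the standard sweep over points sorted by the first objective; for three or more objectives, a dedicated library implementation would typically be used instead.

```python
import numpy as np

def hypervolume_2d(points, ref):
    """Hypervolume of a two-objective (minimization) set `points`, shape (k, 2), w.r.t. reference point `ref`."""
    pts = np.asarray(points, dtype=float)
    pts = pts[np.all(pts < ref, axis=1)]      # keep only points that strictly dominate the reference point
    pts = pts[np.argsort(pts[:, 0])]          # sweep in increasing order of the first objective
    hv, best_f2 = 0.0, ref[1]
    for f1, f2 in pts:
        if f2 < best_f2:                      # skip points dominated by an earlier point in the sweep
            hv += (ref[0] - f1) * (best_f2 - f2)
            best_f2 = f2
    return hv

front = np.array([[1.0, 4.0], [2.0, 2.0], [4.0, 1.0]])
print(hypervolume_2d(front, ref=np.array([5.0, 5.0])))   # 4*1 + 3*2 + 1*1 = 11.0
```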
As discussed in the paper, robust OOD generalization is an important research direction for learning-based solvers. D.3 FLEXIBLE PREFERENCE-BASED APPROXIMATION Our model can flexibly generate different numbers of solutions to approximate the Pareto front. We present an example on the three-objective TSP in Figure 5. We use the structured weight assignment approach from Das & Dennis (1998) to generate the sets of weights for different instances. This method generates n = C(m+p−1, p) evenly distributed weights with an identical distance to their nearest neighbor on the unit simplex (i.e., ∑_{i=1}^{m} λ_i = 1 with λ_i ≥ 0, ∀i), where m is the number of objectives and p is a parameter controlling the number of weights. For the three-objective TSP problems (m = 3), we assign p = 13, 44 and 140 to generate n = 105, 1035 and 10011 weights, respectively. We also show the corresponding generated solutions for MOTSP instances with 20, 50 and 100 cities. According to the results in Figure 5, our model can generate well-distributed solutions with a small number of preferences, and a dense approximation with more preferences. The ability to generate a dense approximation of the whole Pareto set also allows the decision-maker to obtain arbitrary preferred solutions on the approximate front. D.4 PREFERENCE-SOLUTION CONNECTION We further analyze the connection between a preference and its corresponding solution on uniform and non-uniform Pareto fronts. Figure 6 shows the connections in our model with different numbers of preferences for the MOTSP100 instance. Since the two objectives (costs) in MOTSP have the same scale, this problem has a uniform connection between the preferences and the (approximate) Pareto front. By increasing the number of preferences, we obtain three Pareto front approximations ranging from sparse to dense. We are more interested in MOCVRP, which has a non-uniform Pareto front. In this problem, we consider two different objectives to optimize, namely, the total tour length (objective 1) and the tour length of the longest route (objective 2). These two objectives are on quite different scales, where the first objective is significantly larger than the second one. In Figure 7, we show different connections for the MOCVRP100 instance. For AM-MOCO, we report the connections for all 101 models. For our proposed model, we report the connections with different numbers of uniform preferences. In this problem, 101 models or our model with 101 uniform preferences are not enough to generate a dense approximate Pareto front. The obtained solutions are biased toward the area where objective 1 has a much better relative performance. By increasing the number of preferences, our proposed method can generate more solutions that have relatively better performance for objective 2, which leads to a better Pareto front approximation with higher hypervolume. In this work, we always use a straightforward uniform sampling method to select the preferences. It would be interesting to design a learning-based approach to select the preferences for a given problem instance. Preference adjustment and model adaptation with awareness of the shape of the Pareto front are also worth investigating. We leave them to future work. For the MOCVRP instance, we also find that the 101-model AM-MOCO has worse performance compared to our method with 101 preferences. The reason could be the mismatch between the uniform transfer training and the non-uniform Pareto front.
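A minimal sketch of the structured weight assignment from Das & Dennis (1998) used in Section D.3 above; the recursive implementation is our own, and it enumerates the n = C(m+p−1, p) weights on the unit simplex.

```python
import numpy as np
from math import comb

def das_dennis_weights(m, p):
    """All weight vectors with components in {0, 1/p, ..., 1} summing to 1.
    Returns an array of shape (C(m+p-1, p), m)."""
    def recurse(prefix, left, depth):
        if depth == m - 1:
            return [prefix + [left / p]]
        return [w for k in range(left + 1)
                for w in recurse(prefix + [k / p], left - k, depth + 1)]
    weights = np.array(recurse([], p, 0))
    assert len(weights) == comb(m + p - 1, p)
    return weights

# e.g. m=3 with p=13, 44, 140 gives 105, 1035, 10011 weights respectively
print(das_dennis_weights(3, 13).shape)   # (105, 3)
```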
Increasing the training steps for fine-tuning each model might fix this issue, but it would lead to an even larger computational overhead, given that the current training already requires 700 epochs. The fixed preference assignment is another issue for AM-MOCO. It requires a fixed set of preferences for each model at the start of the training procedure, when the decision makers might have no knowledge of the problem. Once the training procedure is done, it does not allow any preference adjustment without retraining the models. D.5 CONNECTION BETWEEN PREFERENCES AND SOLUTIONS In the previous sections, we use the weighted Tchebycheff aggregation to connect a preference to its corresponding solution for two-objective optimization problems:

g^{tch}(x|λ) = max_{1≤i≤m} {λ_i |f_i(x) − z*_i|},   (18)

where z*_i < min_{x∈X} f_i(x) is an ideal value for f_i(x). There are also many other aggregation functions we can use to build this connection. For example, a modified version of the weighted Tchebycheff aggregation can be defined as:

g^{mtch}(x|λ) = max_{1≤i≤m} {(1/λ_i) |f_i(x) − z*_i|},   (19)

where the only difference is that the weight λ_i is replaced by 1/λ_i. The penalty-based boundary intersection (PBI) is another widely-used aggregation function for decomposition-based multiobjective optimization (Zhang & Li, 2007):

g^{pbi}(x|λ) = d_1 + θ d_2, with d_1 = |(F(x) − z*)^T λ| / ||λ||, d_2 = ||F(x) − z* − d_1 (λ/||λ||)||,   (20)

where θ is the penalty parameter, and F(x) = (f_1(x), . . . , f_m(x)) and z* = (z*_1, . . . , z*_m) are the objective vector and ideal vector, respectively. An inverted version of the PBI (IPBI) aggregation function (Sato, 2014) can be defined as:

g^{ipbi}(x|λ) = −d_1 + θ d_2, with d_1 = |(z^N − F(x))^T λ| / ||λ||, d_2 = ||z^N − F(x) − d_1 (λ/||λ||)||,   (21)

where z^N is the nadir vector that contains each objective's worst value among all Pareto solutions. For a two-objective optimization problem, when we can find a dense set of corresponding solutions to cover the Pareto front for each aggregation function, their performances could be similar to each other. However, different aggregation functions can have quite different performances on problems with three or more objective functions (called many-objective optimization problems). The performance heavily depends on the shape of the Pareto front (Ishibuchi et al., 2016), especially with a limited number of approximate solutions. We compare the performance of our proposed method with different aggregation functions on MOTSP50 with 105, 1035 and 10011 preferences, respectively, in Fig. 8. According to the results, the IPBI method generates the most uniformly distributed solutions for the MOTSP problem, whose Pareto front has an inverted triangular shape similar to the weight vector distribution (e.g., see Fig. 5). This observation is consistent with the findings and analysis in Ishibuchi et al. (2016). Based on these results, we use the Tchebycheff aggregation for all two-objective optimization problems and the IPBI aggregation for all problems with more than two objective functions in this work. Since the shape of the Pareto front tends to be irregular in real-world applications (Ishibuchi et al., 2019), how to properly choose the aggregation function and assign the preference distribution is an important direction for future work. D.6 THREE-OBJECTIVE MOTSP WITH ASYMMETRIC PARETO FRONT In this subsection, we conduct experiments on three-objective MOTSP100 instances with asymmetric Pareto fronts.
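The four aggregation functions in Eqs. (18)-(21) can be written compactly as below (a NumPy sketch of our own; F, z_star, and z_nadir denote the objective, ideal, and nadir vectors, lam is the preference vector, and the default penalty θ = 5 is an illustrative choice rather than a value taken from the paper). All four functions are to be minimized.

```python
import numpy as np

def tch(F, lam, z_star):                      # Eq. (18)
    return np.max(lam * np.abs(F - z_star))

def mtch(F, lam, z_star):                     # Eq. (19)
    return np.max(np.abs(F - z_star) / lam)

def pbi(F, lam, z_star, theta=5.0):           # Eq. (20)
    d1 = np.abs(np.dot(F - z_star, lam)) / np.linalg.norm(lam)
    d2 = np.linalg.norm(F - z_star - d1 * lam / np.linalg.norm(lam))
    return d1 + theta * d2

def ipbi(F, lam, z_nadir, theta=5.0):         # Eq. (21)
    d1 = np.abs(np.dot(z_nadir - F, lam)) / np.linalg.norm(lam)
    d2 = np.linalg.norm(z_nadir - F - d1 * lam / np.linalg.norm(lam))
    return -d1 + theta * d2
```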
The definition of the irregular MOTSP instance is almost the same as in Section C.1, except that the coordinates for the three objectives are randomly sampled from [0, 1]^2, [0, 0.5]^2 and [0, 0.1]^2, respectively, rather than uniformly from [0, 1]^6. In this way, the objective values of the MOTSP instance are on quite different scales, thus leading to an irregular Pareto front (the axes in Figure 9 are on different scales). A well-known drawback of the scalarization-based approach is that it cannot evenly explore an irregular Pareto front with a set of uniform weights, which can also be observed in Figure 9(a)-(d). Our proposed approach allows the user to generate arbitrary trade-off Pareto solutions at inference time, so they can directly generate a dense approximation and then select the preferred solutions as in Figure 9(d). This flexibility can partially address the uneven distribution caused by a (small) set of fixed weights in the traditional scalarization-based approach. If we know the approximate ranges of the different objectives in advance, we can first normalize them into [0, 1] to encourage a more symmetric Pareto front. Otherwise, at inference time, we can use a (prior-knowledge-based) biased and non-uniform weight assignment to generate uniformly distributed solutions. In Figure 9(e)-(h), we first multiply the three-dimensional weights by (1, 2, 10) and then normalize them back to [0, 1]^3, which leads to a set of non-uniform weights as shown in Figure 9(e). With this weight assignment, we obtain a set of more evenly distributed Pareto solutions, as shown in Figure 9(f)-(h). D.7 PREFERENCE-BASED INFERENCE Even without any prior knowledge, our proposed approach allows the user to adaptively adjust the weights in real time to search for the most suitable solutions in their preferred region(s). Some examples of selected weights and their corresponding solutions are shown in Figure 10 for a symmetric Pareto front and Figure 11 for an asymmetric Pareto front. If we have prior knowledge of the preference (e.g., the decision-makers only care about a specific region of the Pareto front), we can modify the training preference distribution Λ accordingly to enhance the training efficiency. For problems with a truly irregular Pareto front, it is also possible to adaptively adjust the given weights to make them evenly explore the Pareto front during the learning/searching process. One potential direction could be to consider the connection between scalarization and hypervolume maximization as in Zhang & Golovin (2020). We believe this could be an important research topic for the learning-based scalarization approach in future work. D.8 PROBLEM WITH MORE OBJECTIVES Finally, we test the performance of our proposed method on 10-objective knapsack problems. We train a new model for the 10-objective MOKP with 100 items with uniform 10-dimensional preferences. The obtained value path plots for the 10-objective MOKP100 are shown in Figure 12. For problems with more objectives, we need a large number of solutions to approximate the Pareto set. Training a large number of neural network models would have a huge computational and storage overhead, which is not desirable in practice. Therefore, we do not compare with the AM-MOCO and DRL-MOA methods on this problem. For inference, to approximate the Pareto set, we use a set of 715 fixed preferences following the weight assignment approach from Das & Dennis (1998) (with m = 10 and p = 4, hence n = C(10+4−1, 4) = C(13, 4) = 715).
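The biased weight assignment used for Figure 9(e)-(h) amounts to an element-wise re-scaling followed by re-normalization so that each weight vector again sums to one; the following small helper is our own sketch under that interpretation, with the scaling vector (1, 2, 10) from the text.

```python
import numpy as np

def biased_weights(uniform_weights, scale=(1.0, 2.0, 10.0)):
    """Re-scale uniform simplex weights and renormalize each row to sum to 1."""
    w = np.asarray(uniform_weights) * np.asarray(scale)
    return w / w.sum(axis=1, keepdims=True)
```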
The model generates a different trade-off solution for each preference, so there are 715 different value paths (lines) on each plot. In MOKP, we want to maximize the values of all objectives under the capacity constraint. A set of good approximate solutions should therefore have relatively high overall values. According to the results, our proposed method has the best performance. We also test the performance of our method on a larger problem with 500 items. The results shown in Figure 13 confirm that our trained model generalizes well to problems of a larger size.
1. What is the main contribution of the paper in terms of neural multi-objective combinatorial optimization? 2. What are the strengths of the proposed approach, particularly regarding its technical soundness and competitive performance? 3. What are the weaknesses of the paper, such as the unclear description of the decoder and the confusing use of the term "preference"? 4. How does the reviewer assess the empirical evaluation of the proposed method, including its thoroughness and consideration of multiple test problems and benchmark methods? 5. What are the limitations of the proposed approach, such as its reliance on scalarizations, which can lead to uneven exploration of the Pareto front? 6. Does the reviewer have any questions or concerns about the figures presented in the paper, such as the distribution of points across the Pareto front and the choice of weights used to generate these points? 7. How does the reviewer view the novelty of the paper's contributions, given that it reuses ideas from single-objective neural combinatorial optimization?
Summary Of The Paper Review
Summary Of The Paper This paper proposes an approach for neural multi-objective combinatorial optimization. This approach uses a preference-agnostic encoder along with a weight-dependent decoder to generate approximate Pareto optimal solutions for any arbitrary set of weights of a weighted Tchebyshev scalarization at virtually no additional cost. This is in contrast with existing approaches, which require a significant amount of computation for every new set of weights. Several numerical experiments are conducted, showing favorable results for the proposed method when compared with several other existing evolutionary and learning-based methods. Review Strengths The proposed approach is technically sound and exhibits very competitive performance. The empirical evaluation is thorough as it considers multiple test problems and a broad range of benchmark methods. Weaknesses The description of the proposed approach is hard to follow. In particular, I am having a hard time understanding how the decoder is defined when the problem instance is not a graph. In general, I would recommend the authors explain their approach in a general fashion by introducing appropriate notation and then explaining it in the context of a particular problem. The use of the term "preference" is confusing. The title and abstract suggest that the proposed approach allows the incorporation of user preferences to focus the search on specific regions of the Pareto front, but that does not seem to be the case. I would suggest the authors clarify this earlier (and even drop the term preference from their title). Since the proposed method relies on scalarizations, it suffers from several well-known issues of this kind of approach. In particular, scalarization-based approaches are prone to explore the Pareto front unevenly. The points generated in Figures 2 and 5 are well distributed across the Pareto front, but I think this occurs because the front in this particular case is fairly symmetric. I wonder how an analogous figure would look for a more irregular front. I would also like the authors to explain how the weights that give rise to the points in Figures 2 and 5 were chosen. The novelty in this paper is limited, as it reuses several ideas from single-objective neural combinatorial optimization. The only novel (but key) idea is to make the decoder weight-dependent.
ICLR
Title Pareto Set Learning for Neural Multi-Objective Combinatorial Optimization Abstract Multiobjective combinatorial optimization (MOCO) problems can be found in many real-world applications. However, exactly solving these problems would be very challenging, particularly when they are NP-hard. Many handcrafted heuristic methods have been proposed to tackle different MOCO problems over the past decades. In this work, we generalize the idea of neural combinatorial optimization, and develop a learning-based approach to approximate the whole Pareto set for a given MOCO problem without further search procedure. We propose a single preference-conditioned model to directly generate approximate Pareto solutions for any trade-off preference, and design an efficient multiobjective reinforcement learning algorithm to train this model. Our proposed method can be treated as a learning-based extension for the widely-used decomposition-based multiobjective evolutionary algorithm (MOEA/D). It uses a single model to accommodate all the possible preferences, whereas other methods use a finite number of solutions to approximate the Pareto set. Experimental results show that our proposed method significantly outperforms some other methods on the multiobjective traveling salesman problem, multiobjective vehicle routing problem, and multiobjective knapsack problem in terms of solution quality, speed, and model efficiency. 1 INTRODUCTION Many real-world applications can be modeled as multiobjective combinatorial optimization (MOCO) problems (Ehrgott & Gandibleux, 2000). Examples include the multiobjective traveling salesman problem (MOTSP) (Lust & Teghem, 2010a), the multiobjective vehicle routing problem (MOVRP) (Jozefowiez et al., 2008) and the multiobjective knapsack problem (MOKP) (Bazgan et al., 2009). These problems have multiple objectives to optimize, and no single solution can optimize all the objectives at the same time. Instead, there is a set of Pareto optimal solutions with different trade-offs among the objectives. It is very challenging to find all the exact Pareto optimal solutions for a MOCO problem. Actually, finding one single Pareto optimal solution can be NP-hard for many problems (Ehrgott & Gandibleux, 2000), and the number of Pareto solutions could be exponentially large with regard to the problem size (Ehrgott, 2005; Herzel et al., 2021). The decision-maker’s preference among different objectives is usually unknown in advance, making it very difficult to reduce the problem into a single-objective one. Over the past several decades, many methods have been developed to find an approximate Pareto set for different MOCO problems within a reasonable computational time. These methods often need carefully handcrafted and specialized heuristics for each problem. It can be very labor-intensive in practice. In many real-world applications, practitioners need to solve many different instances for the same particular problem, where the instances can be easily obtained or generated (Bengio et al., 2020). It is desirable to learn the patterns behind these problem instances explicitly or implicitly to design efficient algorithms (Cappart et al., 2021a). Machine learning techniques can be naturally used for this purpose. Some learning-based methods have been recently proposed for solving single-objective combinatorial optimization problems (Bengio et al., 2020; Vesselinova et al., 2020; Mazyavkina et al., 2021; Cappart et al., 2021a). 
In this work, we extend the learning-based method to solve MOCO problems in a principled way as shown in Figure 1. Our main contributions include: • We propose a novel neural multiobjective combinatorial optimization method to approximate the whole Pareto set via a single preference-conditioned model. It allows decision makers to obtain any preferred trade-off solution without any search effort. • We develop an efficient end-to-end reinforcement learning algorithm to train the single model for all different preferences simultaneously, and a simple yet powerful active adaption method to handle out-of-distribution problem instances. • We conduct comprehensive experiments on MOTSP, MOVR and MOKP of different settings. The results show that our proposed method can successfully approximate the Pareto sets for different problems in an efficient way. It also significantly outperforms other methods in terms of solution quality, speed, and model efficiency. 2 BACKGROUND AND RELATED WORK Multiobjective Combinatorial Optimization (MOCO). MOCO has been attracting growing research efforts from different communities over the past several decades (Sawaragi et al., 1985; Wallenius et al., 2008; Herzel et al., 2021). There are two main approaches to tackle the MOCO problems: the exact methods and the approximation methods (Ehrgott, 2005). Exact methods could be prohibitively costly when, as it often happens, the MOCO problem is NP-hard and the problem size is very large (Florios & Mavrotas, 2014). For this reason, many heuristics (Jaszkiewicz, 2002; Zhang & Li, 2007; Ehrgott & Gandibleux, 2008) and approximation methods (Papadimitriou & Yannakakis, 2000; Herzel et al., 2021) have been developed to find a manageable number of approximated Pareto solutions with a reasonable computational budget. However, these methods usually depend on carefully handcrafted designs for each specific problem (Ehrgott & Gandibleux, 2000), and the required effort is often nontrivial in real-world applications. Machine Learning for Combinatorial Optimization. As summarized in Bengio et al. (2020), there are three main learning-based approaches for combinatorial optimization: learning to configure algorithms (Kruber et al., 2017; Bonami et al., 2018), learning alongside the algorithms (Lodi & Zarpellon, 2017; Gasse et al., 2019; Chen & Tian, 2019), and learning to directly predict the solutions (Nowak et al., 2018; Emami & Ranka, 2018; Larsen et al., 2018). Neural combinatorial optimization (NCO) belongs to the last category where the model directly produces a good solution for a given problem instance. Vinyals et al. (2015) proposed a pointer network to sequentially construct a solution for the TSP problem. Bello et al. (2017) made a critical improvement to use reinforcement learning to train the model, eliminating the impractical optimal solutions collection for NP-hard problems. Some other improvements on model structure and training procedure have been proposed in the past few years (Nazari et al., 2018; Deudon et al., 2018; Kool et al., 2019; Veličković & Blundell, 2021), especially with graph neural networks (GNNs) (Dai et al., 2017; Li et al., 2018; Joshi et al., 2019; Dwivedi et al., 2020; Drori et al., 2020). 
Recent efforts have been made on more efficient learning strategies (Kwon et al., 2020; Karalias & Loukas, 2020; Lisicki et al., 2020; Geisler et al., 2022), learning-based graph search (Cappart et al., 2021b; Kool et al., 2021; Fu et al., 2021; Xin et al., 2021; Hudson et al., 2022), and iterative improvement methods (Wu et al., 2021; Ma et al., 2021; Li et al., 2021). Neural MOCO. Most of the existing learning-based methods are for single-objective combinatorial problems. Recently, a few attempts have been made to solve MOCO problems (Li et al., 2020; Wu et al., 2020; Zhang et al., 2021a;b). These methods adopt the MOEA/D framework (Zhang & Li, 2007) to decompose a MOCO problem into a number of single-objective subproblems, and then build a set of models to solve each subproblem separately. However, since the number of Pareto solutions would be exponentially large (Ehrgott, 2005), the required number of models would be huge for finding the whole Pareto set. In this work, we propose a single preference-conditioned model for solving MOCO problems, with which the decision makers can easily obtain any trade-off solutions. The proposed single neural MOCO solver could be much easier to use in a real-world system (Veličković & Blundell, 2021), than those using a large set of different models. 3 PROBLEM FORMULATION 3.1 MULTIOBJECTIVE COMBINATORIAL OPTIMIZATION A multiobjective combinatorial optimization (MOCO) problem can be defined as follows: min x∈X F (x) = (f1(x), f2(x), . . . , fm(x)), (1) where X is a discrete search space, and F (x) = (f1(x), . . . , fm(x)) is an m-objective vector. Since the individual objectives conflict each other, no single solution can optimize all of them at the same time. Therefore, practitioners are interested in Pareto optimal solutions, defined as follows. Definition 1 (Pareto Dominance). Let xa, xb ∈ X , xa is said to dominate xb (xa ≺ xb) if and only if fi(xa) ≤ fi(xb),∀i ∈ {1, ...,m} and fj(xa) < fj(xb),∃j ∈ {1, ...,m}. Definition 2 (Pareto Optimality). A solution x∗ ∈ X is a Pareto optimal solution if there does not exist x̂ ∈ X such that x̂ ≺ x∗. The set of all Pareto optimal solutions is called the Pareto set, and the image of the Pareto set in the objective space is called the Pareto front. Each Pareto solution represents an optimal trade-off among the objectives, and it is impossible to further improve one of the objectives without deteriorating any other objectives. 3.2 DECOMPOSITION AND PREFERENCE-BASED SCALARIZATION Decomposition is a mainstream strategy for solving multiobjective optimization problem (Zhang & Li, 2007). It decomposes a multiobjective problem into a number of subproblems, each of which can be a single objective or multiobjective optimization problem. MOEA/D (Zhang & Li, 2007) and its variants (Trivedi et al., 2016) solve these subproblems in a collaborative manner and generate a finite set of Pareto solutions to approximate the Pareto front. The most widely used way for constructing a single objective subproblem is the preference-based scalarization (Ehrgott, 2005; Miettinen, 2012). For an m-objective optimization problem, a preference vector for the objective functions can be defined as λ ∈ Rm that satisfies λi ≥ 0 and ∑m i=1 λi = 1. Weighted-Sum Aggregation is the simplest approach. It defines the aggregation function to minimize in the subproblem associated with λ as gws(x|λ) = m∑ i=1 λifi(x). (2) However, this approach can only find solutions on the convex hull of the Pareto front (Ehrgott, 2005). 
Weighted-Tchebycheff (Weighted-TCH) Aggregation is an alternative approach to minimize: gtch(x|λ) = max 1≤i≤m {λi|fi(x)− z∗i |}, (3) where z∗i < minx∈X fi(x) is an ideal value for fi(x). Any Pareto optimal solution could be an optimal solution of problem (3) with a specific (but unknown) preference λ (Choo & Atkins, 1983). 3.3 CURRENT DRAWBACKS AND OUR METHOD Drawbacks of Existing Methods. For many MOCO problems, the size of the Pareto set would be exponentially large with respect to the input size (e.g., nodes in MOTSP). It is computationally impractical for existing methods to find the whole Pareto set (Herzel et al., 2021). For this reason, all of the existing heuristic-based and learning-based methods are to find a small subset of approximate Pareto solutions. Decision makers can only select solutions from this small set, which often does not contain their preferred solutions. In addition, scalarization may also produce a complicated single objective subproblem. For example, the Tchebycheff scalarized subproblem of MOTSP is not a classic TSP, and thus cannot be solved by the highly specialized TSP solvers such as LKH (Helsgaun, 2000) or Concorde (Applegate et al., 2007). Our Method. Instead of finding a set of finite solutions, we propose a novel way to approximate the whole Pareto set using a single model. With our proposed model, decision makers can easily obtain any solution from the approximate Pareto set to satisfy their preferred trade-offs in real time as shown in Figure 2. This is a clear advantage to support interactive decision making. In addition, our proposed reinforcement learning based method can use a scalarization method to combine multiobjective rewards, and does not need to consider the problem-specific condition. In this paper, we mainly consider learning the whole Pareto front. It is possible to incorporate decision-maker’s preferences on specific regions for model building and inference as discussed in Appendix D.6. We believe our proposed method is a new principled way to solve multiobjective combinatorial optimization problems. 4 THE PROPOSED MODEL: PREFERENCE-CONDITIONED NEURAL MOCO 4.1 PREFERENCE-CONDITIONED SOLUTION CONSTRUCTION Decomposition and scalarization link preferences to their corresponding Pareto solutions. This work builds a preference-conditioned model to accommodate all the preferences. We use the MOTSP as an example to explain our model design. In an MOTSP instance s, a fully connected graph of n nodes (cities) with m distance metrics on each edge is given. A feasible solution is a tour that visits each city exactly once and returns to the starting city. The i-th objective to minimize is the tour length (total cost) based on the i-th distance metric. A tour can be represented as π = (π1, · · · , πt, · · · , πn), πt ∈ {1, · · · , n}, a permutation of all the nodes defining the order in which n cities is visited. Our model defines a preference-conditioned stochastic policy pθ(λ)(π|s) parameterized by θ(λ) to construct a valid solution in sequence: pθ(λ)(π|s) = ∏n t=1 pθ(λ)(πt|s,π1:t−1). (4) The goal is to learn an optimal preference-conditioned policy pθ(λ)(π|s) to construct tours with the lowest scalarized costs for each preference λ. 4.2 THE PROPOSED MODEL We propose to use an Attention Model (AM) (Kool et al., 2019) as our basic encoder-decoder model as shown in Figure 3. 
For the MOCO problems considered in this work, a preference-agnostic encoder is capable to transfer problem instances into embeddings (e.g., embedding for all cities) used in the preference-conditioned decoder. In our model, only the decoder’s parameters θdecoder(λ) are conditioned on the preference λ: θ(λ) = [θencoder,θdecoder(λ)]. (5) Preference-agnostic Encoder. The encoder takes a problem instance s (e.g., an MOTSP instance with n cities) as its input, and outputs a set of d-dimensional node embeddings {h1, · · · ,hn} for each city. For a given instance, the same embeddings can be used for different preferences. Hence we only need a single forward pass for the dense encoder. We use the attention-based encoder as in Kool et al. (2019) for all preferences. Preference-based Attention Decoder. The decoder has the same structure as in the attention-based model (Kool et al., 2019), but with parameters θdecoder(λ) = [WQ(λ),WK(λ),WV (λ),WMHA(λ)] conditioned on the preference λ. It takes the nodes embeddings for all cities as input, and sequentially selects the next node πt with probability pθ(λ)(πt|s,π1:t−1). At time step t, the decoder first constructs a context embedding ĥ(C) = [hπ1 ,hπt−1 ]WQ(λ) from the first selected node hπ1 , and the last selected node hπt−1 . The matrix WQ(λ) ∈ R2d×d projects the concatenated 2d-dimensional vector to a d-dimensional vector. Then we further aggregate the context embedding via a Multi-Head Attention (MHA) (Vaswani et al., 2017) with the embeddings for all cities {h1, · · · ,hn}: h(C) = MHA(Q = ĥ(C),K = {h1, · · · ,hn}WK(λ), V = {h1, · · · ,hn}WV (λ))WMHA(λ), (6) where Q,K, V are the query, key and value for MHA, respectively. WMHA(λ) represents the MHA parameters. The context embedding h(C) contains all information for the instance and the current partial tour at step t. We can calculate the logit for selecting each city with its embedding hj : logitj = { C · tanh(h T (C)hj√ d ) if j ̸= πt′ ∀t′ < t, −∞ otherwise. (7) All already visited cities are masked with −∞ and will not be selected as the next city. The logits of the rest cities are clipped into [−C,C] (C = 10) as in the AM model (Kool et al., 2019). The probability for choosing the j-th city at time step t can be calculated as pθ(λ)(πt = j|s,π1:t−1) = elogitj/ ∑ k e logitk . With this probability, the decoder can construct a feasible tour. One remaining designing issue is how to generate the preference-conditioned parameters θdecoder(λ). Multiplicative interactions (Jayakumar et al., 2020) and hypernetwork (Schmidhuber, 1992; Ha et al., 2017) provide a powerful and efficient way for conditional computation, which is widely used for transfer learning (von Oswald et al., 2020; Ehret et al., 2021; Lin et al., 2020; Navon et al., 2021). We use a simple MLP hypernetwork θdecoder(λ) = MLP(λ|ψ) to generate the decoder parameters conditioned on the preference. The details of our proposed model can be found in Appendix B. 
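The node-selection step in Eq. (7) combines a scaled dot product, tanh clipping, and masking of visited cities. The following is a minimal PyTorch sketch of that computation for a single instance; the function name and tensor layout are our own simplification, not the released implementation.

```python
import torch

def selection_probs(context, node_embeddings, visited_mask, clip_c=10.0):
    """Probability of selecting each city given the context embedding h_(C).

    context:         (d,)   aggregated context embedding
    node_embeddings: (n, d) city embeddings h_1..h_n
    visited_mask:    (n,)   True for already visited cities
    """
    d = node_embeddings.size(-1)
    logits = clip_c * torch.tanh(node_embeddings @ context / d ** 0.5)  # Eq. (7)
    logits = logits.masked_fill(visited_mask, float("-inf"))            # mask visited
    return torch.softmax(logits, dim=-1)
```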
Algorithm 1 Neural MOCO Training
1: Input: preference distribution Λ, instance distribution S, number of training steps T, number of preferences per iteration K, batch size B, number of tours N
2: Initialize the model parameters θ
3: for t = 1 to T do
4:   λ_k ∼ SamplePreference(Λ)  ∀k ∈ {1, · · · , K}
5:   s_i ∼ SampleInstance(S)  ∀i ∈ {1, · · · , B}
6:   π^j_{ki} ∼ SampleTour(p_{θ(λ_k)}(·|s_i))  ∀k, i, ∀j ∈ {1, · · · , N}
7:   b(s_i|λ_k) ← (1/N) ∑_{j=1}^{N} L(π^j_{ki}|λ_k, s_i)  ∀k ∈ {1, · · · , K}, ∀i ∈ {1, · · · , B}
8:   ∇J(θ) ← (1/KBN) ∑_{k=1}^{K} ∑_{i=1}^{B} ∑_{j=1}^{N} [(L(π^j_{ki}|λ_k, s_i) − b(s_i|λ_k)) ∇_{θ(λ_k)} log p_{θ(λ_k)}(π^j_{ki}|s_i)]
9:   θ ← ADAM(θ, ∇J(θ))
10: end for
11: Output: The model parameter θ

Instance Augmentation for MOCO. Our proposed model only has a small extra computational and memory overhead compared to the original single-objective AM solver. We keep our model as simple as possible, making it easy for our approach to use other models and other improvements developed for single-objective NCO. These properties are crucially important for generalizing NCO to multiobjective problems. In this work, we simply extend the instance augmentation method (Kwon et al., 2020) to MOCO. The details can be found in Appendix B.1. 5 PREFERENCE-CONDITIONED MULTIOBJECTIVE POLICY OPTIMIZATION 5.1 COST FUNCTION Our proposed node selection strategy guarantees that the model always generates feasible solutions. In this section, we develop an efficient multiobjective policy optimization method to train the model for all the preferences simultaneously. For an MOTSP problem, the objective functions are a vector of m different costs (i.e., lengths) for a tour, L(π) = [L_1(π), · · · , L_m(π)]. We can define a weighted-Tchebycheff scalarized cost for each preference λ:

L(π|λ) = max_{1≤i≤m} {λ_i |L_i(π) − (z*_i − ε)|},   (8)

where z*_i is an ideal cost for the i-th objective. For a given instance s, our goal is to minimize the expected cost over all preferences:

J(θ|s) = E_{λ∼Λ, π∼p_{θ(λ)}(·|s)} L(π|λ),   (9)

where Λ is the uniform distribution over all valid preferences. To train the model, we repeatedly sample different instances s ∼ S at each iteration. We define the training loss as J(θ) = E_{s∼S} J(θ|s). 5.2 MULTIOBJECTIVE REINFORCE For a given instance s and a specific preference λ, we use REINFORCE (Williams, 1992) to estimate the gradient of the preference-conditioned scalar cost:

∇J(θ|λ, s) = E_{π∼p_{θ(λ)}(·|s)} [(L(π|λ, s) − b(s|λ)) ∇_{θ(λ)} log p_{θ(λ)}(π|s)],   (10)

where b(s|λ) is the baseline of the expected cost to reduce the gradient variance. This gradient can be estimated by Monte Carlo sampling. At each update step, we randomly sample K preferences {λ_1, · · · , λ_K} ∼ Λ, B instances {s_1, · · · , s_B} ∼ S, and N different tours {π^1_i, · · · , π^N_i} ∼ p_{θ(λ_k)}(·|s_i) for each λ_k-s_i combination. The approximate gradient is:

∇J(θ) ≈ (1/KBN) ∑_{k=1}^{K} ∑_{i=1}^{B} ∑_{j=1}^{N} [(L(π^j_i|λ_k, s_i) − b(s_i|λ_k)) ∇_{θ(λ_k)} log p_{θ(λ_k)}(π^j_i|s_i)].   (11)

We use the shared baseline b_shared(s_i|λ_k) = (1/N) ∑_{j=1}^{N} L(π^j_{ki}|λ_k, s_i) over the N sampled tours for each λ_k-s_i combination. The starting node for each tour π^j_{ki} is chosen at random to force diverse rollouts, as proposed in Kwon et al. (2020). The algorithm is shown in Algorithm 1. 5.3 ACTIVE ADAPTION We also propose a simple yet powerful active adaption approach to further adjust the whole model to approximate the Pareto front for a given test instance in Appendix B.3. The proposed method does not depend on a specific instance distribution S, and is suitable for out-of-distribution adaption. 6 EXPERIMENTS Problems and Model Setting.
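A condensed sketch of one update of Algorithm 1 in PyTorch-style code. The interfaces `model.sample_tours`, `evaluate_objectives`, and `tch_cost` are assumed helpers (not the released implementation): they return sampled tours with their summed log-probabilities, the per-tour objective vectors, and the Tchebycheff cost of Eq. (8), respectively.

```python
import torch

def training_step(model, optimizer, instances, preferences,
                  evaluate_objectives, tch_cost):
    """One REINFORCE update with the shared baseline of Eq. (11).

    model.sample_tours returns (tours, log_probs) with log_probs of shape (B, N);
    evaluate_objectives returns costs of shape (B, N, m);
    tch_cost applies Eq. (8) per tour and returns shape (B, N).
    """
    per_pref_losses = []
    for lam in preferences:                                    # K sampled preferences
        tours, log_probs = model.sample_tours(instances, lam)  # N tours per instance
        costs = evaluate_objectives(instances, tours)
        scalar_cost = tch_cost(costs, lam)                     # (B, N)
        baseline = scalar_cost.mean(dim=1, keepdim=True)       # shared baseline b(s|lam)
        advantage = (scalar_cost - baseline).detach()
        per_pref_losses.append((advantage * log_probs).mean()) # REINFORCE surrogate loss
    loss = torch.stack(per_pref_losses).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```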
We consider MOTSP (Lust & Teghem, 2010a), MOCVRP (Lacomme et al., 2006) and MOKP (Bazgan et al., 2009) in our experimental studies, and use the same model settings for all problems with different task-specific input sizes and mask methods. The main policy model encoder is the Attention Model (Kool et al., 2019) and the hypernetwork is an MLP. We randomly generate 100, 000 problem instances on the fly for each epoch, and train the model for 200 epochs. The optimizer is ADAM with learning rate η = 10−4 and weight decay 10−6. We train our models on a single RTX 2080-Ti GPU, and it costs about 10 minutes for an epoch on MOTSP100. We give detailed model settings, problem formulations, and more experimental results in Appendix BCD. The source code can be found in https://github.com/Xi-L/PMOCO. Baseline. We call our proposed preference-conditioned multiobjective combinatorial optimization as P-MOCO. We compare it with three widely-used evolutionary algorithm frameworks for MOCO: MOGLS (Jaszkiewicz, 2002) is a multiobjective genetic local search algorithm, NSGAII (Deb et al., 2002) is a Pareto dominance-based multiobjective genetic algorithm, and MOEA/D (Zhang & Li, 2007) is a decomposition-based multiobjective evolutionary algorithm. All these algorithm frameworks need problem-specific heuristics to generate and search feasible solutions for different problems. We also compare P-MOCO with two other learning-based methods: DRL-MOA (Li et al., 2020) decomposes a MOCO with different preferences and builds a Pointer Network (Vinyals et al., 2015; Bello et al., 2017) to solve each subproblem, and AM-MOCO is a multi-models variant of our proposed model, which builds Attention Model (Kool et al., 2019) for each subproblem. The Weight-Sum scalarization of MOTSP and MOKP are their respective single-objective counterpart. Therefore, we also compare our method with the approach that uses some state-of-the-art singleobjective solvers for each weight-sum subproblem. Model Information for the learning-based methods is shown in Table 1. Our model supports flexible preference assignment and only has 1.1% total parameters to the multi-model counterpart. Inference and Metrics. We report the results and run time for solving 200 random test instances for each problem, with normally 101 to 105 different trade-offed solutions, and up to 10, 011 solutions for our proposed method. In most cases, we report our model’s zero-shot generalization performance without any search and fine-tune. We use the hypervolume indicator (Zitzler et al., 2003) to measure the performance for each method. For a set P ⊂ Rm in the objective space, we can find a reference point r∗ that dominated by all solutions in P , and define the hypervolume HV(P ) as volume for: S = {r ∈ Rm | ∃ y ∈ P such that y ≺ r ≺ r∗}, (12) where HV(P ) = Vol(S). In general, the larger the hypervolume, the better the solution set tends to be. The ground truth Pareto set always has the largest hypervolume. We report the normalized hypervolume values in [0, 1] with respect to the same r∗ for all the methods, and also the ratios of hypervolume difference to our method. A Wilcoxon rank-sum test with a significance level 1% is conducted to compare the results for each experiment. More details can be found in Appendix D.1. 6.1 RESULTS AND ANALYSIS MOTSP. The results on two and three objective MOTSP are shown in Table 2 and Table 3 respectively. 
MOGLS, NSGAII and MOEA/D all use 2-opt local search heuristic (Jaszkiewicz, 2002) to MOTSP MOCVRP MOKP search for promising solutions. We also include two weight-sum scalarization baselines with the state-of-the-art LKH solver (Helsgaun, 2000; Tinós et al., 2018) and Google OR tools (Perron & Furnon, 2019). For the bi-objective problems, our proposed method with a single model has similar performances compared with AM-MOCO on all problems. It achieves the best performance with instance augmentation, which significantly outperforms other methods but is beaten by the LKH solver. For the three objective problems, our method can further improve its performance by generating much more trade-off solutions within a reasonable amount of time, which other methods cannot do. As shown in Figure 2 and Figure 5, our method can successfully learn the mapping from preferences to the corresponding solutions, and can generate a good prediction to the whole Pareto front. Decision makers can easily obtain any preferred trade-off solutions as they like. This flexibility could be desirable in many real-world applications. More discussion on the connection between the preference and Pareto solution for three-objective TSP can be found in Appendix D.5 D.6 D.7. MOCVRP. In this problem, each node has a demand, and we need to construct multiple return routes for a vehicle with a fixed capacity from the same depot to handle all demands. The objectives we consider are to minimize the tour length for all routes and also the tour length for the longest routes (the makespan in scheduling) (Lacomme et al., 2006). All the non-learning algorithm frameworks use the problem-specific constructive heuristics and local search method proposed in Lacomme et al. (2006) to search feasible non-dominated solutions. The results in Table 2 show that our method significantly outperforms the non-learning heuristics in terms of both solution quality and running time. It also outperforms AM-MOCO with 100 individual models, which could be due to the asymmetric objective scales. We provide further analysis in Appendix D.4. MOKP. The multiobjective 0-1 knapsack problem can be found in many real-world applications (Bazgan et al., 2009). We consider the uni-dimension problem, where each item has multiple values and one weight. The goal is to select a subset of items to maximize all obtained values with a weight constraint. The non-learning methods use binary coding with a greedy transformation heuristic to maintain feasibility (Ishibuchi et al., 2014). We also include weight-sum scalarization baselines with dynamic programming (DP) and a strong greedy search based on the value-weight ratio. According to the results in Table 2, our method has the best performance on all problems. The DP method is also outperformed by our method since the weight-sum scalarization can only find the convex hull of the Pareto front. The Tchebycheff scalarization of MOKP is not a KP problem, while our method is more flexible to use Tchebycheff scalarization on the reward function. We also report the results on 10 objective MOKP100 and the generalization performance to problem with 500 items in Appendix D.8. Out-of-Distribution Problems and Active Adaption. We also validate the generalization performance of our method on 6 out-of-distribution (OOD) MOTSP problems from Fonseca et al. (2006). Their ground truth Pareto fronts can be obtained by exhaustive search. The results are shown in Appendix D.2 due to the page limit. 
With active adaption, our method can achieve good performance (1% - 1.5% HV gap to the ground truth Pareto fronts) on these OOD problems. 7 CONCLUSION AND FUTURE WORK Conclusion. We have proposed a novel preference-conditioned method to approximate the whole Pareto front for MOCO problems using a single model. It allows decision makers to directly obtain any trade-off solutions without any search procedure. Experiments on different problems have shown that our proposed method significantly outperforms other methods in terms of performance, speed and model efficiency. We believe the proposed method is a principled way for solving MOCO. Future Work. In a sense, our method can be regarded as a learning version of the decompositionbased algorithm (MOEA/D (Zhang & Li, 2007)) dealing with all the possible trade-off preferences. Instead of maintaining a set of finite solutions as in other MOEA/D vaiants (Trivedi et al., 2016), we build a single learning-based model to solve the subproblems for all the preferences simultaneously in a collaborative manner. We believe the single-model-for-all-preference approach is a promising alternative to the current default finite-population-based methods, and it could be an important research direction for multiobjective optimization. Our method can be further improved with other advanced models and efficient multiobjective training procedures. In the future, we will study fundamental issues of multiobjective optimization (e.g., convergence v.s. diversity, exploitation v.s. exploration trade-off) for Pareto set learning methods. Limitation. It is very difficult to give a convergence guarantee for learning-based MOCO, where each preference-based subproblem could be already NP-hard, and the number of Pareto solutions is exponentially large with respect to the input size. See detailed discussion in Appendix A. ACKNOWLEDGMENTS We thank Prof. Hisao Ishibuchi for his valuable comments on an earlier version of this work. This work was supported by the Hong Kong General Research Fund (11208121, CityU-9043148). A PARETO SET LEARNING AND APPROXIMATION ANALYSIS A.1 PARETO SET LEARNING AND CONVERGENCE GUARANTEE In this work, we have proposed a novel neural combinatorial optimization (NCO) method to approximate the whole Pareto set for MOCO problems with a single model. The proposed learning-based MOCO solver can directly generate arbitrary trade-off solutions without extra optimization. We believe it is a principled way to solve MOCO problems. However, the lack of an exact optimality guarantee is a limitation of the proposed method, which is also the case for previous work on single-objective neural combinatorial optimization (Vinyals et al., 2015; Bello et al., 2017; Kool et al., 2019). This limitation is mainly due to the fact that many singleobjective combinatorial optimization (CO) problems are NP-hard, and the size of Pareto sets for a MOCO problem would be exponentially huge, which makes it very difficult to exactly solving the problems (Ehrgott, 2005; Herzel et al., 2021). In addition, the training for the parameterized policy (neural network model) cannot guarantee to fit all training problems perfectly. The generalization ability to problem instances with different patterns (out-of-distribution generalization) is another critical issue that makes it difficult to give an exact optimality guarantee to the proposed learningbased algorithm. 
On the other hand, our proposed model is an efficient mapping from the preferences to the corresponding approximate set of the Pareto optimal solutions. It provides a flexible way for decision makers to obtain an approximate solution with their preferred trade-off directly. The experimental results also show that our proposed method can generate good approximate Pareto sets for three different MOCO problems. In the next subsection, we provide a thorough discussion on the approximation ability of our proposed method. A.2 APPROXIMATION ANALYSIS For a MOCO problem, the number of Pareto solutions could be exponentially large with respect to its input size, which makes the problem intractable (Ehrgott, 2005; Herzel et al., 2021). The preference-based scalarization methods and decomposition methods (Choo & Atkins, 1983; Zhang & Li, 2007) we used provides a principled way to link the Pareto solutions with preference, allowing us to tackle the problem in a systematic manner. In this work, we propose to approximately solve the scalarized subproblem with all preferences via a single model. We first briefly review the weighted scalarization method and its Pareto optimality guarantee as discussed in the main paper. Then we provide further discussion on the approximation analysis. Our proposed method decomposes a MOCO problem into preference-based subproblems with the weighted-Tchebycheff scalarization (Weighted-TCH): min x∈X gtch(x|λ) = min x∈X max 1≤i≤m {λi|fi(x)− (z∗i − ε)|}, (13) where z∗i is the ideal value for objective fi(x) (e.g., the lower bound), and u ∗ i = z ∗ i − ε is a utopia value with small positive component ε. The preference vector λ ∈ Rm satisfies λi ≥ 0 and ∑m i=1 λi = 1, where λi is the preference for the i-th objective. This approach has a desirable property: Lemma 1 (Choo & Atkins (1983)). A feasible solution x ∈ X is Pareto optimal if and only if there is a weight vector λ > 0 such that x is an optimal solution to the problem (13). According to Lemma 1, we can obtain any Pareto solution by solving the Weighted-TCH subproblem with a specific weight. However, the weight for each Pareto solution depends on its objective values, which are not known in advance (Sawaragi et al., 1985; Ehrgott, 2005). The decision-maker still needs to solve multiple subproblems with different preferences to find a desirable solution. To find the whole Pareto set, it needs to solve an exponentially huge number of subproblems. Given a problem instance s, our proposed model provides a single mapping function xλ = h(λ) from any preference λ to its corresponding solution xλ, which is constructed by the preferencebased policy pθ(λ)(x|s). In the ideal case, if all generated solutions xλ are the optimal solutions x∗λ of problem (13) with preference λ, according to Lemma 1, our proposed model can generate the whole Pareto set (all Pareto optimal solutions) for the original MOCO problem. In practice, we are interested in the proposed method’s approximation ability. We find that its performance strongly depends on the approximation ability of the parameterized policy (neural network model) on the single-objective scalarized subproblem. We first give an informal claim on our method’s approximation ability, then provide detailed explanations and discussions. (Informal) Claim 1. If the proposed method can approximately solve the subproblem (13) with any preference λ, it can generate a good approximation to the whole Pareto set for the MOCO problem. 
To support this claim, we follow the traditional ε-Pareto approximate method for MOCO problems (Papadimitriou & Yannakakis, 2000; Herzel et al., 2021). First, an ε-Pareto domination relation between two individual solutions can be defined as: Definition 3 (ε-Pareto Domination). For a MOCO problem and an ε > 0, let xa, xb ∈ X , xa is said to ε-dominate xb (xa ≺ε xb) if fi(xa) ≤ (1 + ε)fi(xb),∀i ∈ {1, · · · ,m}. This definition is a natural generalization from the (1 + ε)-approximation for single-objective optimization. With this concept, an ε-approximate Pareto set (Papadimitriou & Yannakakis, 2000) can be defined as: Definition 4 (ε-Approximate Pareto Set). For an ε > 0, a set Pε ⊂ X is an ε-approximate Pareto set, if for any feasible solution x ∈ X , there exists a solution x′ ∈ Pε such that x′ ≺ε x. In other words, all feasible solutions of the MOCO problem can be almost dominated by some solutions in Pε (Papadimitriou & Yannakakis, 2000). When the Pareto set is intractable and hard to find, the ε-approximate Pareto set would be a reasonable choice to achieve in practice. Each MOCO problem has a unique Pareto set, but can have different ε-approximate Pareto sets. The ability of our proposed method to find an ε-approximate Pareto set strongly depends on its performance on each single-objective preference-based subproblem. Theorem 1. Let x∗λ denotes the optimal solution of the problem (13) with preference λ, if the proposed method can generate an approximate solution xλ ≺ε x∗λ for any preference λ, it is able to generate an ε-approximate Pareto set Pε to the MOCO problem. Proof. Let P be the Pareto set for a MOCO problem, for any xPareto ∈ P , according to Lemma 1, there is a weight vector λ > 0 such that x = x∗λ is the optimal solution for subproblem (13) with a specific preference λ. Therefore, our proposed method can generate an approximated solution xλ ≺ε x∗λ = xPareto. By generating approximate solutions for all xPareto ∈ P , our proposed method is able to generate an ε-approximate Pareto set Pε to the MOCO problem. A.3 LIMITATION Strong Assumption on (Approximately) Solving all Subproblems: The approximation guarantee in Theorem 1 heavily depends on the ability to (approximately) solve each weighted subproblem. Due to the NP-harness, it is indeed non-trivial to give a convergence guarantee to generate ε-dominate solutions for any preference with a small enough ε. This limitation also applies for other end-to-end learning-based (e.g, neural combinatorial optimization) and heuristic-based methods. We are aware that some efforts have been made to combine the learning-based method with dynamic programming to achieve asymptotically optimal solution solution for specific single-objective problem in recent works (Cappart et al., 2021b; Kool et al., 2021). These methods provide a controllable trade-off between the solution quality and the computational cost for solving NP-hard problems. However, their generalization to the multi-objective problem is not straightforward, since the scalarized subproblem for each preference is not necessary the same as its single-objective counterpart. For example, a Tchebycheff scalarized MOTSP is not a single-objective TSP as discussed at the end of Section 3.2. In addition, according to Bengio et al. (2020), these methods belong to the class of learning alongside the algorithms, while our proposed approach is learning to directly produce the solutions (neural combinatorial optimization). 
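The ε-domination relation of Definition 3 and the set condition of Definition 4 above are straightforward to check numerically; a small NumPy sketch (minimization, operating on objective vectors only):

```python
import numpy as np

def eps_dominates(fa, fb, eps):
    """True if a solution with objectives fa epsilon-dominates fb (Definition 3)."""
    return np.all(np.asarray(fa) <= (1.0 + eps) * np.asarray(fb))

def is_eps_approximate_set(candidate_objs, all_objs, eps):
    """Definition 4: every feasible point is eps-dominated by some candidate."""
    return all(any(eps_dominates(c, x, eps) for c in candidate_objs)
               for x in all_objs)
```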
Therefore, the idea for learning enhanced multiobjective combinatorial algorithm could be an important research topic in future, but out of the scope for the current work. Dense Approximation for the Whole Pareto Set: Another concern would be the required number of solutions in the ε-approximate Pareto set Pε. If the required number is exponential to the input size, the approximation itself is also intractable. In their seminal work, Papadimitriou & Yannakakis (2000) establish a promising result: Theorem 2 (Papadimitriou & Yannakakis (2000)). For any multiobjective optimization problem and any ε, there is an ε-approximate Pareto set Pε of which the size is polynomial in the number of solutions and 1ε (but exponential in the number of objectives). However, the existence of such a set still does not mean that it can be easily found (Papadimitriou & Yannakakis, 2000; Herzel et al., 2021). The computability (whether Pε can be constructed in polynomial time) would be hard to justify for a real-world problem. For a new unseen problem instance in practice, our proposed method might still need to generate an exponentially large number of solutions to construct an ε-approximate Pareto set Pε. It is also unclear how to properly select a set of preferences in advance. Many research efforts have been made on developing approximation methods for solving MOCO problems in the past decades (Herzel et al., 2021; Hansen, 1980; Papadimitriou & Yannakakis, 2000; Vassilvitskii & Yannakakis, 2005; Koltun & Papadimitriou, 2005; Bazgan et al., 2017). In future work, it is important to better leverage the current advanced approximation strategies to design more efficient preference-based methods. In the learning-based optimization scenario we consider, it is also possible to learn the suitable approximation method and/or preference distribution directly from the data (problem instances). B DETAILS ON THE PROPOSED MODEL B.1 MODEL SETTING We use the same model for all MOCO problems while tuning the input size and mask method for each problem. Table 4 shows the number of parameters of a standard single-objective attention model (Kool et al., 2019) and our proposed preference-based multiobjective attention model. Our model supports flexible preference assignment at the inference time with a small overhead, while the other neural MOCO methods all require training multiple AM models for different preferences. We build the single-preference attention models as well as our model following the implementation in Kwon et al. (2020). Attention Encoder. The encoder we use is the standard attention encoder as in Kool et al. (2019), and it is shared by all preferences. The encoder has 6 attention layers, and 128-dimensional node embedding for the input nodes. Each attention layer has a multi-head attention (MHA) with eight 16-dimensional heads, and a fully connected layer (FC) with one 512-dimension hidden sublayer. The encoder also includes skip-connection and batch normalization for each attention layer. We use the same model for all MOCO problems (MOTSP, MOCVRP, MOKP) but with different input dimensions for each problem, which will be introduced in the next section. Preference-Conditioned Decoder. The decoder’s main model structure is the same as the AM decoder (Kool et al., 2019). It has one multi-head attention layer with eight 16-dimensional heads similar to the encoder, but without skip-connection and batch normalization. 
The decoder uses a single 128-dimensional attention head to calculate the probabilities of selecting different nodes at each step. Different problems have different masking methods for probability calculation. We use a simple MLP model to generate the preference-conditioned parameters for the decoder. For all MOCO problems, the MLP model has two 128-dimensional hidden layers with ReLu activation. The input is an m-dimensional preference vector λ which satisfies λi ≥ 0 and ∑m i=1 λi = 1, where m is the number of objectives and λi is the preference for the i-th objective. We adopt the parameter compression approach in Ha et al. (2017) to control the model size. The MLP model first generates a hidden embedding e(λ) = MLP(λ|ψ), then maps the hidden embedding to the decoder parameters via linear projection θdecoder = We(λ) + b. The learnable parameters are ψ for the MLP model MLP(λ|ψ) and the parameter matrices W and b for the decoder. Training Procedure. For all problems, we train our proposed model for 200 epochs, with 100, 000 problem instances randomly generated on the fly at each epoch. At each iteration step, we need to sample K preferences, B problem instances, and N tours to calculate the policy gradient. We set K×B = 64 to make the batch of 64 instances for training a single AM model, and letN equal to the problem size (e.g., the number of nodes) as in Kwon et al. (2020). We find the model performance is equally good for setting K = 1, 2 and 4, and keep using K = 1 for all problems. In other words, we randomly generate a preference λ that satisfies λi ≥ 0 and ∑m i=1 λi = 1 at each training step. For the AM-MOCO baseline, we adapt the transfer training approach in Li et al. (2020) to train multiple AM models for different preferences. We first train a single AM model with a single preference on one objective from scratch with 200 epochs, then transfer its parameter to the model for neighbor subproblem with similar preference, and fine-tune the new model with 5 epochs. With sequentially transfer and fine-tune, we can obtain a set of trained models for different preferences. In most experiments, we set the number of preferences as 101. Therefore, we need to build 101 AM models with total 700 training epochs. Instance Augmentation for MOCO. Due to the design choice of minimal essential change (e.g., the preference-conditioned decoder), our method can also enjoy the current improvements that were originally proposed for the single objective NCO. Here, we generalize the instance augmentation method proposed in Kwon et al. (2020) to the MOCO version. The key idea of instance augmentation for NCO is to find multiple efficient transformations for the original problem such that they share the same optimal solution. Then, we can use an NCO method to solve all problems and select the best solution among all obtained (potentially different) solutions. In this way, we have a more robust result similar to the test-time augmentation for computer vision (Szegedy et al., 2016). For the single-objective euclidean TSP and CVRP, there is a set of straightforward transformations, which simply flips or rotates the coordinate for all the 2D locations in a problem instance (Kwon et al., 2020). For a location (x, y), there is eight different transformation, namely, {(x, y), (y, x), (x, 1−y), (y, 1−x), (1−x, y), (1−y, x), (1−x, 1−y), (1−y, 1−x)}. For an m-objective euclidean MOTSP problem, the concrete location representations are independent for each objective. 
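A minimal PyTorch sketch of the MLP hypernetwork with parameter compression described earlier in this subsection: two 128-dimensional hidden layers with ReLU produce the embedding e(λ), and a single linear projection maps it to the flat decoder parameters. The exact parameter count and the reshaping into W_Q, W_K, W_V, and W_MHA are omitted here, so this is an illustration rather than the released model.

```python
import torch
import torch.nn as nn

class PreferenceHypernetwork(nn.Module):
    """theta_decoder(lambda) = W * MLP(lambda | psi) + b, as a flat parameter vector."""
    def __init__(self, n_objectives, n_decoder_params, hidden_dim=128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(n_objectives, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
        )
        self.projection = nn.Linear(hidden_dim, n_decoder_params)  # parameter compression

    def forward(self, preference):
        embedding = self.mlp(preference)     # e(lambda) = MLP(lambda | psi)
        return self.projection(embedding)    # flat decoder parameters to be reshaped
```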
Therefore, we can independently apply different transformations for each objective. Considering the above eight transformations for each objective, we can have $8^m$ different problem transformations for an MOTSP instance. We use the fixed 8 transformations for MOCVRP, since each node only has one 2D coordinate, and no transformation for MOKP. The details for each problem can be found in the next section.

B.2 TRAINING EFFICIENCY

We use the same number of samples to train our proposed preference-based model as the other single-objective solvers need (Kool et al., 2019; Kwon et al., 2020). Indeed, our proposed model requires significantly fewer samples and training epochs than the other MOCO methods that need to build multiple models for different preferences. We compare our model's performance on one of the objectives (e.g., with preference (1, 0)) with other SOTA single-objective solvers and a learning-based solver; the results are shown in Table 5. The results of Concorde/LKH/OR Tools are from Kwon et al. (2020), and we run the learning-based solver ourselves. We report the average performance over 10,000 test instances. AM is the single-objective solver (one model in AM-MOCO), P-MOCO (single preference) is our proposed model trained only on a single fixed preference (1, 0), and P-MOCO (all preferences) is our proposed model with the result reported for the preference (1, 0). With the same amount of training samples, our model has similar single-objective performance to the learning-based single-objective solver, while it can additionally approximate the whole Pareto front. The learning-based solver's performance can be further improved by sampling or active search. These results indicate that we can use a single encoder to efficiently learn a shared representation for all trade-offs among different objectives, and that there is positive knowledge transfer among preferences during the learning procedure. In addition, it also confirms the assumption that similar preferences should have similar corresponding (approximate) Pareto solutions for the multiobjective problems we consider in this paper. These findings could be useful for designing more powerful learning-based models for MOCO in the future.

B.3 ACTIVE ADAPTION

After end-to-end training, our proposed method can directly generate different trade-off solutions to a given problem without any further search procedure. However, similar to single-objective neural combinatorial optimization, this approach could still have a gap to the Pareto front, especially for problems out of the training distribution S (e.g., with different sizes and patterns) (Lisicki et al., 2020). Iterative search methods, such as sampling and beam search, can further improve the performance for a single solution or a single preference (Veličković & Blundell, 2021). However, these approaches cannot find a better approximation to the whole Pareto set for a MOCO problem.
Algorithm 2 Neural MOCO Active Adaption
1: Input: model parameter θ, instance s, preference distribution Λ, number of adaption steps T, number of preferences per iteration K, number of tours N
2: for t = 1 to T do
3:   $\lambda_k \sim \text{SamplePreference}(\Lambda) \quad \forall k \in \{1, \cdots, K\}$
4:   $\pi^j_k \sim \text{SampleTour}(p_{\theta(\lambda_k)}(\cdot|s)) \quad \forall k \in \{1, \cdots, K\}, \ \forall j \in \{1, \cdots, N\}$
5:   $b(s|\lambda_k) \leftarrow \frac{1}{N}\sum_{j=1}^{N} L(\pi^j_k|\lambda_k, s) \quad \forall k \in \{1, \cdots, K\}$
6:   $\nabla\mathcal{J}(\theta) \leftarrow \frac{1}{KN}\sum_{k=1}^{K}\sum_{j=1}^{N}\big[(L(\pi^j_k|\lambda_k, s) - b(s|\lambda_k))\nabla_{\theta(\lambda_k)}\log p_{\theta(\lambda_k)}(\pi^j_k|s)\big]$
7:   $\theta \leftarrow \text{ADAM}(\theta, \nabla\mathcal{J}(\theta))$
8: end for
9: Output: The model parameter θ

We propose a simple yet powerful active adaption approach as shown in Algorithm 2. It iteratively adapts the model parameters θ(λ) to a given instance s (or a batch of instances) with all preferences from the distribution Λ, rather than searching for a specific solution. This method is similar to the active search in Bello et al. (2017), which actively refines the single-objective model for efficient candidate solution search. Our approach focuses on adapting the whole model for a better Pareto front approximation. Since this method is distribution-agnostic (it does not depend on a specific instance distribution S), it is suitable for out-of-distribution adaption.

C DETAILS OF THE MOCO PROBLEMS

This section introduces the detailed problem formulations for the MOTSP, MOCVRP and MOKP used in this work. We also provide the model configuration (e.g., input size, masks) for each problem.

C.1 MOTSP

We consider the Euclidean multiobjective traveling salesman problem (Euclidean MOTSP), which is widely used in the MOCO community (Lust & Teghem, 2010b; Florios & Mavrotas, 2014). Its single-objective counterpart, the 2D Euclidean TSP, has also been studied in single-objective neural combinatorial optimization (NCO) (Vinyals et al., 2015; Bello et al., 2017; Kool et al., 2019). A general m-objective MOTSP instance s with n nodes has m cost matrices of size $n \times n$, $\{C^i = (c^i_{jk}), i = 1, \cdots, m\}$, for the m different costs. The problem is to find a tour (cyclic permutation π) that minimizes all the costs:

$\min L(\pi|s) = \min\big(L_1(\pi|s), L_2(\pi|s), \cdots, L_m(\pi|s)\big), \quad \text{where } L_i(\pi|s) = c^i_{\pi(n)\pi(1)} + \sum_{j=1}^{n-1} c^i_{\pi(j)\pi(j+1)}. \quad (14)$

In a Euclidean MOTSP, the cost information is stored in the nodes rather than the edges. The j-th node has a 2m-dimensional vector $[x^1_j, x^2_j, \cdots, x^m_j]$, where $x^i_j \in \mathbb{R}^2$ is the 2D coordinate for the i-th objective. The i-th cost $c^i_{jk} = \|x^i_j - x^i_k\|_2$ is the Euclidean distance for moving from node j to node k. If we only have one objective (m = 1), it reduces to the single-objective 2D Euclidean TSP:

$\min_{\pi} L_1(\pi|s) = \|x_{\pi(n)} - x_{\pi(1)}\|_2 + \sum_{j=1}^{n-1} \|x_{\pi(j)} - x_{\pi(j+1)}\|_2. \quad (15)$

The single-objective TSP is already NP-hard, and so is the MOTSP. In addition, the Pareto set of MOTSP has an exponential cardinality with respect to its input size (e.g., the number of nodes), so it is intractable even for the 2-objective case (Ehrgott & Gandibleux, 2003).

Problem Instance. Similar to the previous work (Lust & Teghem, 2010b; Florios & Mavrotas, 2014), we randomly sample all n nodes with a uniform distribution on the 2m-dimensional unit hypercube (i.e., $[0, 1]^{2m}$) for all problem instances.

Model Details. In the m-objective MOTSP, each node has a 2m-dimensional vector to store all cost information, so the input size for the encoder is 2m. To calculate the probability of selecting the next node, the decoder masks all already visited nodes as unavailable. We have a valid tour when all nodes are selected (we assume the last node connects back to the start node).
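As a small illustration of the tour costs in Eq. (14), the following NumPy sketch (function name ours) evaluates the m objective values of a tour for a Euclidean MOTSP instance stored as per-objective 2D coordinate blocks.

```python
import numpy as np

def motsp_costs(instance, tour):
    """Evaluate the m tour lengths of Eq. (14) for a Euclidean MOTSP instance.
    instance: (n_nodes, 2m) array of per-objective 2D coordinates.
    tour: a permutation of range(n_nodes)."""
    m = instance.shape[1] // 2
    ordered = instance[np.asarray(tour)]                      # nodes in visiting order
    closed = np.concatenate([ordered, ordered[:1]], axis=0)   # return to the start node
    segs = np.diff(closed, axis=0)                            # coordinate differences per step
    return [np.linalg.norm(segs[:, 2 * i:2 * i + 2], axis=1).sum() for i in range(m)]

# The two tour lengths of a random bi-objective MOTSP20 instance under a random tour.
costs = motsp_costs(np.random.rand(20, 4), np.random.permutation(20))
```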
C.2 MOCVRP

The vehicle routing problem (VRP) is a classical generalization of the TSP, which has been studied for several decades. This work studies the capacitated vehicle routing problem (CVRP). In this problem, in addition to its location, each node (city) has a demand $\delta_i$ that needs to be satisfied. There is an extra depot node and a vehicle with a fixed capacity $D > \delta_i, \forall i$ to handle all the demands. The vehicle always starts from the depot node, visits different cities to satisfy multiple demands with $\sum \delta_i \leq D$, and then returns to the depot node. A solution to this problem is a set of routes that satisfies the demands of all cities. In the multiobjective problem, we consider two objectives to optimize. The first one is the total tour length, as in the single-objective CVRP, and the other one is the tour length of the longest route (which is also called the makespan in scheduling theory). This problem has been studied in the MOCO community (Lacomme et al., 2006).

Problem Instance. Similar to the TSP problem, the locations of the n nodes are uniformly sampled from the unit square. For the demands, similar to the previous work on the single-objective counterpart (Kool et al., 2019; Kwon et al., 2020), we randomly sample discrete $\delta_i$ from the set $\{1, \cdots, 9\}$. For problems with size n = 20, 50, 100, we set the capacities to $D_{20} = 30$, $D_{50} = 40$ and $D_{100} = 50$, respectively. Without loss of generality, we normalize the demands $\hat{\delta}_i = \delta_i / D$ and the capacity $\hat{D} = D / D = 1$ as in the previous work (Kool et al., 2019; Kwon et al., 2020). Split delivery is not allowed in this problem.

Model Details. In the MOCVRP, the depot node has a 2-dimensional location vector, and the other nodes all have 3-dimensional vectors to store their locations and demands. We use different parameter matrices to project the nodes into input embeddings with the same dimension $d_h = 128$. For node selection, the model records the current capacity of the vehicle and the remaining demands of all nodes. If a node has already been visited or has a demand larger than the vehicle's current capacity, it is masked as unavailable for the vehicle to visit. If no node is available to visit, the vehicle goes back to the depot. Once all nodes have zero demand, the node selection is finished and we have a valid solution to the problem.

C.3 MOKP

The knapsack problem (KP) is also a widely studied combinatorial optimization problem. In this work, we consider the 0-1 multiobjective knapsack problem (MOKP) with m objectives and n items:

$\max f(x) = \max\big(f_1(x), f_2(x), \cdots, f_m(x)\big), \quad \text{where } f_i(x) = \sum_{j=1}^{n} v^i_j x_j, \quad \text{subject to } \sum_{j=1}^{n} w_j x_j \leq W, \ x_j \in \{0, 1\}, \quad (16)$

where each item has a weight $w_j$ and m different values $\{v^i_j, i = 1, \cdots, m\}$. The problem (i.e., the knapsack) has a maximum weight capacity W, and the goal is to select a set of items within the weight capacity to maximize the sum of values for each objective. To make this problem nontrivial, we further assume that all values $v^i_j, \forall i, j$, all weights $w_j, \forall j$, and the total capacity are non-negative real values. The total weight of all items is larger than the capacity ($\sum_j w_j > W$), while each single weight is smaller than the capacity ($w_j < W, \forall j = 1, \cdots, n$). The single-objective knapsack problem is NP-hard, and so is the MOKP (Ehrgott & Gandibleux, 2003).

Problem Instance. We randomly generate the values and the weight for each item uniformly in [0, 1]. We consider problems with n = 50, 100, 200 items, and the weight capacities are $W_{50} = 12.5$, $W_{100} = W_{200} = 25$ as in the previous work (Bello et al., 2017; Kwon et al., 2020).
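For reference, a minimal NumPy sketch (names ours) that evaluates a 0-1 item selection under the MOKP formulation of Eq. (16):

```python
import numpy as np

def mokp_objectives(values, weights, x, W):
    """Evaluate a 0-1 MOKP selection x (Eq. 16).
    values: (n_items, m) item values, weights: (n_items,) item weights.
    Returns the m objective values, or None if the capacity W is violated."""
    x = np.asarray(x, dtype=bool)
    if weights[x].sum() > W:
        return None
    return values[x].sum(axis=0)

# A random bi-objective MOKP with 50 items and capacity 12.5, as in the experiments;
# the selection of the first 10 items is only an illustration.
rng = np.random.default_rng(0)
values, weights = rng.random((50, 2)), rng.random(50)
print(mokp_objectives(values, weights, x=np.arange(50) < 10, W=12.5))
```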
Model Details. In an m-objective MOKP, each item has m values and 1 weight, so the input dimension for the encoder is m + 1 (e.g., 3 for the bi-objective problems). For node selection at each step, we mask all already selected nodes and all nodes with weights larger than the remaining capacity as unavailable. We terminate the selection when all nodes are labeled as unavailable.

D ADDITIONAL EXPERIMENTAL RESULTS

D.1 HYPERVOLUME INDICATOR

For a MOCO problem, the result of each method is a set of approximate Pareto solutions. Since the ground truth Pareto set is usually unknown, we use the hypervolume (HV) indicator (Zitzler et al., 2007) to numerically compare the performance of each method. The hypervolume indicator is widely used in the MOCO community for algorithm comparison. The hypervolume of a set is the volume in the objective space it dominates. For a set $P \subset \mathbb{R}^m$ in the objective space, we can find a reference point $r^*$ that is dominated by all solutions in P, and define the hypervolume $\mathrm{HV}(P)$ as the volume of the set

$S = \{r \in \mathbb{R}^m \mid \exists y \in P \text{ such that } y \prec r \prec r^*\}, \quad (17)$

where $\mathrm{HV}(P) = \mathrm{Vol}(S)$. An illustrative example is shown in Figure 4. The grey area is the set S dominated by the solutions in the set $P = \{p_1, p_2, p_3, p_4\}$ with the reference point $r^*$. In this 2-dimensional case, the hypervolume $\mathrm{HV}(P)$ is the size of the grey area. The hypervolume indicator has two important advantages for measuring the quality of an approximate set with respect to Pareto optimality (Zitzler et al., 2007). First, if an approximate set A dominates another approximate set B, it will have a strictly better hypervolume $\mathrm{HV}(A) > \mathrm{HV}(B)$. In addition, if an approximate set C contains all Pareto optimal solutions, it is guaranteed to have the maximum hypervolume value. In comparisons, an approximate set is considered better if it has a larger hypervolume. With different objective scales, the hypervolume value varies significantly among different problems. We report the normalized hypervolume values $\hat{\mathcal{H}}(P) = \mathrm{HV}(P) / \prod_{i=1}^{m} r^*_i$ for all methods, and also their performance gaps to our method. For each experiment, all methods share the same reference point $r^*$, which contains the largest value achieved for each objective. Since all problems we consider have positive objective values, we have $0 \leq \hat{\mathcal{H}}(P) \leq 1$ for all solution sets. The ground truth Pareto set $P^*$ usually has $\hat{\mathcal{H}}(P^*) < 1$, unless the zero vector $\mathbf{0} \in \mathbb{R}^m$ is feasible and in the Pareto set.

D.2 OUT-OF-DISTRIBUTION PROBLEM WITH EXACT PARETO FRONT

We conduct experiments on 6 two-objective MOTSP100 instances (L1-L6) from Florios & Mavrotas (2014), for which the exact Pareto fronts are available. In these problems, the objective functions have different ranges and the cities are not uniformly located, so they are out of our method's training distribution. The results can be found in Table 6. In addition to the hypervolume, we also report the Inverted Generational Distance (IGD) (Fonseca et al., 2006) to measure the average Euclidean distance between the set of approximate Pareto solutions and the exact Pareto front. A smaller IGD value means the approximate set is closer to the exact Pareto front. According to the results, our method, with instance augmentation and/or active search (10 min budget), achieves good performance on these out-of-distribution (OOD) instances with a 1%-1.5% hypervolume gap. The proposed method also significantly outperforms the weight-sum OR Tools baseline. There is still a gap to the strong weight-sum LKH baseline.
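As a concrete reference for how the hypervolume values and gaps reported in this appendix are computed (Section D.1), here is a minimal sketch of the exact 2D hypervolume of Eq. (17) for a minimization problem, together with the normalization by $\prod_i r^*_i$. The point set is a toy example, not data from the paper.

```python
import numpy as np

def hypervolume_2d(points, ref):
    """Exact hypervolume of a 2D minimization solution set w.r.t. reference point ref.
    Sweeps the points sorted by the first objective and sums the rectangular slabs
    they dominate, i.e. the area of the set S in Eq. (17)."""
    pts = sorted(points, key=lambda p: p[0])
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:
        if f2 < prev_f2:                      # dominated points contribute nothing
            hv += (ref[0] - f1) * (prev_f2 - f2)
            prev_f2 = f2
    return hv

pts = [(1.0, 4.0), (2.0, 3.0), (3.0, 1.0)]
ref = (5.0, 5.0)
hv = hypervolume_2d(pts, ref)
normalized_hv = hv / (ref[0] * ref[1])        # the normalized value reported in the paper
print(hv, normalized_hv)                      # 11.0 0.44
```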
As discussed in the paper, robust OOD generalization is an important research direction for learning-based solvers.

D.3 FLEXIBLE PREFERENCE-BASED APPROXIMATION

With our model, it is flexible to generate different numbers of solutions to approximate the Pareto front. We present an example on the three-objective TSP in Figure 5. We use the structured weight assignment approach from Das & Dennis (1998) to generate the sets of weights for different instances. This method can generate $n = \binom{m+p-1}{p}$ evenly distributed weights with an identical distance to their nearest neighbors on the unit simplex (i.e., $\sum_{i=1}^{m} \lambda_i = 1$ with $\lambda_i \geq 0, \forall i$), where m is the number of objectives and p is a parameter that controls the number of weights. For the three-objective TSP problems (m = 3), we assign p = 13, 44 and 140 to generate n = 105, 1035 and 10011 weights, respectively. We also show the corresponding generated solutions for MOTSP instances with 20, 50 and 100 cities. According to the results in Figure 5, our model can generate well-distributed solutions with a small number of preferences, and a dense approximation with more preferences. The ability to generate a dense approximation to the whole Pareto set also allows the decision maker to obtain arbitrary preferred solutions on the approximate front.

D.4 PREFERENCE-SOLUTION CONNECTION

We further analyze the connection between a preference and its corresponding solution on uniform and non-uniform Pareto fronts. Figure 6 shows the connections in our model with different numbers of preferences for the MOTSP100 instance. Since the two objectives (costs) in MOTSP have the same scale, this problem has a uniform connection between the preferences and the (approximate) Pareto front. By increasing the number of preferences, we obtain three generated Pareto front approximations, ranging from sparse to dense. We are more interested in MOCVRP, which has a non-uniform Pareto front. In this problem, we consider two different objectives to optimize, namely the total tour length (objective 1) and the tour length of the longest route (objective 2). These two objectives have quite different scales, and the first objective is significantly larger than the second one. In Figure 7, we show the different connections for the MOCVRP100 instance. For AM-MOCO, we report the connections for all 101 models. For our proposed model, we report the connections with different numbers of uniform preferences. In this problem, 101 models or our model with 101 uniform preferences are not enough to generate a dense approximate Pareto front. The obtained solutions are biased toward the area where objective 1 has a much better relative performance. By increasing the number of preferences, our proposed method can generate more solutions that have relatively better performance for objective 2, which leads to a better Pareto front approximation with a higher hypervolume. In this work, we always use a straightforward uniform sampling method to select the preferences. It would be interesting to design a learning-based approach to select the preferences for a given problem instance. Preference adjustment and model adaption that are aware of the shape of the Pareto front are also worth investigating. We leave them to future work. On the MOCVRP instance, we also find that the 101-model AM-MOCO has worse performance than our method with 101 preferences. The reason could be the mismatch between the uniform transfer training and the non-uniform Pareto front.
Increasing the number of fine-tuning steps for each model might fix this issue, but it would lead to an even larger computational overhead, given that the current training already requires 700 epochs. The fixed preference assignment is another issue for AM-MOCO. It requires a fixed set of preferences, one for each model, at the start of the training procedure, when the decision makers might have no knowledge of the problem. Once the training procedure is done, it does not allow any preference adjustment without retraining the models.

D.5 CONNECTION BETWEEN PREFERENCES AND SOLUTIONS

In the previous sections, we use the weighted Tchebycheff aggregation to connect a preference to its corresponding solution for two-objective optimization problems:

$g^{\mathrm{tch}}(x|\lambda) = \max_{1 \leq i \leq m} \{\lambda_i |f_i(x) - z^*_i|\}, \quad (18)$

where $z^*_i < \min_{x \in \mathcal{X}} f_i(x)$ is an ideal value for $f_i(x)$. There are also many other aggregation functions we can use to build the connection. For example, a modified version of the weighted Tchebycheff aggregation can be defined as:

$g^{\mathrm{mtch}}(x|\lambda) = \max_{1 \leq i \leq m} \Big\{\frac{1}{\lambda_i} |f_i(x) - z^*_i|\Big\}, \quad (19)$

where the only difference is the use of the weights $1/\lambda_i$. The penalty-based boundary intersection (PBI) is another widely used aggregation function for decomposition-based multiobjective optimization (Zhang & Li, 2007):

$g^{\mathrm{pbi}}(x|\lambda) = d_1 + \theta d_2, \quad d_1 = |(F(x) - z^*)^T \lambda| / \|\lambda\|, \quad d_2 = \Big\|F(x) - z^* - d_1 \frac{\lambda}{\|\lambda\|}\Big\|, \quad (20)$

where $\theta$ is the penalty parameter, and $F(x) = (f_1(x), \ldots, f_m(x))$ and $z^* = (z^*_1, \ldots, z^*_m)$ are the objective vector and the ideal vector, respectively. An inverted version of the PBI (IPBI) aggregation function (Sato, 2014) can be defined as:

$g^{\mathrm{ipbi}}(x|\lambda) = -d_1 + \theta d_2, \quad d_1 = |(z^N - F(x))^T \lambda| / \|\lambda\|, \quad d_2 = \Big\|z^N - F(x) - d_1 \frac{\lambda}{\|\lambda\|}\Big\|, \quad (21)$

where $z^N$ is the nadir vector that contains each objective's worst value among all Pareto solutions. For a two-objective optimization problem, when we can find a dense set of corresponding solutions to cover the Pareto front for each aggregation function, their performances could be similar to each other. However, different aggregation functions can have quite different performances on problems with three or more objective functions (called many-objective optimization problems). The performance heavily depends on the shape of the Pareto front (Ishibuchi et al., 2016), especially with a limited number of approximate solutions. We compare the performance of our proposed method with different aggregation functions on MOTSP50 with 105, 1035 and 10011 preferences, respectively, in Fig. 8. According to the results, the IPBI method generates the most uniformly distributed solutions for the MOTSP problem, which has an inverted triangular Pareto front whose shape is similar to the weight vector distribution (e.g., see Fig. 5). This observation is consistent with the findings and analysis in Ishibuchi et al. (2016). According to these results, we use the Tchebycheff aggregation for all two-objective optimization problems and the IPBI aggregation for all problems with more than two objective functions in this work. Since the shape of the Pareto front tends to be irregular in real-world applications (Ishibuchi et al., 2019), how to properly choose the aggregation function and assign the preference distribution could be an important direction for future work.
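For reference, the following is a minimal NumPy sketch of the four aggregation functions in Eqs. (18)-(21). The function names are ours, and the penalty value θ = 5.0 is an assumed common default from the MOEA/D literature; the paper does not state which value it uses.

```python
import numpy as np

def tch(f, lam, z_star):
    """Weighted-Tchebycheff aggregation, Eq. (18)."""
    return np.max(lam * np.abs(f - z_star))

def mtch(f, lam, z_star):
    """Modified Tchebycheff aggregation, Eq. (19)."""
    return np.max(np.abs(f - z_star) / lam)

def pbi(f, lam, z_star, theta=5.0):
    """Penalty-based boundary intersection, Eq. (20)."""
    d1 = np.abs(np.dot(f - z_star, lam)) / np.linalg.norm(lam)
    d2 = np.linalg.norm(f - z_star - d1 * lam / np.linalg.norm(lam))
    return d1 + theta * d2

def ipbi(f, lam, z_nadir, theta=5.0):
    """Inverted PBI, Eq. (21); uses the nadir vector instead of the ideal one."""
    d1 = np.abs(np.dot(z_nadir - f, lam)) / np.linalg.norm(lam)
    d2 = np.linalg.norm(z_nadir - f - d1 * lam / np.linalg.norm(lam))
    return -d1 + theta * d2

f = np.array([6.0, 9.0])            # objective vector of a candidate tour (toy values)
lam = np.array([0.5, 0.5])          # preference vector
print(tch(f, lam, z_star=np.zeros(2)), ipbi(f, lam, z_nadir=np.array([20.0, 20.0])))
```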
D.6 THREE-OBJECTIVE MOTSP WITH ASYMMETRIC PARETO FRONT

In this subsection, we conduct experiments on three-objective MOTSP100 instances with asymmetric Pareto fronts. The definition of the irregular MOTSP instance is almost the same as in Section C.1, except that the coordinates for the three objectives are randomly sampled from $[0, 1]^2$, $[0, 0.5]^2$ and $[0, 0.1]^2$, respectively, rather than uniformly from $[0, 1]^6$. In this way, the objective values of the MOTSP instance have quite different scales, leading to an irregular Pareto front (the axes in Figure 9 are in different scales). A well-known drawback of the scalarization-based approach is that it cannot evenly explore an irregular Pareto front with a set of uniform weights, which can also be observed in Figure 9(a)-(d). Our proposed approach allows the user to generate arbitrary trade-off Pareto solutions at inference time, so they can directly generate a dense approximation and then select the preferred solutions as in Figure 9(d). This flexibility can partially address the uneven distribution issue caused by a (small) set of fixed weights in the traditional scalarization-based approach. If we know the approximate ranges of the different objectives in advance, we can first normalize them into [0, 1] to encourage a more symmetric Pareto front. Otherwise, at inference time, we can use a (prior-knowledge-based) biased and non-uniform weight assignment to generate uniformly distributed solutions. In Figure 9(e)-(h), we first multiply the three-dimensional weights by (1, 2, 10) and then normalize them back to $[0, 1]^3$, which leads to a set of non-uniform weights as shown in Figure 9(e). With this weight assignment, we obtain a set of more evenly distributed Pareto solutions as shown in Figure 9(f)-(h).

D.7 PREFERENCE-BASED INFERENCE

Even without any prior knowledge, our proposed approach allows the user to adaptively adjust the weights in real time to search for the most suitable solutions in their preferred region(s). Some examples of selected weights and their corresponding solutions are shown in Figure 10 for the symmetric Pareto front and in Figure 11 for the asymmetric Pareto front. If we have prior knowledge of the preference (e.g., the decision makers only care about a specific region of the Pareto front), we can modify the training preference distribution Λ accordingly to enhance the training efficiency. For problems with a truly irregular Pareto front, it is also possible to adaptively adjust the given weights to make them evenly explore the Pareto front during the learning/searching process. One potential direction could be to consider the connection between scalarization and hypervolume maximization as in Zhang & Golovin (2020). We believe this could be an important research topic for the learning-based scalarization approach in future work.

D.8 PROBLEMS WITH MORE OBJECTIVES

Finally, we test the performance of our proposed method on 10-objective knapsack problems. We train a new model for the 10-objective MOKP with 100 items using uniform 10-dimensional preferences. The obtained value path plots on the 10-objective MOKP100 are shown in Figure 12. For problems with more objectives, we need a large number of solutions to approximate the Pareto set. Training a large number of neural network models would incur a huge computational and storage overhead, which is not desirable in practice. Therefore, we do not compare with the AM-MOCO and DRL-MOA methods on this problem. For inference, to approximate the Pareto set, we use a set of 715 fixed preferences following the weight assignment approach from Das & Dennis (1998) (with m = 10, p = 4, hence $n = \binom{10+4-1}{4} = 715$).
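A small sketch of the Das & Dennis (1998) structured weight generation used in Sections D.3 and D.8, based on the standard stars-and-bars enumeration (function name ours); it reproduces the 105 weights for (m, p) = (3, 13) and the 715 preferences for (m, p) = (10, 4).

```python
from itertools import combinations
from math import comb

def das_dennis_weights(m, p):
    """Generate all weight vectors (k_1/p, ..., k_m/p) with non-negative integers
    k_i summing to p; there are C(m+p-1, p) of them, evenly spread on the simplex."""
    weights = []
    # Choose m-1 'bar' positions among p+m-1 slots; the gap sizes are the k_i.
    for bars in combinations(range(p + m - 1), m - 1):
        ks, prev = [], -1
        for b in list(bars) + [p + m - 1]:
            ks.append(b - prev - 1)
            prev = b
        weights.append([k / p for k in ks])
    assert len(weights) == comb(m + p - 1, m - 1)
    return weights

print(len(das_dennis_weights(3, 13)), len(das_dennis_weights(10, 4)))   # 105 715
```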
The model generates a different trade-off solution for each preference, so there are 715 different value paths (lines) on each plot. In MOKP, we want to maximize the values of all objectives under the capacity limitation, so a set of good approximate solutions should have relatively high overall values. According to the results, our proposed method has the best performance. We also test the performance of our method on a larger problem with 500 items. The results shown in Figure 13 confirm that our trained model generalizes well to problems of a larger size.
1. What is the focus and contribution of the paper on multi-objective combinatorial optimization? 2. What are the strengths of the proposed approach, particularly in terms of its ability to provide flexible trade-off solutions? 3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? 4. Are there any minor issues or suggestions for improvement regarding the paper's presentation or methodology?
Summary Of The Paper Review
Summary Of The Paper The paper proposes a novel preference-conditioned method to approximate the whole Pareto front for Multi-Objective Combinatorial Optimization (MOCO) problems with a single model. According to the authors, this method provides extra flexibility for decision-makers to directly obtain arbitrary trade-off solutions without any extra search, which is a more principled way to deal with MOCO. Review The paper is well-written and presents an interesting approach to solving MOCO problems. The structure is appropriate, there is a very good review of related works, clear problem formulation, a good description of the method, experiments, and results. There are also extensive supplementary materials. The introduced method is novel, might be significant, and the quality of this article seems to be on-par with other papers applying ML techniques to solve TSP published at top-tier conferences (which are also cited in this paper). I don't see significant weaknesses. There are some minor typos (e.g., "a exceptionally" -> "an exceptionally", p. 4), so I recommend revising the paper before the final publication, but from the methodological point of view, the paper seems to be good enough to be accepted.
ICLR
Title Pareto Set Learning for Neural Multi-Objective Combinatorial Optimization Abstract Multiobjective combinatorial optimization (MOCO) problems can be found in many real-world applications. However, exactly solving these problems would be very challenging, particularly when they are NP-hard. Many handcrafted heuristic methods have been proposed to tackle different MOCO problems over the past decades. In this work, we generalize the idea of neural combinatorial optimization, and develop a learning-based approach to approximate the whole Pareto set for a given MOCO problem without further search procedure. We propose a single preference-conditioned model to directly generate approximate Pareto solutions for any trade-off preference, and design an efficient multiobjective reinforcement learning algorithm to train this model. Our proposed method can be treated as a learning-based extension for the widely-used decomposition-based multiobjective evolutionary algorithm (MOEA/D). It uses a single model to accommodate all the possible preferences, whereas other methods use a finite number of solutions to approximate the Pareto set. Experimental results show that our proposed method significantly outperforms some other methods on the multiobjective traveling salesman problem, multiobjective vehicle routing problem, and multiobjective knapsack problem in terms of solution quality, speed, and model efficiency. 1 INTRODUCTION Many real-world applications can be modeled as multiobjective combinatorial optimization (MOCO) problems (Ehrgott & Gandibleux, 2000). Examples include the multiobjective traveling salesman problem (MOTSP) (Lust & Teghem, 2010a), the multiobjective vehicle routing problem (MOVRP) (Jozefowiez et al., 2008) and the multiobjective knapsack problem (MOKP) (Bazgan et al., 2009). These problems have multiple objectives to optimize, and no single solution can optimize all the objectives at the same time. Instead, there is a set of Pareto optimal solutions with different trade-offs among the objectives. It is very challenging to find all the exact Pareto optimal solutions for a MOCO problem. Actually, finding one single Pareto optimal solution can be NP-hard for many problems (Ehrgott & Gandibleux, 2000), and the number of Pareto solutions could be exponentially large with regard to the problem size (Ehrgott, 2005; Herzel et al., 2021). The decision-maker’s preference among different objectives is usually unknown in advance, making it very difficult to reduce the problem into a single-objective one. Over the past several decades, many methods have been developed to find an approximate Pareto set for different MOCO problems within a reasonable computational time. These methods often need carefully handcrafted and specialized heuristics for each problem. It can be very labor-intensive in practice. In many real-world applications, practitioners need to solve many different instances for the same particular problem, where the instances can be easily obtained or generated (Bengio et al., 2020). It is desirable to learn the patterns behind these problem instances explicitly or implicitly to design efficient algorithms (Cappart et al., 2021a). Machine learning techniques can be naturally used for this purpose. Some learning-based methods have been recently proposed for solving single-objective combinatorial optimization problems (Bengio et al., 2020; Vesselinova et al., 2020; Mazyavkina et al., 2021; Cappart et al., 2021a). 
In this work, we extend the learning-based method to solve MOCO problems in a principled way as shown in Figure 1. Our main contributions include: • We propose a novel neural multiobjective combinatorial optimization method to approximate the whole Pareto set via a single preference-conditioned model. It allows decision makers to obtain any preferred trade-off solution without any search effort. • We develop an efficient end-to-end reinforcement learning algorithm to train the single model for all different preferences simultaneously, and a simple yet powerful active adaption method to handle out-of-distribution problem instances. • We conduct comprehensive experiments on MOTSP, MOVR and MOKP of different settings. The results show that our proposed method can successfully approximate the Pareto sets for different problems in an efficient way. It also significantly outperforms other methods in terms of solution quality, speed, and model efficiency. 2 BACKGROUND AND RELATED WORK Multiobjective Combinatorial Optimization (MOCO). MOCO has been attracting growing research efforts from different communities over the past several decades (Sawaragi et al., 1985; Wallenius et al., 2008; Herzel et al., 2021). There are two main approaches to tackle the MOCO problems: the exact methods and the approximation methods (Ehrgott, 2005). Exact methods could be prohibitively costly when, as it often happens, the MOCO problem is NP-hard and the problem size is very large (Florios & Mavrotas, 2014). For this reason, many heuristics (Jaszkiewicz, 2002; Zhang & Li, 2007; Ehrgott & Gandibleux, 2008) and approximation methods (Papadimitriou & Yannakakis, 2000; Herzel et al., 2021) have been developed to find a manageable number of approximated Pareto solutions with a reasonable computational budget. However, these methods usually depend on carefully handcrafted designs for each specific problem (Ehrgott & Gandibleux, 2000), and the required effort is often nontrivial in real-world applications. Machine Learning for Combinatorial Optimization. As summarized in Bengio et al. (2020), there are three main learning-based approaches for combinatorial optimization: learning to configure algorithms (Kruber et al., 2017; Bonami et al., 2018), learning alongside the algorithms (Lodi & Zarpellon, 2017; Gasse et al., 2019; Chen & Tian, 2019), and learning to directly predict the solutions (Nowak et al., 2018; Emami & Ranka, 2018; Larsen et al., 2018). Neural combinatorial optimization (NCO) belongs to the last category where the model directly produces a good solution for a given problem instance. Vinyals et al. (2015) proposed a pointer network to sequentially construct a solution for the TSP problem. Bello et al. (2017) made a critical improvement to use reinforcement learning to train the model, eliminating the impractical optimal solutions collection for NP-hard problems. Some other improvements on model structure and training procedure have been proposed in the past few years (Nazari et al., 2018; Deudon et al., 2018; Kool et al., 2019; Veličković & Blundell, 2021), especially with graph neural networks (GNNs) (Dai et al., 2017; Li et al., 2018; Joshi et al., 2019; Dwivedi et al., 2020; Drori et al., 2020). 
Recent efforts have been made on more efficient learning strategies (Kwon et al., 2020; Karalias & Loukas, 2020; Lisicki et al., 2020; Geisler et al., 2022), learning-based graph search (Cappart et al., 2021b; Kool et al., 2021; Fu et al., 2021; Xin et al., 2021; Hudson et al., 2022), and iterative improvement methods (Wu et al., 2021; Ma et al., 2021; Li et al., 2021). Neural MOCO. Most of the existing learning-based methods are for single-objective combinatorial problems. Recently, a few attempts have been made to solve MOCO problems (Li et al., 2020; Wu et al., 2020; Zhang et al., 2021a;b). These methods adopt the MOEA/D framework (Zhang & Li, 2007) to decompose a MOCO problem into a number of single-objective subproblems, and then build a set of models to solve each subproblem separately. However, since the number of Pareto solutions would be exponentially large (Ehrgott, 2005), the required number of models would be huge for finding the whole Pareto set. In this work, we propose a single preference-conditioned model for solving MOCO problems, with which the decision makers can easily obtain any trade-off solutions. The proposed single neural MOCO solver could be much easier to use in a real-world system (Veličković & Blundell, 2021), than those using a large set of different models. 3 PROBLEM FORMULATION 3.1 MULTIOBJECTIVE COMBINATORIAL OPTIMIZATION A multiobjective combinatorial optimization (MOCO) problem can be defined as follows: min x∈X F (x) = (f1(x), f2(x), . . . , fm(x)), (1) where X is a discrete search space, and F (x) = (f1(x), . . . , fm(x)) is an m-objective vector. Since the individual objectives conflict each other, no single solution can optimize all of them at the same time. Therefore, practitioners are interested in Pareto optimal solutions, defined as follows. Definition 1 (Pareto Dominance). Let xa, xb ∈ X , xa is said to dominate xb (xa ≺ xb) if and only if fi(xa) ≤ fi(xb),∀i ∈ {1, ...,m} and fj(xa) < fj(xb),∃j ∈ {1, ...,m}. Definition 2 (Pareto Optimality). A solution x∗ ∈ X is a Pareto optimal solution if there does not exist x̂ ∈ X such that x̂ ≺ x∗. The set of all Pareto optimal solutions is called the Pareto set, and the image of the Pareto set in the objective space is called the Pareto front. Each Pareto solution represents an optimal trade-off among the objectives, and it is impossible to further improve one of the objectives without deteriorating any other objectives. 3.2 DECOMPOSITION AND PREFERENCE-BASED SCALARIZATION Decomposition is a mainstream strategy for solving multiobjective optimization problem (Zhang & Li, 2007). It decomposes a multiobjective problem into a number of subproblems, each of which can be a single objective or multiobjective optimization problem. MOEA/D (Zhang & Li, 2007) and its variants (Trivedi et al., 2016) solve these subproblems in a collaborative manner and generate a finite set of Pareto solutions to approximate the Pareto front. The most widely used way for constructing a single objective subproblem is the preference-based scalarization (Ehrgott, 2005; Miettinen, 2012). For an m-objective optimization problem, a preference vector for the objective functions can be defined as λ ∈ Rm that satisfies λi ≥ 0 and ∑m i=1 λi = 1. Weighted-Sum Aggregation is the simplest approach. It defines the aggregation function to minimize in the subproblem associated with λ as gws(x|λ) = m∑ i=1 λifi(x). (2) However, this approach can only find solutions on the convex hull of the Pareto front (Ehrgott, 2005). 
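To make the Pareto dominance relation of Definitions 1 and 2 concrete, the following is a small NumPy sketch (function name ours) that filters a set of objective vectors down to its non-dominated subset:

```python
import numpy as np

def pareto_filter(costs):
    """Keep only the non-dominated rows of a (n_solutions, m) cost array
    under the minimization dominance relation of Definitions 1-2."""
    costs = np.asarray(costs)
    keep = []
    for i, c in enumerate(costs):
        dominated = np.any(np.all(costs <= c, axis=1) & np.any(costs < c, axis=1))
        if not dominated:
            keep.append(i)
    return costs[keep]

print(pareto_filter([[1.0, 4.0], [2.0, 3.0], [2.5, 3.5], [3.0, 1.0]]))
# keeps (1, 4), (2, 3) and (3, 1); (2.5, 3.5) is dominated by (2, 3)
```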
Weighted-Tchebycheff (Weighted-TCH) Aggregation is an alternative approach to minimize: gtch(x|λ) = max 1≤i≤m {λi|fi(x)− z∗i |}, (3) where z∗i < minx∈X fi(x) is an ideal value for fi(x). Any Pareto optimal solution could be an optimal solution of problem (3) with a specific (but unknown) preference λ (Choo & Atkins, 1983). 3.3 CURRENT DRAWBACKS AND OUR METHOD Drawbacks of Existing Methods. For many MOCO problems, the size of the Pareto set would be exponentially large with respect to the input size (e.g., nodes in MOTSP). It is computationally impractical for existing methods to find the whole Pareto set (Herzel et al., 2021). For this reason, all of the existing heuristic-based and learning-based methods are to find a small subset of approximate Pareto solutions. Decision makers can only select solutions from this small set, which often does not contain their preferred solutions. In addition, scalarization may also produce a complicated single objective subproblem. For example, the Tchebycheff scalarized subproblem of MOTSP is not a classic TSP, and thus cannot be solved by the highly specialized TSP solvers such as LKH (Helsgaun, 2000) or Concorde (Applegate et al., 2007). Our Method. Instead of finding a set of finite solutions, we propose a novel way to approximate the whole Pareto set using a single model. With our proposed model, decision makers can easily obtain any solution from the approximate Pareto set to satisfy their preferred trade-offs in real time as shown in Figure 2. This is a clear advantage to support interactive decision making. In addition, our proposed reinforcement learning based method can use a scalarization method to combine multiobjective rewards, and does not need to consider the problem-specific condition. In this paper, we mainly consider learning the whole Pareto front. It is possible to incorporate decision-maker’s preferences on specific regions for model building and inference as discussed in Appendix D.6. We believe our proposed method is a new principled way to solve multiobjective combinatorial optimization problems. 4 THE PROPOSED MODEL: PREFERENCE-CONDITIONED NEURAL MOCO 4.1 PREFERENCE-CONDITIONED SOLUTION CONSTRUCTION Decomposition and scalarization link preferences to their corresponding Pareto solutions. This work builds a preference-conditioned model to accommodate all the preferences. We use the MOTSP as an example to explain our model design. In an MOTSP instance s, a fully connected graph of n nodes (cities) with m distance metrics on each edge is given. A feasible solution is a tour that visits each city exactly once and returns to the starting city. The i-th objective to minimize is the tour length (total cost) based on the i-th distance metric. A tour can be represented as π = (π1, · · · , πt, · · · , πn), πt ∈ {1, · · · , n}, a permutation of all the nodes defining the order in which n cities is visited. Our model defines a preference-conditioned stochastic policy pθ(λ)(π|s) parameterized by θ(λ) to construct a valid solution in sequence: pθ(λ)(π|s) = ∏n t=1 pθ(λ)(πt|s,π1:t−1). (4) The goal is to learn an optimal preference-conditioned policy pθ(λ)(π|s) to construct tours with the lowest scalarized costs for each preference λ. 4.2 THE PROPOSED MODEL We propose to use an Attention Model (AM) (Kool et al., 2019) as our basic encoder-decoder model as shown in Figure 3. 
For the MOCO problems considered in this work, a preference-agnostic encoder is capable to transfer problem instances into embeddings (e.g., embedding for all cities) used in the preference-conditioned decoder. In our model, only the decoder’s parameters θdecoder(λ) are conditioned on the preference λ: θ(λ) = [θencoder,θdecoder(λ)]. (5) Preference-agnostic Encoder. The encoder takes a problem instance s (e.g., an MOTSP instance with n cities) as its input, and outputs a set of d-dimensional node embeddings {h1, · · · ,hn} for each city. For a given instance, the same embeddings can be used for different preferences. Hence we only need a single forward pass for the dense encoder. We use the attention-based encoder as in Kool et al. (2019) for all preferences. Preference-based Attention Decoder. The decoder has the same structure as in the attention-based model (Kool et al., 2019), but with parameters θdecoder(λ) = [WQ(λ),WK(λ),WV (λ),WMHA(λ)] conditioned on the preference λ. It takes the nodes embeddings for all cities as input, and sequentially selects the next node πt with probability pθ(λ)(πt|s,π1:t−1). At time step t, the decoder first constructs a context embedding ĥ(C) = [hπ1 ,hπt−1 ]WQ(λ) from the first selected node hπ1 , and the last selected node hπt−1 . The matrix WQ(λ) ∈ R2d×d projects the concatenated 2d-dimensional vector to a d-dimensional vector. Then we further aggregate the context embedding via a Multi-Head Attention (MHA) (Vaswani et al., 2017) with the embeddings for all cities {h1, · · · ,hn}: h(C) = MHA(Q = ĥ(C),K = {h1, · · · ,hn}WK(λ), V = {h1, · · · ,hn}WV (λ))WMHA(λ), (6) where Q,K, V are the query, key and value for MHA, respectively. WMHA(λ) represents the MHA parameters. The context embedding h(C) contains all information for the instance and the current partial tour at step t. We can calculate the logit for selecting each city with its embedding hj : logitj = { C · tanh(h T (C)hj√ d ) if j ̸= πt′ ∀t′ < t, −∞ otherwise. (7) All already visited cities are masked with −∞ and will not be selected as the next city. The logits of the rest cities are clipped into [−C,C] (C = 10) as in the AM model (Kool et al., 2019). The probability for choosing the j-th city at time step t can be calculated as pθ(λ)(πt = j|s,π1:t−1) = elogitj/ ∑ k e logitk . With this probability, the decoder can construct a feasible tour. One remaining designing issue is how to generate the preference-conditioned parameters θdecoder(λ). Multiplicative interactions (Jayakumar et al., 2020) and hypernetwork (Schmidhuber, 1992; Ha et al., 2017) provide a powerful and efficient way for conditional computation, which is widely used for transfer learning (von Oswald et al., 2020; Ehret et al., 2021; Lin et al., 2020; Navon et al., 2021). We use a simple MLP hypernetwork θdecoder(λ) = MLP(λ|ψ) to generate the decoder parameters conditioned on the preference. The details of our proposed model can be found in Appendix B. 
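A minimal PyTorch sketch of such an MLP hypernetwork (class and variable names are ours; the actual model reshapes the flat output into the decoder matrices $W_Q(\lambda), W_K(\lambda), W_V(\lambda), W_{MHA}(\lambda)$, which we omit here):

```python
import torch
import torch.nn as nn

class DecoderHypernet(nn.Module):
    """Maps a preference vector lambda to flat decoder parameters:
    e(lambda) = MLP(lambda | psi), theta_decoder = W e(lambda) + b."""
    def __init__(self, n_objs, n_decoder_params, hidden_dim=128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(n_objs, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim))
        # Linear projection from the hidden embedding to the decoder parameters.
        self.proj = nn.Linear(hidden_dim, n_decoder_params)

    def forward(self, pref):                    # pref: (n_objs,), non-negative, sums to 1
        return self.proj(self.mlp(pref))        # flat vector of decoder weights

# Example: decoder parameters for the preference (0.3, 0.7) on a bi-objective problem;
# 10,000 is a placeholder for the true number of decoder parameters.
hypernet = DecoderHypernet(n_objs=2, n_decoder_params=10_000)
theta_decoder = hypernet(torch.tensor([0.3, 0.7]))
```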
Algorithm 1 Neural MOCO Training 1: Input: preference distribution Λ, instances distribution S, number of training steps T , number of preferences per iteration K, batch size B, number of tours N 2: Initialize the model parameters θ 3: for t = 1 to T do 4: λk ∼ SamplePreference(Λ) ∀k ∈ {1, · · · ,K} 5: si ∼ SampleInstance(S) ∀i ∈ {1, · · · , B} 6: πjki ∼ SampleTour(pθ(λk)(·|si)) ∀k, i ∀j ∈ {1, · · · , N} 7: b(si|λk)← 1N ∑N j=1 L(π j ki|λk, si) ∀k ∈ {1, · · · ,K} ∀i ∈ {1, · · · , B} 8: ∇J (θ)← 1KBN ∑K k=1 ∑B i=1 ∑N j=1[(L(π j ki|λk, si)− b(si|λk))∇θ(λk) log pθ(λk)(π j ki|si)] 9: θ ← ADAM(θ,∇J (θ)) 10: end for 11: Output: The model parameter θ Instance Augmentation for MOCO. Our proposed model only has a small extra computational and memory overhead to the original single-objective AM solver. We keep our model as simple as possible, making it easy for our approach to use other models and other improvements developed for single-objective NCO. These properties are crucially important for generalizing the NCO to multiobjective problems. In this work, we simply extend the instance augmentation method (Kwon et al., 2020) to MOCO. The details can be found in Appendix B.1. 5 PREFERENCE-CONDITIONED MULTIOBJECTIVE POLICY OPTIMIZATION 5.1 COST FUNCTION Our proposed node selection strategy guarantees that the model can always generate feasible solutions. In this section, we develop an efficient multiobjective policy optimization method to train the model for all the preferences simultaneously. For an MOTSP problem, the objective functions are a vector of m different costs (i.e. lengths) for a tour L(π) = [L1(π), · · · , Lm(π)]. We can define a weighted-Tchebycheff scalarized cost for each preference λ: L(π|λ) = max 1≤i≤m {λi|Li(π)− (z∗i − ε)|}, (8) where z∗i is an ideal cost for the i-th objective. For a given instance s, our goal is to minimize the expected cost for all preferences: J (θ|s) = Eλ∼Λ,π∼pθ(λ)(·|s)L(π|λ), (9) where Λ is the uniform distribution over all valid preferences. To train the model, we repeatedly sample different instances s ∼ S at each iteration. We define the training loss as J (θ) = Es∼SJ (θ|s). 5.2 MULTIOBJECTIVE REINFORCE For a given instance s and a specific preference λ, we use the REINFORCE (Williams, 1992) to estimate the gradient for the preference-conditioned scalar cost: ∇J (θ|λ, s) = Eπ∼pθ(λ)(·|s)[(L(π|λ, s)− b(s|λ))∇θ(λ) log pθ(λ)(π|s)], (10) where b(s|λ) is the baseline of expected cost to reduce the gradient variance. This gradient can be estimated by Monte Carlo sampling. At each update step, we randomly sample K preference {λ1, · · · , λK} ∼ Λ, B instances {s1, · · · , sB} ∼ S, and N different tour {π1i , · · · ,πNi } ∼ pθ(λk)(·|si) for each λk-si combination. The approximated gradient is: ∇J (θ) ≈ 1 KBN K∑ k=1 B∑ i=1 N∑ j=1 [(L(πji |λk, si)− b(si|λk))∇θ(λk) log pθ(λk)(π j i |si)]. (11) We use the shared baseline bshared(si|λk) = 1N ∑N j=1 L(π j ki|λk, si) over N sampled tours for each λk − si combination. The starting node for each tour πjki is chosen in random to force diverse rollouts as proposed in (Kwon et al., 2020). The algorithm is shown in Algorithm 1. 5.3 ACTIVE ADAPTION We also propose a simple yet powerful active adaption approach to further adjust the whole model to approximate the Pareto front for a given test instance in Appendix B.3. The proposed method does not depend on specific instance distribution S, and is suitable for out-of-distribution adaption. 6 EXPERIMENTS Problems and Model Setting. 
We consider MOTSP (Lust & Teghem, 2010a), MOCVRP (Lacomme et al., 2006) and MOKP (Bazgan et al., 2009) in our experimental studies, and use the same model settings for all problems with different task-specific input sizes and mask methods. The main policy model encoder is the Attention Model (Kool et al., 2019) and the hypernetwork is an MLP. We randomly generate 100, 000 problem instances on the fly for each epoch, and train the model for 200 epochs. The optimizer is ADAM with learning rate η = 10−4 and weight decay 10−6. We train our models on a single RTX 2080-Ti GPU, and it costs about 10 minutes for an epoch on MOTSP100. We give detailed model settings, problem formulations, and more experimental results in Appendix BCD. The source code can be found in https://github.com/Xi-L/PMOCO. Baseline. We call our proposed preference-conditioned multiobjective combinatorial optimization as P-MOCO. We compare it with three widely-used evolutionary algorithm frameworks for MOCO: MOGLS (Jaszkiewicz, 2002) is a multiobjective genetic local search algorithm, NSGAII (Deb et al., 2002) is a Pareto dominance-based multiobjective genetic algorithm, and MOEA/D (Zhang & Li, 2007) is a decomposition-based multiobjective evolutionary algorithm. All these algorithm frameworks need problem-specific heuristics to generate and search feasible solutions for different problems. We also compare P-MOCO with two other learning-based methods: DRL-MOA (Li et al., 2020) decomposes a MOCO with different preferences and builds a Pointer Network (Vinyals et al., 2015; Bello et al., 2017) to solve each subproblem, and AM-MOCO is a multi-models variant of our proposed model, which builds Attention Model (Kool et al., 2019) for each subproblem. The Weight-Sum scalarization of MOTSP and MOKP are their respective single-objective counterpart. Therefore, we also compare our method with the approach that uses some state-of-the-art singleobjective solvers for each weight-sum subproblem. Model Information for the learning-based methods is shown in Table 1. Our model supports flexible preference assignment and only has 1.1% total parameters to the multi-model counterpart. Inference and Metrics. We report the results and run time for solving 200 random test instances for each problem, with normally 101 to 105 different trade-offed solutions, and up to 10, 011 solutions for our proposed method. In most cases, we report our model’s zero-shot generalization performance without any search and fine-tune. We use the hypervolume indicator (Zitzler et al., 2003) to measure the performance for each method. For a set P ⊂ Rm in the objective space, we can find a reference point r∗ that dominated by all solutions in P , and define the hypervolume HV(P ) as volume for: S = {r ∈ Rm | ∃ y ∈ P such that y ≺ r ≺ r∗}, (12) where HV(P ) = Vol(S). In general, the larger the hypervolume, the better the solution set tends to be. The ground truth Pareto set always has the largest hypervolume. We report the normalized hypervolume values in [0, 1] with respect to the same r∗ for all the methods, and also the ratios of hypervolume difference to our method. A Wilcoxon rank-sum test with a significance level 1% is conducted to compare the results for each experiment. More details can be found in Appendix D.1. 6.1 RESULTS AND ANALYSIS MOTSP. The results on two and three objective MOTSP are shown in Table 2 and Table 3 respectively. 
MOGLS, NSGAII and MOEA/D all use 2-opt local search heuristic (Jaszkiewicz, 2002) to MOTSP MOCVRP MOKP search for promising solutions. We also include two weight-sum scalarization baselines with the state-of-the-art LKH solver (Helsgaun, 2000; Tinós et al., 2018) and Google OR tools (Perron & Furnon, 2019). For the bi-objective problems, our proposed method with a single model has similar performances compared with AM-MOCO on all problems. It achieves the best performance with instance augmentation, which significantly outperforms other methods but is beaten by the LKH solver. For the three objective problems, our method can further improve its performance by generating much more trade-off solutions within a reasonable amount of time, which other methods cannot do. As shown in Figure 2 and Figure 5, our method can successfully learn the mapping from preferences to the corresponding solutions, and can generate a good prediction to the whole Pareto front. Decision makers can easily obtain any preferred trade-off solutions as they like. This flexibility could be desirable in many real-world applications. More discussion on the connection between the preference and Pareto solution for three-objective TSP can be found in Appendix D.5 D.6 D.7. MOCVRP. In this problem, each node has a demand, and we need to construct multiple return routes for a vehicle with a fixed capacity from the same depot to handle all demands. The objectives we consider are to minimize the tour length for all routes and also the tour length for the longest routes (the makespan in scheduling) (Lacomme et al., 2006). All the non-learning algorithm frameworks use the problem-specific constructive heuristics and local search method proposed in Lacomme et al. (2006) to search feasible non-dominated solutions. The results in Table 2 show that our method significantly outperforms the non-learning heuristics in terms of both solution quality and running time. It also outperforms AM-MOCO with 100 individual models, which could be due to the asymmetric objective scales. We provide further analysis in Appendix D.4. MOKP. The multiobjective 0-1 knapsack problem can be found in many real-world applications (Bazgan et al., 2009). We consider the uni-dimension problem, where each item has multiple values and one weight. The goal is to select a subset of items to maximize all obtained values with a weight constraint. The non-learning methods use binary coding with a greedy transformation heuristic to maintain feasibility (Ishibuchi et al., 2014). We also include weight-sum scalarization baselines with dynamic programming (DP) and a strong greedy search based on the value-weight ratio. According to the results in Table 2, our method has the best performance on all problems. The DP method is also outperformed by our method since the weight-sum scalarization can only find the convex hull of the Pareto front. The Tchebycheff scalarization of MOKP is not a KP problem, while our method is more flexible to use Tchebycheff scalarization on the reward function. We also report the results on 10 objective MOKP100 and the generalization performance to problem with 500 items in Appendix D.8. Out-of-Distribution Problems and Active Adaption. We also validate the generalization performance of our method on 6 out-of-distribution (OOD) MOTSP problems from Fonseca et al. (2006). Their ground truth Pareto fronts can be obtained by exhaustive search. The results are shown in Appendix D.2 due to the page limit. 
With active adaption, our method can achieve good performance (1%-1.5% HV gap to the ground truth Pareto fronts) on these OOD problems.

7 CONCLUSION AND FUTURE WORK

Conclusion. We have proposed a novel preference-conditioned method to approximate the whole Pareto front for MOCO problems using a single model. It allows decision makers to directly obtain any trade-off solutions without any search procedure. Experiments on different problems have shown that our proposed method significantly outperforms other methods in terms of performance, speed and model efficiency. We believe the proposed method is a principled way to solve MOCO problems.

Future Work. In a sense, our method can be regarded as a learning version of the decomposition-based algorithm (MOEA/D (Zhang & Li, 2007)) dealing with all the possible trade-off preferences. Instead of maintaining a set of finite solutions as in other MOEA/D variants (Trivedi et al., 2016), we build a single learning-based model to solve the subproblems for all the preferences simultaneously in a collaborative manner. We believe the single-model-for-all-preferences approach is a promising alternative to the current default finite-population-based methods, and it could be an important research direction for multiobjective optimization. Our method can be further improved with other advanced models and efficient multiobjective training procedures. In the future, we will study fundamental issues of multiobjective optimization (e.g., convergence vs. diversity, exploitation vs. exploration trade-off) for Pareto set learning methods.

Limitation. It is very difficult to give a convergence guarantee for learning-based MOCO, where each preference-based subproblem could already be NP-hard, and the number of Pareto solutions is exponentially large with respect to the input size. See the detailed discussion in Appendix A.

ACKNOWLEDGMENTS

We thank Prof. Hisao Ishibuchi for his valuable comments on an earlier version of this work. This work was supported by the Hong Kong General Research Fund (11208121, CityU-9043148).

A PARETO SET LEARNING AND APPROXIMATION ANALYSIS

A.1 PARETO SET LEARNING AND CONVERGENCE GUARANTEE

In this work, we have proposed a novel neural combinatorial optimization (NCO) method to approximate the whole Pareto set for MOCO problems with a single model. The proposed learning-based MOCO solver can directly generate arbitrary trade-off solutions without extra optimization. We believe it is a principled way to solve MOCO problems. However, the lack of an exact optimality guarantee is a limitation of the proposed method, which is also the case for previous work on single-objective neural combinatorial optimization (Vinyals et al., 2015; Bello et al., 2017; Kool et al., 2019). This limitation is mainly due to the fact that many single-objective combinatorial optimization (CO) problems are NP-hard, and the size of the Pareto set for a MOCO problem would be exponentially huge, which makes it very difficult to solve the problems exactly (Ehrgott, 2005; Herzel et al., 2021). In addition, the training of the parameterized policy (neural network model) cannot guarantee a perfect fit on all training problems. The generalization ability to problem instances with different patterns (out-of-distribution generalization) is another critical issue that makes it difficult to give an exact optimality guarantee for the proposed learning-based algorithm.
On the other hand, our proposed model is an efficient mapping from the preferences to the corresponding approximate set of the Pareto optimal solutions. It provides a flexible way for decision makers to obtain an approximate solution with their preferred trade-off directly. The experimental results also show that our proposed method can generate good approximate Pareto sets for three different MOCO problems. In the next subsection, we provide a thorough discussion on the approximation ability of our proposed method. A.2 APPROXIMATION ANALYSIS For a MOCO problem, the number of Pareto solutions could be exponentially large with respect to its input size, which makes the problem intractable (Ehrgott, 2005; Herzel et al., 2021). The preference-based scalarization methods and decomposition methods (Choo & Atkins, 1983; Zhang & Li, 2007) we used provides a principled way to link the Pareto solutions with preference, allowing us to tackle the problem in a systematic manner. In this work, we propose to approximately solve the scalarized subproblem with all preferences via a single model. We first briefly review the weighted scalarization method and its Pareto optimality guarantee as discussed in the main paper. Then we provide further discussion on the approximation analysis. Our proposed method decomposes a MOCO problem into preference-based subproblems with the weighted-Tchebycheff scalarization (Weighted-TCH): min x∈X gtch(x|λ) = min x∈X max 1≤i≤m {λi|fi(x)− (z∗i − ε)|}, (13) where z∗i is the ideal value for objective fi(x) (e.g., the lower bound), and u ∗ i = z ∗ i − ε is a utopia value with small positive component ε. The preference vector λ ∈ Rm satisfies λi ≥ 0 and ∑m i=1 λi = 1, where λi is the preference for the i-th objective. This approach has a desirable property: Lemma 1 (Choo & Atkins (1983)). A feasible solution x ∈ X is Pareto optimal if and only if there is a weight vector λ > 0 such that x is an optimal solution to the problem (13). According to Lemma 1, we can obtain any Pareto solution by solving the Weighted-TCH subproblem with a specific weight. However, the weight for each Pareto solution depends on its objective values, which are not known in advance (Sawaragi et al., 1985; Ehrgott, 2005). The decision-maker still needs to solve multiple subproblems with different preferences to find a desirable solution. To find the whole Pareto set, it needs to solve an exponentially huge number of subproblems. Given a problem instance s, our proposed model provides a single mapping function xλ = h(λ) from any preference λ to its corresponding solution xλ, which is constructed by the preferencebased policy pθ(λ)(x|s). In the ideal case, if all generated solutions xλ are the optimal solutions x∗λ of problem (13) with preference λ, according to Lemma 1, our proposed model can generate the whole Pareto set (all Pareto optimal solutions) for the original MOCO problem. In practice, we are interested in the proposed method’s approximation ability. We find that its performance strongly depends on the approximation ability of the parameterized policy (neural network model) on the single-objective scalarized subproblem. We first give an informal claim on our method’s approximation ability, then provide detailed explanations and discussions. (Informal) Claim 1. If the proposed method can approximately solve the subproblem (13) with any preference λ, it can generate a good approximation to the whole Pareto set for the MOCO problem. 
According to Lemma 1, we can obtain any Pareto solution by solving the Weighted-TCH subproblem with a specific weight. However, the weight for each Pareto solution depends on its objective values, which are not known in advance (Sawaragi et al., 1985; Ehrgott, 2005). The decision-maker still needs to solve multiple subproblems with different preferences to find a desirable solution, and finding the whole Pareto set would require solving an exponentially large number of subproblems. Given a problem instance s, our proposed model provides a single mapping function x_λ = h(λ) from any preference λ to its corresponding solution x_λ, which is constructed by the preference-based policy p_{θ(λ)}(x|s). In the ideal case, if all generated solutions x_λ are the optimal solutions x_λ^* of problem (13) with preference λ, then according to Lemma 1, our proposed model can generate the whole Pareto set (all Pareto optimal solutions) for the original MOCO problem. In practice, we are interested in the proposed method's approximation ability. We find that its performance strongly depends on the approximation ability of the parameterized policy (neural network model) on the single-objective scalarized subproblem. We first give an informal claim on our method's approximation ability, then provide detailed explanations and discussions.

(Informal) Claim 1. If the proposed method can approximately solve the subproblem (13) with any preference λ, it can generate a good approximation to the whole Pareto set for the MOCO problem.

To support this claim, we follow the traditional ε-Pareto approximation method for MOCO problems (Papadimitriou & Yannakakis, 2000; Herzel et al., 2021). First, an ε-Pareto domination relation between two individual solutions can be defined as:

Definition 3 (ε-Pareto Domination). For a MOCO problem and an ε > 0, let x_a, x_b ∈ X. x_a is said to ε-dominate x_b (x_a ≺_ε x_b) if f_i(x_a) ≤ (1 + ε) f_i(x_b), ∀i ∈ {1, · · · ,m}.

This definition is a natural generalization of the (1 + ε)-approximation for single-objective optimization. With this concept, an ε-approximate Pareto set (Papadimitriou & Yannakakis, 2000) can be defined as:

Definition 4 (ε-Approximate Pareto Set). For an ε > 0, a set P_ε ⊂ X is an ε-approximate Pareto set if, for any feasible solution x ∈ X, there exists a solution x′ ∈ P_ε such that x′ ≺_ε x.

In other words, all feasible solutions of the MOCO problem can be almost dominated by some solutions in P_ε (Papadimitriou & Yannakakis, 2000). When the Pareto set is intractable and hard to find, the ε-approximate Pareto set is a reasonable target to achieve in practice. Each MOCO problem has a unique Pareto set, but it can have different ε-approximate Pareto sets. The ability of our proposed method to find an ε-approximate Pareto set strongly depends on its performance on each single-objective preference-based subproblem.

Theorem 1. Let x_λ^* denote the optimal solution of problem (13) with preference λ. If the proposed method can generate an approximate solution x_λ ≺_ε x_λ^* for any preference λ, then it is able to generate an ε-approximate Pareto set P_ε for the MOCO problem.

Proof. Let P be the Pareto set of a MOCO problem. For any x_Pareto ∈ P, according to Lemma 1, there is a weight vector λ > 0 such that x = x_λ^* is the optimal solution of subproblem (13) with a specific preference λ. Therefore, our proposed method can generate an approximate solution x_λ ≺_ε x_λ^* = x_Pareto. By generating approximate solutions for all x_Pareto ∈ P, our proposed method is able to generate an ε-approximate Pareto set P_ε for the MOCO problem.
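The two definitions above translate directly into code. A small sketch (our own naming, not part of the released implementation) is given below; it assumes minimization with non-negative objective values, as in all problems considered in this work.

def eps_dominates(fa, fb, eps):
    # Definition 3: fa eps-dominates fb if f_i(a) <= (1 + eps) * f_i(b) for every objective i
    return all(a <= (1.0 + eps) * b for a, b in zip(fa, fb))

def is_eps_approximate_pareto_set(front, all_feasible, eps):
    # Definition 4: every feasible objective vector is eps-dominated by some member of the set
    return all(any(eps_dominates(p, y, eps) for p in front) for y in all_feasible)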
A.3 LIMITATION

Strong Assumption on (Approximately) Solving all Subproblems: The approximation guarantee in Theorem 1 heavily depends on the ability to (approximately) solve each weighted subproblem. Due to the NP-hardness, it is indeed non-trivial to give a convergence guarantee for generating solutions that ε-dominate the optima for any preference with a small enough ε. This limitation also applies to other end-to-end learning-based (e.g., neural combinatorial optimization) and heuristic-based methods. We are aware that some efforts have been made to combine learning-based methods with dynamic programming to achieve asymptotically optimal solutions for specific single-objective problems in recent works (Cappart et al., 2021b; Kool et al., 2021). These methods provide a controllable trade-off between the solution quality and the computational cost for solving NP-hard problems. However, their generalization to the multi-objective problem is not straightforward, since the scalarized subproblem for each preference is not necessarily the same as its single-objective counterpart. For example, a Tchebycheff scalarized MOTSP is not a single-objective TSP, as discussed at the end of Section 3.2. In addition, according to Bengio et al. (2020), these methods belong to the class of learning alongside the algorithms, while our proposed approach learns to directly produce the solutions (neural combinatorial optimization). Therefore, the idea of learning-enhanced multiobjective combinatorial algorithms could be an important research topic in the future, but it is out of the scope of the current work.

Dense Approximation for the Whole Pareto Set: Another concern would be the required number of solutions in the ε-approximate Pareto set P_ε. If the required number is exponential in the input size, the approximation itself is also intractable. In their seminal work, Papadimitriou & Yannakakis (2000) establish a promising result:

Theorem 2 (Papadimitriou & Yannakakis (2000)). For any multiobjective optimization problem and any ε, there is an ε-approximate Pareto set P_ε of which the size is polynomial in the number of solutions and 1/ε (but exponential in the number of objectives).

However, the existence of such a set still does not mean that it can be easily found (Papadimitriou & Yannakakis, 2000; Herzel et al., 2021). The computability (whether P_ε can be constructed in polynomial time) would be hard to justify for a real-world problem. For a new unseen problem instance in practice, our proposed method might still need to generate an exponentially large number of solutions to construct an ε-approximate Pareto set P_ε. It is also unclear how to properly select a set of preferences in advance. Many research efforts have been made on developing approximation methods for solving MOCO problems in the past decades (Herzel et al., 2021; Hansen, 1980; Papadimitriou & Yannakakis, 2000; Vassilvitskii & Yannakakis, 2005; Koltun & Papadimitriou, 2005; Bazgan et al., 2017). In future work, it is important to better leverage the current advanced approximation strategies to design more efficient preference-based methods. In the learning-based optimization scenario we consider, it is also possible to learn the suitable approximation method and/or preference distribution directly from the data (problem instances).

B DETAILS ON THE PROPOSED MODEL

B.1 MODEL SETTING

We use the same model for all MOCO problems while tuning the input size and mask method for each problem. Table 4 shows the number of parameters of a standard single-objective attention model (Kool et al., 2019) and our proposed preference-based multiobjective attention model. Our model supports flexible preference assignment at inference time with a small overhead, while the other neural MOCO methods all require training multiple AM models for different preferences. We build the single-preference attention models as well as our model following the implementation in Kwon et al. (2020).

Attention Encoder. The encoder we use is the standard attention encoder as in Kool et al. (2019), and it is shared by all preferences. The encoder has 6 attention layers, and 128-dimensional node embeddings for the input nodes. Each attention layer has a multi-head attention (MHA) with eight 16-dimensional heads, and a fully connected layer (FC) with one 512-dimensional hidden sublayer. The encoder also includes skip-connection and batch normalization for each attention layer. We use the same model for all MOCO problems (MOTSP, MOCVRP, MOKP) but with different input dimensions for each problem, which will be introduced in the next section.

Preference-Conditioned Decoder. The decoder's main model structure is the same as the AM decoder (Kool et al., 2019). It has one multi-head attention layer with eight 16-dimensional heads similar to the encoder, but without skip-connection and batch normalization. The decoder uses a single 128-dimensional attention head to calculate the probabilities of selecting different nodes at each step. Different problems have different masking methods for probability calculation. We use a simple MLP model to generate the preference-conditioned parameters for the decoder. For all MOCO problems, the MLP model has two 128-dimensional hidden layers with ReLU activation. The input is an m-dimensional preference vector λ which satisfies λ_i ≥ 0 and Σ_{i=1}^m λ_i = 1, where m is the number of objectives and λ_i is the preference for the i-th objective. We adopt the parameter compression approach in Ha et al. (2017) to control the model size. The MLP model first generates a hidden embedding e(λ) = MLP(λ|ψ), then maps the hidden embedding to the decoder parameters via linear projection θ_decoder = W e(λ) + b. The learnable parameters are ψ for the MLP model MLP(λ|ψ) and the parameter matrices W and b for the decoder.
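A minimal PyTorch-style sketch of this hypernetwork is given below. The layer sizes follow the description above (two 128-dimensional hidden layers with ReLU), while the class name and the total number of decoder parameters n_decoder_params are placeholders we introduce for illustration, not names from the released code.

import torch
import torch.nn as nn

class PreferenceHypernetwork(nn.Module):
    # MLP hypernetwork: preference lambda -> flat decoder parameter vector theta_decoder(lambda)
    def __init__(self, n_objectives, n_decoder_params, hidden_dim=128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(n_objectives, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
        )
        # parameter compression: the small hidden embedding is linearly projected
        # to the (much larger) decoder parameter vector
        self.proj = nn.Linear(hidden_dim, n_decoder_params)

    def forward(self, pref):            # pref: (m,) with pref >= 0 and sum = 1
        e = self.mlp(pref)              # e(lambda) = MLP(lambda | psi)
        return self.proj(e)             # theta_decoder = W e(lambda) + b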
Training Procedure. For all problems, we train our proposed model for 200 epochs, with 100,000 problem instances randomly generated on the fly at each epoch. At each iteration step, we need to sample K preferences, B problem instances, and N tours to calculate the policy gradient. We set K × B = 64 to form a batch of 64 instances, as used for training a single AM model, and let N equal the problem size (e.g., the number of nodes) as in Kwon et al. (2020). We find the model performance is equally good for K = 1, 2 and 4, and keep using K = 1 for all problems. In other words, we randomly generate a preference λ that satisfies λ_i ≥ 0 and Σ_{i=1}^m λ_i = 1 at each training step. For the AM-MOCO baseline, we adapt the transfer training approach in Li et al. (2020) to train multiple AM models for different preferences. We first train a single AM model with a single preference on one objective from scratch for 200 epochs, then transfer its parameters to the model for the neighboring subproblem with a similar preference, and fine-tune the new model for 5 epochs. With sequential transfer and fine-tuning, we obtain a set of trained models for different preferences. In most experiments, we set the number of preferences to 101. Therefore, we need to build 101 AM models with 700 training epochs in total.

Instance Augmentation for MOCO. Due to the design choice of minimal essential change (e.g., the preference-conditioned decoder), our method can also enjoy the current improvements that were originally proposed for single-objective NCO. Here, we generalize the instance augmentation method proposed in Kwon et al. (2020) to the MOCO version. The key idea of instance augmentation for NCO is to find multiple efficient transformations of the original problem such that they share the same optimal solution. Then, we can use an NCO method to solve all transformed problems and select the best solution among all obtained (potentially different) solutions. In this way, we have a more robust result, similar to test-time augmentation in computer vision (Szegedy et al., 2016). For the single-objective Euclidean TSP and CVRP, there is a set of straightforward transformations, which simply flip or rotate the coordinates of all the 2D locations in a problem instance (Kwon et al., 2020). For a location (x, y), there are eight different transformations, namely, {(x, y), (y, x), (x, 1−y), (y, 1−x), (1−x, y), (1−y, x), (1−x, 1−y), (1−y, 1−x)}. For an m-objective Euclidean MOTSP problem, the concrete location representations are independent for each objective. Therefore, we can independently apply different transformations for each objective. Considering the above eight transformations for each objective, we have 8^m different problem transformations for an MOTSP instance. We have a fixed set of 8 transformations for MOCVRP since it only has one 2D coordinate, and no transformation for MOKP. The details for each problem can be found in the next section.
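The augmentation above can be generated mechanically; a small sketch under our own naming (not from the released implementation) is shown below, where coords holds one 2D coordinate per node and per objective.

import itertools
import numpy as np

def transform_2d(xy, k):
    # The eight coordinate transformations of Kwon et al. (2020) for points in [0, 1]^2.
    x, y = xy[..., 0], xy[..., 1]
    table = [(x, y), (y, x), (x, 1 - y), (y, 1 - x),
             (1 - x, y), (1 - y, x), (1 - x, 1 - y), (1 - y, 1 - x)]
    return np.stack(table[k], axis=-1)

def motsp_augmentations(coords):
    # coords: (n, m, 2) array. Yields the 8^m augmented instances obtained by
    # transforming the coordinates of each objective independently.
    n, m, _ = coords.shape
    for ks in itertools.product(range(8), repeat=m):
        yield np.stack([transform_2d(coords[:, i, :], k) for i, k in enumerate(ks)], axis=1)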
B.2 TRAINING EFFICIENCY

We use the same number of samples to train our proposed preference-based model as the other single-objective solvers need (Kool et al., 2019; Kwon et al., 2020). Indeed, our proposed model requires significantly fewer samples and training epochs compared to the other MOCO methods that need to build multiple models for different preferences. We compare our model's performance on one of the objectives (e.g., with preference (1, 0)) with state-of-the-art single-objective solvers and a learning-based solver; the results are shown in Table 5. The results of Concorde/LKH/OR Tools are from Kwon et al. (2020), and we run the learning-based solver ourselves. We report the average performance over 10,000 test instances. AM is the single-objective solver (one model in AM-MOCO), P-MOCO (single preference) is our proposed model trained only on a single fixed preference (1, 0), and P-MOCO (all preferences) is our proposed model evaluated at the preference (1, 0). With the same amount of training samples, our model has similar single-objective performance to the learning-based single-objective solver, while it can additionally approximate the whole Pareto front. The learning-based solver's performance can be further improved by sampling or active search. These results indicate that we can use a single encoder to efficiently learn a shared representation for all trade-offs among different objectives, and that there is a positive knowledge transfer among preferences during the learning procedure. In addition, it also confirms the assumption that similar preferences should have similar corresponding (approximate) Pareto solutions for the multiobjective problems we consider in this paper. These findings could be useful for designing more powerful learning-based models for MOCO in the future.

B.3 ACTIVE ADAPTION

After end-to-end training, our proposed method can directly generate different trade-off solutions for a given problem without any further search procedure. However, similar to single-objective neural combinatorial optimization, this approach could still have a gap to the Pareto front, especially for problems out of the training distribution S (e.g., with different sizes and patterns) (Lisicki et al., 2020). Iterative search methods, such as sampling and beam search, can further improve the performance for a single solution or a single preference (Veličković & Blundell, 2021). However, these approaches cannot find a better approximation to the whole Pareto set for a MOCO problem.
Algorithm 2 Neural MOCO Active Adaption
1: Input: model parameter θ, instance s, preference distribution Λ, number of adaption steps T, number of preferences per iteration K, number of tours N
2: for t = 1 to T do
3:   λ_k ∼ SamplePreference(Λ)  ∀k ∈ {1, · · · ,K}
4:   π^j_k ∼ SampleTour(p_{θ(λ_k)}(·|s))  ∀k ∈ {1, · · · ,K}, ∀j ∈ {1, · · · ,N}
5:   b(s|λ_k) ← (1/N) Σ_{j=1}^N L(π^j_k|λ_k, s)  ∀k ∈ {1, · · · ,K}
6:   ∇J(θ) ← (1/(KN)) Σ_{k=1}^K Σ_{j=1}^N [(L(π^j_k|λ_k, s) − b(s|λ_k)) ∇_{θ(λ_k)} log p_{θ(λ_k)}(π^j_k|s)]
7:   θ ← ADAM(θ, ∇J(θ))
8: end for
9: Output: The model parameter θ

We propose a simple yet powerful active adaption approach as shown in Algorithm 2. It iteratively adapts the model parameter θ(λ) to a given instance s (or a batch of instances) with all preferences from the distribution Λ, rather than searching for a specific solution. This method is similar to the active search in Bello et al. (2017), which actively refines the single-objective model for efficient candidate solution search. Our approach focuses on adapting the whole model for a better Pareto front approximation. Since this method is distribution-agnostic (it does not depend on a specific instance distribution S), it is suitable for out-of-distribution adaption.

C DETAILS OF THE MOCO PROBLEMS

This section introduces the detailed problem formulations for the MOTSP, MOCVRP and MOKP used in this work. We also provide the model configuration (e.g., input size, masks) for each problem.

C.1 MOTSP

We consider the Euclidean multiobjective traveling salesman problem (Euclidean MOTSP), which is widely used in the MOCO community (Lust & Teghem, 2010b; Florios & Mavrotas, 2014). Its single-objective counterpart, the 2D Euclidean TSP, has also been studied in single-objective neural combinatorial optimization (NCO) (Vinyals et al., 2015; Bello et al., 2017; Kool et al., 2019). A general m-objective MOTSP instance s with n nodes has m n × n cost matrices {C^i = (c^i_{jk}), i = 1, · · · ,m} for m different costs. The problem is to find a tour (cyclic permutation π) that minimizes all the costs:

min L(π|s) = min (L_1(π|s), L_2(π|s), · · · , L_m(π|s)), where L_i(π|s) = c^i_{π(n)π(1)} + Σ_{j=1}^{n−1} c^i_{π(j)π(j+1)}.   (14)

In a Euclidean MOTSP, the cost information is stored in the nodes rather than the edges. The j-th node has a 2m-dimensional vector [x^1_j, x^2_j, · · · , x^m_j], where x^i_j ∈ R^2 is a 2D coordinate for the i-th objective. The i-th cost c^i_{jk} = ||x^i_j − x^i_k||_2 is the Euclidean distance for moving from node j to node k. If we only have one objective (m = 1), it reduces to the single-objective 2D Euclidean TSP:

min_π L_1(π|s) = ||x_{π(n)} − x_{π(1)}||_2 + Σ_{j=1}^{n−1} ||x_{π(j)} − x_{π(j+1)}||_2.   (15)

The single-objective TSP is already NP-hard, and so is the MOTSP. In addition, the Pareto set of MOTSP has an exponential cardinality with respect to its input size (e.g., the number of nodes), so it is intractable even for the 2-objective case (Ehrgott & Gandibleux, 2003).

Problem Instance. Similar to the previous work on single-objective NCO (Lust & Teghem, 2010b; Florios & Mavrotas, 2014), we randomly sample all n nodes with uniform distribution on the 2m-dimensional unit hyper-square (e.g., [0, 1]^{2m}) for all problem instances.

Model Details. In m-objective MOTSP, each node has a 2m-dimensional vector to store all cost information, so the input size is 2m for the encoder. To calculate the probability of selecting the next node, the decoder masks all already visited nodes as unavailable. We have a valid tour when all nodes are selected (we assume the end node connects back to the start node).
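For reference, the objective vector in Eq. (14) for a Euclidean MOTSP instance can be computed as in the short sketch below (our own naming, for illustration only).

import numpy as np

def motsp_costs(tour, coords):
    # Objective vector L(pi|s) of Eq. (14) for a Euclidean MOTSP instance.
    # tour:   permutation of node indices, length n
    # coords: (n, m, 2) array, the 2D coordinate of each node for each objective
    order = coords[np.asarray(tour)]               # nodes in visiting order, (n, m, 2)
    nxt = np.roll(order, -1, axis=0)               # shift by one to close the cycle
    return np.linalg.norm(order - nxt, axis=-1).sum(axis=0)   # (m,) tour lengths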
C.2 MOCVRP

The vehicle routing problem (VRP) is a classical generalization of the TSP, which has been studied for several decades. This work studies the capacitated vehicle routing problem (CVRP). In this problem, in addition to its location, each node (city) has a demand δ_i that needs to be satisfied. There is an extra depot node and a vehicle with a fixed capacity D > δ_i, ∀i to handle all the demands. The vehicle always starts from the depot node, then visits different cities to satisfy multiple demands (with Σ δ_i ≤ D on each route), and returns to the depot node. A solution to this problem is a set of routes that satisfies the demands of all cities. In the multiobjective problem, we consider two objectives to optimize. The first one is the total tour length, as in the single-objective CVRP, and the other one is the tour length of the longest route (which is also called the makespan in scheduling theory). This problem has been studied in the MOCO community (Lacomme et al., 2006).

Problem Instance. Similar to the TSP problem, the locations of the n nodes are uniformly sampled from the unit square. For the demand, similar to the previous work on the single-objective counterpart (Kool et al., 2019; Kwon et al., 2020), we randomly sample discrete δ_i from the set {1, · · · , 9}. For problems of size n = 20, 50, 100, we set the capacity as D_20 = 30, D_50 = 40 and D_100 = 50, respectively. Without loss of generality, we normalize the demands δ̂_i = δ_i / D and the capacity D̂ = D / D = 1, as in the previous work (Kool et al., 2019; Kwon et al., 2020). Split delivery is not allowed in this problem.

Model Details. In the MOCVRP, the depot node has a 2-dimensional location vector, and the other nodes all have 3-dimensional vectors to store their locations and demands. We use different parameter matrices to project the nodes into input embeddings with the same dimension d_h = 128. For node selection, the model records the current capacity of the vehicle and the remaining demands of all nodes. If a node has already been visited or has a demand larger than the vehicle's current capacity, it is masked as unavailable for the vehicle to visit. If no node is available to visit, the vehicle goes back to the depot. Once all nodes have zero demand, the node selection is finished and we have a valid solution to the problem.

C.3 MOKP

The knapsack problem (KP) is also a widely studied combinatorial optimization problem. In this work, we consider the 0-1 multiobjective knapsack problem (MOKP) with m objectives and n items:

max f(x) = max (f_1(x), f_2(x), · · · , f_m(x)), where f_i(x) = Σ_{j=1}^n v^i_j x_j, subject to Σ_{j=1}^n w_j x_j ≤ W, x_j ∈ {0, 1},   (16)

where each item has a weight w_j and m different values {v^i_j, i = 1, · · · ,m}. The problem (e.g., the knapsack) has a maximum weight capacity W, and the goal is to select a set of items within the weight capacity to maximize the sum of values for each objective. To make this problem nontrivial, we further assume that all values v^i_j, ∀i, j, all weights w_j, ∀j, and the total capacity are non-negative real values. The total weight of all items is larger than the capacity, Σ_j w_j > W, while each single weight is smaller than the capacity, w_j < W, ∀j = 1, · · · , n. The single-objective knapsack problem is NP-hard, and so is the MOKP (Ehrgott & Gandibleux, 2003).

Problem Instance. We randomly generate the values and the weight for each item uniformly in [0, 1]. We consider problems with n = 50, 100, 200 items, and the weight capacities are W_50 = 12.5, W_100 = W_200 = 25, as in the previous work (Bello et al., 2017; Kwon et al., 2020).

Model Details. In an m-objective MOKP, each item has m values and 1 weight, so the input dimension is 3 for the encoder. For node selection at each step, we mask all already selected nodes and nodes with weights larger than the remaining capacity as unavailable. We terminate the selection when all nodes are labeled as unavailable.
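A small sketch of evaluating a candidate item selection under formulation (16) is given below; the function name and array conventions are ours and only for illustration.

import numpy as np

def mokp_evaluate(x, values, weights, W):
    # Evaluate a 0-1 selection for the MOKP of Eq. (16).
    # x: (n,) 0/1 vector; values: (n, m) item values; weights: (n,) item weights; W: capacity.
    x = np.asarray(x, dtype=float)
    f = values.T @ x                 # f_i(x) = sum_j v^i_j x_j for each objective i
    feasible = weights @ x <= W      # weight constraint of Eq. (16)
    return f, feasible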
D ADDITIONAL EXPERIMENTAL RESULTS

D.1 HYPERVOLUME INDICATOR

To solve a MOCO problem, the result of each method is a set of approximate Pareto solutions. Since the ground truth Pareto set is usually unknown, we use the hypervolume (HV) indicator (Zitzler et al., 2007) to numerically compare the performance of each method. The hypervolume indicator is widely used in the MOCO community for algorithm comparison. The hypervolume of a set is the volume in the objective space it dominates. For a set P ⊂ R^m in the objective space, we can find a reference point r^* that is dominated by all solutions in P, and define the hypervolume HV(P) as the volume of the set:

S = {r ∈ R^m | ∃ y ∈ P such that y ≺ r ≺ r^*},   (17)

where HV(P) = Vol(S). An illustrative example is shown in Figure 4. The grey area is the set S dominated by the solutions in the set P = {p_1, p_2, p_3, p_4} with the reference point r^*. In this 2-dimensional case, the hypervolume HV(P) is the size of the grey area. The hypervolume indicator has two important advantages for measuring the quality of an approximate set with respect to Pareto optimality (Zitzler et al., 2007). First, if an approximate set A dominates another approximate set B, it will have a strictly better hypervolume HV(A) > HV(B). In addition, if an approximate set C contains all Pareto optimal solutions, it is guaranteed to have the maximum hypervolume value. In comparison, an approximate set has better performance if it has a larger hypervolume. With different objective scales, the hypervolume value will vary significantly among different problems. We report the normalized hypervolume values Ĥ(P) = HV(P) / Π_{i=1}^m r^*_i for all methods, and also their performance gaps to our method. For each experiment, all methods share the same reference point r^*, which contains the largest value achieved for each objective. Since all problems we consider have positive objective values, we have 0 ≤ Ĥ(P) ≤ 1 for all solution sets. The ground truth Pareto set P^* usually has Ĥ(P^*) < 1, unless the zero vector 0 ∈ R^m is feasible and in the Pareto set.

D.2 OUT-OF-DISTRIBUTION PROBLEM WITH EXACT PARETO FRONT

We conduct experiments on 6 two-objective MOTSP100 instances (L1-L6) in Florios & Mavrotas (2014) of which the exact Pareto fronts are available. In these problems, the objective functions have different ranges, and the cities are not uniformly located, so they are out of our method's training distribution. The results can be found in Table 6. In addition to the hypervolume, we also report the Inverted Generational Distance (IGD) (Fonseca et al., 2006) to measure the average Euclidean distance between the set of approximate Pareto solutions and the exact Pareto front. A smaller IGD value means the approximate set is closer to the exact Pareto front. According to the results, our method, with instance augmentation and/or active search (10 min budget), achieves good performance on these out-of-distribution (OOD) instances with a 1%-1.5% hypervolume gap. The proposed method also significantly outperforms the weight-sum OR Tools baseline. There is still a gap to the strong weight-sum LKH baseline. As discussed in the paper, robust OOD generalization is an important research direction for the learning-based solver.
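All hypervolume values in this appendix follow the definition in D.1. For reference, a minimal sketch of the 2-dimensional case (minimization, with a reference point dominated by all solutions) is shown below; the function name is ours and not from the released implementation.

def hypervolume_2d(points, ref):
    # Hypervolume (Eq. 17) of a set of 2D objective vectors under minimization.
    pts = sorted(points, key=lambda p: (p[0], p[1]))
    front, best_y = [], float("inf")
    for x, y in pts:                      # keep only non-dominated points
        if y < best_y:
            front.append((x, y))
            best_y = y
    hv, prev_y = 0.0, ref[1]
    for x, y in front:                    # accumulate the dominated rectangles
        hv += (ref[0] - x) * (prev_y - y)
        prev_y = y
    return hv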
D.3 FLEXIBLE PREFERENCE-BASED APPROXIMATION

With our model, it is flexible to generate different numbers of solutions to approximate the Pareto front. We present an example on the three-objective TSP in Figure 5. We use the structured weight assignment approach from Das & Dennis (1998) to give the sets of weights for different instances. This method can generate n = C(m+p−1, p) evenly distributed weights with an identical distance to their nearest neighbor on the unit simplex (i.e., Σ_{i=1}^m λ_i = 1 with λ_i ≥ 0, ∀i), where m is the number of objectives and p is a parameter controlling the number of weights. For the three-objective TSP problems (m = 3), we assign p = 13, 44 and 140 to generate n = 105, 1035 and 10011 weights, respectively. We also show the corresponding generated solutions for MOTSP instances with 20, 50 and 100 cities. According to the results in Figure 5, our model can generate well-distributed solutions with a small number of preferences, and a dense approximation with more preferences. The ability to generate a dense approximation to the whole Pareto set also allows the decision-maker to generate arbitrary preferred solutions on the approximate front.

D.4 PREFERENCE-SOLUTION CONNECTION

We further analyze the connection between a preference and its corresponding solution on uniform and non-uniform Pareto fronts. Figure 6 shows the connections in our model with different numbers of preferences for the MOTSP100 instance. Since the two objectives (costs) in MOTSP have the same scale, this problem has a uniform connection between the preferences and the (approximate) Pareto front. By increasing the number of preferences, we obtain three generated Pareto front approximations ranging from sparse to dense. We are more interested in MOCVRP, which has a non-uniform Pareto front. In this problem, we consider two different objectives to optimize, namely, the total tour length (objective 1) and the tour length of the longest route (objective 2). These two objectives are on quite different scales, where the first objective is significantly larger than the second one. In Figure 7, we show different connections for the MOCVRP100 instance. For MA-MOCO, we report the connections for all 101 models. For our proposed model, we report the connections with different numbers of uniform preferences. In this problem, 101 models or our model with 101 uniform preferences are not enough to generate a dense approximate Pareto front. The obtained solutions are biased toward the area where objective 1 has a much better relative performance. By increasing the number of preferences, our proposed method can generate more solutions that have relatively better performance for objective 2, which leads to a better Pareto front approximation with a higher hypervolume. In this work, we always use a straightforward uniform sampling method to select the preferences. It is interesting to design a learning-based approach to select the preferences for a given problem instance. Preference adjustment and model adaption with awareness of the shape of the Pareto front are also worth investigating. We leave them to future work. In the MOCVRP instance, we also find that the 101-model MA-MOCO has worse performance than our method with 101 preferences. The reason would be the mismatch between the uniform transfer training and the non-uniform Pareto front.
Increasing the training steps for fine-tuning each model might fix this issue, but it would lead to an even larger computational overhead, given that the current training already requires 700 epochs. The fixed preference assignment is another issue for MA-MOCO. It requires a fixed set of preferences for each model at the start of the training procedure, when the decision makers might have no knowledge of the problem. When the training procedure is done, it does not allow any preference adjustment without retraining the models.

D.5 CONNECTION BETWEEN PREFERENCES AND SOLUTIONS

In the previous sections, we use the weighted Tchebycheff aggregation to connect a preference to its corresponding solution for two-objective optimization problems:

g^{tch}(x|λ) = max_{1≤i≤m} {λ_i |f_i(x) − z_i^*|},   (18)

where z_i^* < min_{x∈X} f_i(x) is an ideal value for f_i(x). There are also many other aggregation functions we can use to build the connection. For example, a modified version of the weighted Tchebycheff aggregation can be defined as:

g^{mtch}(x|λ) = max_{1≤i≤m} {(1/λ_i) |f_i(x) − z_i^*|},   (19)

where the only difference is the weight 1/λ_i. The penalty-based boundary intersection (PBI) is another widely-used aggregation function for decomposition-based multiobjective optimization (Zhang & Li, 2007):

g^{pbi}(x|λ) = d_1 + θ d_2, with d_1 = |(F(x) − z^*)^T λ| / ||λ||, d_2 = ||F(x) − z^* − d_1 (λ/||λ||)||,   (20)

where θ is the penalty parameter, and F(x) = (f_1(x), . . . , f_m(x)) and z^* = (z_1^*, . . . , z_m^*) are the objective vector and the ideal vector, respectively. An inverted version of the PBI (IPBI) aggregation function (Sato, 2014) can be defined as:

g^{ipbi}(x|λ) = −d_1 + θ d_2, with d_1 = |(z^N − F(x))^T λ| / ||λ||, d_2 = ||z^N − F(x) − d_1 (λ/||λ||)||,   (21)

where z^N is the nadir vector that contains each objective's worst value among all Pareto solutions. For a two-objective optimization problem, when we can find a dense set of corresponding solutions to cover the Pareto front for each aggregation function, their performance could be similar to each other. However, different aggregation functions would have quite different performance on problems with three or more objective functions (called many-objective optimization problems). The performance heavily depends on the shape of the Pareto front (Ishibuchi et al., 2016), especially with a limited number of approximate solutions. We compare the performance of our proposed method with different aggregation functions on MOTSP50 with 105, 1035 and 10011 preferences, respectively, in Fig. 8. According to the results, the IPBI method generates the most uniformly distributed solutions for the MOTSP problem with an inverted triangular Pareto front shape, of which the shape is similar to the weight vector distribution (e.g., see Fig. 5). This observation is consistent with the findings and analysis in Ishibuchi et al. (2016). According to these results, we use the Tchebycheff aggregation for all two-objective optimization problems and the IPBI aggregation for all problems with more than two objective functions in this work. Since the shape of the Pareto front tends to be irregular for real-world applications (Ishibuchi et al., 2019), how to properly choose the aggregation function and assign the preference distribution could be an important direction for future work.
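The Tchebycheff aggregation was sketched after Lemma 1; for completeness, a minimal NumPy sketch of the PBI and IPBI aggregations in (20)-(21) is given below. The penalty value theta=5.0 is only a placeholder default, not a setting reported in this work.

import numpy as np

def pbi(F, lam, z_ideal, theta=5.0):
    # Penalty-based boundary intersection, Eq. (20).
    F, lam, z = map(np.asarray, (F, lam, z_ideal))
    d1 = abs(np.dot(F - z, lam)) / np.linalg.norm(lam)
    d2 = np.linalg.norm(F - z - d1 * lam / np.linalg.norm(lam))
    return d1 + theta * d2

def ipbi(F, lam, z_nadir, theta=5.0):
    # Inverted PBI, Eq. (21), measured from the nadir vector z_nadir.
    F, lam, z = map(np.asarray, (F, lam, z_nadir))
    d1 = abs(np.dot(z - F, lam)) / np.linalg.norm(lam)
    d2 = np.linalg.norm(z - F - d1 * lam / np.linalg.norm(lam))
    return -d1 + theta * d2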
D.6 THREE-OBJECTIVE MOTSP WITH ASYMMETRIC PARETO FRONT

In this subsection, we conduct experiments on three-objective MOTSP100 instances with asymmetric Pareto fronts. The definition of an irregular MOTSP instance is almost the same as in Section C.1, except that the coordinates for the three objectives are randomly sampled from [0, 1]^2, [0, 0.5]^2 and [0, 0.1]^2, respectively, rather than uniformly from [0, 1]^6. In this way, the objective values of the MOTSP instance will be on quite different scales, leading to an irregular Pareto front (the axes in Figure 9 are in different scales). A well-known drawback of the scalarization-based approach is that it cannot evenly explore an irregular Pareto front with a set of uniform weights, which can also be observed in Figure 9(a)-(d). Our proposed approach allows the user to generate arbitrary trade-off Pareto solutions at inference time, so they can directly generate a dense approximation and then select the preferred solutions, as in Figure 9(d). This flexibility can partially address the uneven distribution issue caused by a (small) set of fixed weights in the traditional scalarization-based approach. If we know the approximate range of the different objectives in advance, we can first normalize them into [0, 1] to encourage a more symmetric Pareto front. Otherwise, at inference time, we can use a (prior knowledge-based) biased and non-uniform weight assignment to generate uniformly distributed solutions. In Figure 9(e)-(h), we first multiply the three-dimensional weights by (1, 2, 10) and then normalize them back to [0, 1]^3, which leads to a set of non-uniform weights as shown in Figure 9(e). With this weight assignment, we have a set of more evenly distributed Pareto solutions, as shown in Figure 9(f)-(h).

D.7 PREFERENCE-BASED INFERENCE

Even without any prior knowledge, our proposed approach allows the user to adaptively adjust the weights in real time to search for the most suitable solutions in their preferred region(s). Some examples of selected weights and their corresponding solutions are shown in Figure 10 for a symmetric Pareto front and Figure 11 for an asymmetric Pareto front. If we have prior knowledge of the preference (e.g., the decision-makers only care about a specific region of the Pareto front), we can modify the training preference distribution Λ accordingly to enhance the training efficiency. For problems with a truly irregular Pareto front, it is also possible to adaptively adjust the given weights to make them evenly explore the Pareto front during the learning/searching process. One potential direction could be to consider the connection between scalarization and hypervolume maximization, as in Zhang & Golovin (2020). We believe this could be an important research topic for the learning-based scalarization approach in future work.

D.8 PROBLEM WITH MORE OBJECTIVES

Finally, we test the performance of our proposed method on 10-objective knapsack problems. We train a new model for the 10-objective MOKP with 100 items with uniform 10-dimensional preferences. The obtained value path plots on the 10-objective MOKP100 are shown in Figure 12. For problems with more objectives, we need a large number of solutions to approximate the Pareto set. Training a large number of neural network models would have a huge computational and storage overhead, which is also not desirable in practice. Therefore, we do not compare with the AM-MOCO and MOA-DRL methods on this problem. For inference, to approximate the Pareto set, we use a set of 715 fixed preferences following the weight assignment approach from Das & Dennis (1998) (with m = 10, p = 4, hence n = C(10+4−1, 4) = 715).
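The Das & Dennis (1998) weight construction used here (and in D.3) can be generated with a compact stars-and-bars enumeration; the sketch below uses our own naming and is not taken from the released implementation.

import itertools
import numpy as np

def das_dennis_weights(m, p):
    # All vectors (k_1, ..., k_m)/p with non-negative integers k_i summing to p,
    # giving C(m+p-1, p) evenly distributed points on the unit simplex.
    weights = []
    for c in itertools.combinations(range(p + m - 1), m - 1):
        k = np.diff([-1, *c, p + m - 1]) - 1   # gaps between the "bars" give the integer parts
        weights.append(k / p)
    return np.array(weights)

# e.g. das_dennis_weights(3, 13) gives 105 weights, das_dennis_weights(10, 4) gives 715 weights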
The model generates a different trade-off solution for each preference, so there are 715 different value paths (lines) on each plot. In MOKP, we want to maximize the values for all objectives under the capacity limitation. A set of good approximate solutions should have relatively high overall values. According to the results, our proposed method has the best performance. We also test the performance of our method on a larger problem with 500 items. The results shown in Figure 13 confirm that our trained model generalizes well to problems of a larger size.
1. What is the focus of the paper regarding multi-objective combinatorial optimization issues?
2. What method does the author suggest to approximate the Pareto set?
3. What is the reviewer's opinion on the significance of representing the Pareto set using an ML model?
4. Does the reviewer have any concerns about the approach suggested in the paper?
5. How does the reviewer assess the theoretical contribution of the paper?
6. Are there any questions or suggestions regarding the experimental results presented in the paper?
Summary Of The Paper Review
Summary Of The Paper
This submission treats multi-objective combinatorial optimization problems and aims to approximate the Pareto set. The idea is to build a single ML model that represents the Pareto set by providing a Pareto set solution for any desired trade-off. The model is built using reinforcement learning and may either be used to find singular solutions with a fixed trade-off, or to approximate the Pareto set with uniform samples. The authors prove that the Pareto set is approximated well if individual trade-offs are approximated well.
Review
In my opinion, the authors provide an interesting and novel approach to multi-objective optimization problems. Even representing the potentially exponentially large Pareto set is a challenge, and it is interesting that this can be done via an ML model. A concern could be that in order to achieve a good approximation of the Pareto set, it seems that learning the single-objective problem well is required (which often is a difficult problem in itself). Would it be possible to solve the single-objective variants with a traditional combinatorial algorithm? The theoretical contribution of this submission is limited. The approach is evaluated for three very relevant problems: the multi-objective traveling salesperson problem, the multi-objective capacitated vehicle routing problem and the capacitated knapsack problem. A minor concern is that I found the tables with the experimental results hard to read. Could you please add a short explanation for the meaning of the table/column headers?
ICLR
Title Pareto Set Learning for Neural Multi-Objective Combinatorial Optimization Abstract Multiobjective combinatorial optimization (MOCO) problems can be found in many real-world applications. However, exactly solving these problems would be very challenging, particularly when they are NP-hard. Many handcrafted heuristic methods have been proposed to tackle different MOCO problems over the past decades. In this work, we generalize the idea of neural combinatorial optimization, and develop a learning-based approach to approximate the whole Pareto set for a given MOCO problem without further search procedure. We propose a single preference-conditioned model to directly generate approximate Pareto solutions for any trade-off preference, and design an efficient multiobjective reinforcement learning algorithm to train this model. Our proposed method can be treated as a learning-based extension for the widely-used decomposition-based multiobjective evolutionary algorithm (MOEA/D). It uses a single model to accommodate all the possible preferences, whereas other methods use a finite number of solutions to approximate the Pareto set. Experimental results show that our proposed method significantly outperforms some other methods on the multiobjective traveling salesman problem, multiobjective vehicle routing problem, and multiobjective knapsack problem in terms of solution quality, speed, and model efficiency. 1 INTRODUCTION Many real-world applications can be modeled as multiobjective combinatorial optimization (MOCO) problems (Ehrgott & Gandibleux, 2000). Examples include the multiobjective traveling salesman problem (MOTSP) (Lust & Teghem, 2010a), the multiobjective vehicle routing problem (MOVRP) (Jozefowiez et al., 2008) and the multiobjective knapsack problem (MOKP) (Bazgan et al., 2009). These problems have multiple objectives to optimize, and no single solution can optimize all the objectives at the same time. Instead, there is a set of Pareto optimal solutions with different trade-offs among the objectives. It is very challenging to find all the exact Pareto optimal solutions for a MOCO problem. Actually, finding one single Pareto optimal solution can be NP-hard for many problems (Ehrgott & Gandibleux, 2000), and the number of Pareto solutions could be exponentially large with regard to the problem size (Ehrgott, 2005; Herzel et al., 2021). The decision-maker’s preference among different objectives is usually unknown in advance, making it very difficult to reduce the problem into a single-objective one. Over the past several decades, many methods have been developed to find an approximate Pareto set for different MOCO problems within a reasonable computational time. These methods often need carefully handcrafted and specialized heuristics for each problem. It can be very labor-intensive in practice. In many real-world applications, practitioners need to solve many different instances for the same particular problem, where the instances can be easily obtained or generated (Bengio et al., 2020). It is desirable to learn the patterns behind these problem instances explicitly or implicitly to design efficient algorithms (Cappart et al., 2021a). Machine learning techniques can be naturally used for this purpose. Some learning-based methods have been recently proposed for solving single-objective combinatorial optimization problems (Bengio et al., 2020; Vesselinova et al., 2020; Mazyavkina et al., 2021; Cappart et al., 2021a). 
In this work, we extend the learning-based method to solve MOCO problems in a principled way as shown in Figure 1. Our main contributions include:
• We propose a novel neural multiobjective combinatorial optimization method to approximate the whole Pareto set via a single preference-conditioned model. It allows decision makers to obtain any preferred trade-off solution without any search effort.
• We develop an efficient end-to-end reinforcement learning algorithm to train the single model for all different preferences simultaneously, and a simple yet powerful active adaption method to handle out-of-distribution problem instances.
• We conduct comprehensive experiments on MOTSP, MOVRP and MOKP of different settings. The results show that our proposed method can successfully approximate the Pareto sets for different problems in an efficient way. It also significantly outperforms other methods in terms of solution quality, speed, and model efficiency.

2 BACKGROUND AND RELATED WORK

Multiobjective Combinatorial Optimization (MOCO). MOCO has been attracting growing research efforts from different communities over the past several decades (Sawaragi et al., 1985; Wallenius et al., 2008; Herzel et al., 2021). There are two main approaches to tackle MOCO problems: exact methods and approximation methods (Ehrgott, 2005). Exact methods could be prohibitively costly when, as often happens, the MOCO problem is NP-hard and the problem size is very large (Florios & Mavrotas, 2014). For this reason, many heuristics (Jaszkiewicz, 2002; Zhang & Li, 2007; Ehrgott & Gandibleux, 2008) and approximation methods (Papadimitriou & Yannakakis, 2000; Herzel et al., 2021) have been developed to find a manageable number of approximate Pareto solutions with a reasonable computational budget. However, these methods usually depend on carefully handcrafted designs for each specific problem (Ehrgott & Gandibleux, 2000), and the required effort is often nontrivial in real-world applications.

Machine Learning for Combinatorial Optimization. As summarized in Bengio et al. (2020), there are three main learning-based approaches for combinatorial optimization: learning to configure algorithms (Kruber et al., 2017; Bonami et al., 2018), learning alongside the algorithms (Lodi & Zarpellon, 2017; Gasse et al., 2019; Chen & Tian, 2019), and learning to directly predict the solutions (Nowak et al., 2018; Emami & Ranka, 2018; Larsen et al., 2018). Neural combinatorial optimization (NCO) belongs to the last category, where the model directly produces a good solution for a given problem instance. Vinyals et al. (2015) proposed a pointer network to sequentially construct a solution for the TSP problem. Bello et al. (2017) made a critical improvement by using reinforcement learning to train the model, eliminating the impractical requirement of collecting optimal solutions for NP-hard problems. Some other improvements on model structure and training procedure have been proposed in the past few years (Nazari et al., 2018; Deudon et al., 2018; Kool et al., 2019; Veličković & Blundell, 2021), especially with graph neural networks (GNNs) (Dai et al., 2017; Li et al., 2018; Joshi et al., 2019; Dwivedi et al., 2020; Drori et al., 2020).
Recent efforts have been made on more efficient learning strategies (Kwon et al., 2020; Karalias & Loukas, 2020; Lisicki et al., 2020; Geisler et al., 2022), learning-based graph search (Cappart et al., 2021b; Kool et al., 2021; Fu et al., 2021; Xin et al., 2021; Hudson et al., 2022), and iterative improvement methods (Wu et al., 2021; Ma et al., 2021; Li et al., 2021). Neural MOCO. Most of the existing learning-based methods are for single-objective combinatorial problems. Recently, a few attempts have been made to solve MOCO problems (Li et al., 2020; Wu et al., 2020; Zhang et al., 2021a;b). These methods adopt the MOEA/D framework (Zhang & Li, 2007) to decompose a MOCO problem into a number of single-objective subproblems, and then build a set of models to solve each subproblem separately. However, since the number of Pareto solutions would be exponentially large (Ehrgott, 2005), the required number of models would be huge for finding the whole Pareto set. In this work, we propose a single preference-conditioned model for solving MOCO problems, with which the decision makers can easily obtain any trade-off solutions. The proposed single neural MOCO solver could be much easier to use in a real-world system (Veličković & Blundell, 2021), than those using a large set of different models. 3 PROBLEM FORMULATION 3.1 MULTIOBJECTIVE COMBINATORIAL OPTIMIZATION A multiobjective combinatorial optimization (MOCO) problem can be defined as follows: min x∈X F (x) = (f1(x), f2(x), . . . , fm(x)), (1) where X is a discrete search space, and F (x) = (f1(x), . . . , fm(x)) is an m-objective vector. Since the individual objectives conflict each other, no single solution can optimize all of them at the same time. Therefore, practitioners are interested in Pareto optimal solutions, defined as follows. Definition 1 (Pareto Dominance). Let xa, xb ∈ X , xa is said to dominate xb (xa ≺ xb) if and only if fi(xa) ≤ fi(xb),∀i ∈ {1, ...,m} and fj(xa) < fj(xb),∃j ∈ {1, ...,m}. Definition 2 (Pareto Optimality). A solution x∗ ∈ X is a Pareto optimal solution if there does not exist x̂ ∈ X such that x̂ ≺ x∗. The set of all Pareto optimal solutions is called the Pareto set, and the image of the Pareto set in the objective space is called the Pareto front. Each Pareto solution represents an optimal trade-off among the objectives, and it is impossible to further improve one of the objectives without deteriorating any other objectives. 3.2 DECOMPOSITION AND PREFERENCE-BASED SCALARIZATION Decomposition is a mainstream strategy for solving multiobjective optimization problem (Zhang & Li, 2007). It decomposes a multiobjective problem into a number of subproblems, each of which can be a single objective or multiobjective optimization problem. MOEA/D (Zhang & Li, 2007) and its variants (Trivedi et al., 2016) solve these subproblems in a collaborative manner and generate a finite set of Pareto solutions to approximate the Pareto front. The most widely used way for constructing a single objective subproblem is the preference-based scalarization (Ehrgott, 2005; Miettinen, 2012). For an m-objective optimization problem, a preference vector for the objective functions can be defined as λ ∈ Rm that satisfies λi ≥ 0 and ∑m i=1 λi = 1. Weighted-Sum Aggregation is the simplest approach. It defines the aggregation function to minimize in the subproblem associated with λ as gws(x|λ) = m∑ i=1 λifi(x). (2) However, this approach can only find solutions on the convex hull of the Pareto front (Ehrgott, 2005). 
Weighted-Tchebycheff (Weighted-TCH) Aggregation is an alternative approach to minimize: gtch(x|λ) = max 1≤i≤m {λi|fi(x)− z∗i |}, (3) where z∗i < minx∈X fi(x) is an ideal value for fi(x). Any Pareto optimal solution could be an optimal solution of problem (3) with a specific (but unknown) preference λ (Choo & Atkins, 1983). 3.3 CURRENT DRAWBACKS AND OUR METHOD Drawbacks of Existing Methods. For many MOCO problems, the size of the Pareto set would be exponentially large with respect to the input size (e.g., nodes in MOTSP). It is computationally impractical for existing methods to find the whole Pareto set (Herzel et al., 2021). For this reason, all of the existing heuristic-based and learning-based methods are to find a small subset of approximate Pareto solutions. Decision makers can only select solutions from this small set, which often does not contain their preferred solutions. In addition, scalarization may also produce a complicated single objective subproblem. For example, the Tchebycheff scalarized subproblem of MOTSP is not a classic TSP, and thus cannot be solved by the highly specialized TSP solvers such as LKH (Helsgaun, 2000) or Concorde (Applegate et al., 2007). Our Method. Instead of finding a set of finite solutions, we propose a novel way to approximate the whole Pareto set using a single model. With our proposed model, decision makers can easily obtain any solution from the approximate Pareto set to satisfy their preferred trade-offs in real time as shown in Figure 2. This is a clear advantage to support interactive decision making. In addition, our proposed reinforcement learning based method can use a scalarization method to combine multiobjective rewards, and does not need to consider the problem-specific condition. In this paper, we mainly consider learning the whole Pareto front. It is possible to incorporate decision-maker’s preferences on specific regions for model building and inference as discussed in Appendix D.6. We believe our proposed method is a new principled way to solve multiobjective combinatorial optimization problems. 4 THE PROPOSED MODEL: PREFERENCE-CONDITIONED NEURAL MOCO 4.1 PREFERENCE-CONDITIONED SOLUTION CONSTRUCTION Decomposition and scalarization link preferences to their corresponding Pareto solutions. This work builds a preference-conditioned model to accommodate all the preferences. We use the MOTSP as an example to explain our model design. In an MOTSP instance s, a fully connected graph of n nodes (cities) with m distance metrics on each edge is given. A feasible solution is a tour that visits each city exactly once and returns to the starting city. The i-th objective to minimize is the tour length (total cost) based on the i-th distance metric. A tour can be represented as π = (π1, · · · , πt, · · · , πn), πt ∈ {1, · · · , n}, a permutation of all the nodes defining the order in which n cities is visited. Our model defines a preference-conditioned stochastic policy pθ(λ)(π|s) parameterized by θ(λ) to construct a valid solution in sequence: pθ(λ)(π|s) = ∏n t=1 pθ(λ)(πt|s,π1:t−1). (4) The goal is to learn an optimal preference-conditioned policy pθ(λ)(π|s) to construct tours with the lowest scalarized costs for each preference λ. 4.2 THE PROPOSED MODEL We propose to use an Attention Model (AM) (Kool et al., 2019) as our basic encoder-decoder model as shown in Figure 3. 
For the MOCO problems considered in this work, a preference-agnostic encoder is capable to transfer problem instances into embeddings (e.g., embedding for all cities) used in the preference-conditioned decoder. In our model, only the decoder’s parameters θdecoder(λ) are conditioned on the preference λ: θ(λ) = [θencoder,θdecoder(λ)]. (5) Preference-agnostic Encoder. The encoder takes a problem instance s (e.g., an MOTSP instance with n cities) as its input, and outputs a set of d-dimensional node embeddings {h1, · · · ,hn} for each city. For a given instance, the same embeddings can be used for different preferences. Hence we only need a single forward pass for the dense encoder. We use the attention-based encoder as in Kool et al. (2019) for all preferences. Preference-based Attention Decoder. The decoder has the same structure as in the attention-based model (Kool et al., 2019), but with parameters θdecoder(λ) = [WQ(λ),WK(λ),WV (λ),WMHA(λ)] conditioned on the preference λ. It takes the nodes embeddings for all cities as input, and sequentially selects the next node πt with probability pθ(λ)(πt|s,π1:t−1). At time step t, the decoder first constructs a context embedding ĥ(C) = [hπ1 ,hπt−1 ]WQ(λ) from the first selected node hπ1 , and the last selected node hπt−1 . The matrix WQ(λ) ∈ R2d×d projects the concatenated 2d-dimensional vector to a d-dimensional vector. Then we further aggregate the context embedding via a Multi-Head Attention (MHA) (Vaswani et al., 2017) with the embeddings for all cities {h1, · · · ,hn}: h(C) = MHA(Q = ĥ(C),K = {h1, · · · ,hn}WK(λ), V = {h1, · · · ,hn}WV (λ))WMHA(λ), (6) where Q,K, V are the query, key and value for MHA, respectively. WMHA(λ) represents the MHA parameters. The context embedding h(C) contains all information for the instance and the current partial tour at step t. We can calculate the logit for selecting each city with its embedding hj : logitj = { C · tanh(h T (C)hj√ d ) if j ̸= πt′ ∀t′ < t, −∞ otherwise. (7) All already visited cities are masked with −∞ and will not be selected as the next city. The logits of the rest cities are clipped into [−C,C] (C = 10) as in the AM model (Kool et al., 2019). The probability for choosing the j-th city at time step t can be calculated as pθ(λ)(πt = j|s,π1:t−1) = elogitj/ ∑ k e logitk . With this probability, the decoder can construct a feasible tour. One remaining designing issue is how to generate the preference-conditioned parameters θdecoder(λ). Multiplicative interactions (Jayakumar et al., 2020) and hypernetwork (Schmidhuber, 1992; Ha et al., 2017) provide a powerful and efficient way for conditional computation, which is widely used for transfer learning (von Oswald et al., 2020; Ehret et al., 2021; Lin et al., 2020; Navon et al., 2021). We use a simple MLP hypernetwork θdecoder(λ) = MLP(λ|ψ) to generate the decoder parameters conditioned on the preference. The details of our proposed model can be found in Appendix B. 
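As a concrete illustration of the node-selection step in Eq. (7), the sketch below computes the clipped, masked logits and the resulting selection probabilities; the function name and tensor shapes are ours, and the preference-conditioned projections are assumed to have already produced h_context.

import torch

def node_selection_probs(h_context, node_emb, visited_mask, C=10.0):
    # h_context: (d,) context embedding h_(C); node_emb: (n, d) node embeddings;
    # visited_mask: (n,) boolean, True for already visited nodes.
    d = node_emb.shape[-1]
    logits = C * torch.tanh(node_emb @ h_context / d ** 0.5)   # Eq. (7), clipped to [-C, C]
    logits = logits.masked_fill(visited_mask, float("-inf"))    # visited nodes cannot be selected
    return torch.softmax(logits, dim=-1)                        # p_theta(lambda)(pi_t = j | s, pi_1:t-1)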
Algorithm 1 Neural MOCO Training 1: Input: preference distribution Λ, instances distribution S, number of training steps T , number of preferences per iteration K, batch size B, number of tours N 2: Initialize the model parameters θ 3: for t = 1 to T do 4: λk ∼ SamplePreference(Λ) ∀k ∈ {1, · · · ,K} 5: si ∼ SampleInstance(S) ∀i ∈ {1, · · · , B} 6: πjki ∼ SampleTour(pθ(λk)(·|si)) ∀k, i ∀j ∈ {1, · · · , N} 7: b(si|λk)← 1N ∑N j=1 L(π j ki|λk, si) ∀k ∈ {1, · · · ,K} ∀i ∈ {1, · · · , B} 8: ∇J (θ)← 1KBN ∑K k=1 ∑B i=1 ∑N j=1[(L(π j ki|λk, si)− b(si|λk))∇θ(λk) log pθ(λk)(π j ki|si)] 9: θ ← ADAM(θ,∇J (θ)) 10: end for 11: Output: The model parameter θ Instance Augmentation for MOCO. Our proposed model only has a small extra computational and memory overhead to the original single-objective AM solver. We keep our model as simple as possible, making it easy for our approach to use other models and other improvements developed for single-objective NCO. These properties are crucially important for generalizing the NCO to multiobjective problems. In this work, we simply extend the instance augmentation method (Kwon et al., 2020) to MOCO. The details can be found in Appendix B.1. 5 PREFERENCE-CONDITIONED MULTIOBJECTIVE POLICY OPTIMIZATION 5.1 COST FUNCTION Our proposed node selection strategy guarantees that the model can always generate feasible solutions. In this section, we develop an efficient multiobjective policy optimization method to train the model for all the preferences simultaneously. For an MOTSP problem, the objective functions are a vector of m different costs (i.e. lengths) for a tour L(π) = [L1(π), · · · , Lm(π)]. We can define a weighted-Tchebycheff scalarized cost for each preference λ: L(π|λ) = max 1≤i≤m {λi|Li(π)− (z∗i − ε)|}, (8) where z∗i is an ideal cost for the i-th objective. For a given instance s, our goal is to minimize the expected cost for all preferences: J (θ|s) = Eλ∼Λ,π∼pθ(λ)(·|s)L(π|λ), (9) where Λ is the uniform distribution over all valid preferences. To train the model, we repeatedly sample different instances s ∼ S at each iteration. We define the training loss as J (θ) = Es∼SJ (θ|s). 5.2 MULTIOBJECTIVE REINFORCE For a given instance s and a specific preference λ, we use the REINFORCE (Williams, 1992) to estimate the gradient for the preference-conditioned scalar cost: ∇J (θ|λ, s) = Eπ∼pθ(λ)(·|s)[(L(π|λ, s)− b(s|λ))∇θ(λ) log pθ(λ)(π|s)], (10) where b(s|λ) is the baseline of expected cost to reduce the gradient variance. This gradient can be estimated by Monte Carlo sampling. At each update step, we randomly sample K preference {λ1, · · · , λK} ∼ Λ, B instances {s1, · · · , sB} ∼ S, and N different tour {π1i , · · · ,πNi } ∼ pθ(λk)(·|si) for each λk-si combination. The approximated gradient is: ∇J (θ) ≈ 1 KBN K∑ k=1 B∑ i=1 N∑ j=1 [(L(πji |λk, si)− b(si|λk))∇θ(λk) log pθ(λk)(π j i |si)]. (11) We use the shared baseline bshared(si|λk) = 1N ∑N j=1 L(π j ki|λk, si) over N sampled tours for each λk − si combination. The starting node for each tour πjki is chosen in random to force diverse rollouts as proposed in (Kwon et al., 2020). The algorithm is shown in Algorithm 1. 5.3 ACTIVE ADAPTION We also propose a simple yet powerful active adaption approach to further adjust the whole model to approximate the Pareto front for a given test instance in Appendix B.3. The proposed method does not depend on specific instance distribution S, and is suitable for out-of-distribution adaption. 6 EXPERIMENTS Problems and Model Setting. 
We consider MOTSP (Lust & Teghem, 2010a), MOCVRP (Lacomme et al., 2006) and MOKP (Bazgan et al., 2009) in our experimental studies, and use the same model settings for all problems with different task-specific input sizes and mask methods. The main policy model encoder is the Attention Model (Kool et al., 2019) and the hypernetwork is an MLP. We randomly generate 100, 000 problem instances on the fly for each epoch, and train the model for 200 epochs. The optimizer is ADAM with learning rate η = 10−4 and weight decay 10−6. We train our models on a single RTX 2080-Ti GPU, and it costs about 10 minutes for an epoch on MOTSP100. We give detailed model settings, problem formulations, and more experimental results in Appendix BCD. The source code can be found in https://github.com/Xi-L/PMOCO. Baseline. We call our proposed preference-conditioned multiobjective combinatorial optimization as P-MOCO. We compare it with three widely-used evolutionary algorithm frameworks for MOCO: MOGLS (Jaszkiewicz, 2002) is a multiobjective genetic local search algorithm, NSGAII (Deb et al., 2002) is a Pareto dominance-based multiobjective genetic algorithm, and MOEA/D (Zhang & Li, 2007) is a decomposition-based multiobjective evolutionary algorithm. All these algorithm frameworks need problem-specific heuristics to generate and search feasible solutions for different problems. We also compare P-MOCO with two other learning-based methods: DRL-MOA (Li et al., 2020) decomposes a MOCO with different preferences and builds a Pointer Network (Vinyals et al., 2015; Bello et al., 2017) to solve each subproblem, and AM-MOCO is a multi-models variant of our proposed model, which builds Attention Model (Kool et al., 2019) for each subproblem. The Weight-Sum scalarization of MOTSP and MOKP are their respective single-objective counterpart. Therefore, we also compare our method with the approach that uses some state-of-the-art singleobjective solvers for each weight-sum subproblem. Model Information for the learning-based methods is shown in Table 1. Our model supports flexible preference assignment and only has 1.1% total parameters to the multi-model counterpart. Inference and Metrics. We report the results and run time for solving 200 random test instances for each problem, with normally 101 to 105 different trade-offed solutions, and up to 10, 011 solutions for our proposed method. In most cases, we report our model’s zero-shot generalization performance without any search and fine-tune. We use the hypervolume indicator (Zitzler et al., 2003) to measure the performance for each method. For a set P ⊂ Rm in the objective space, we can find a reference point r∗ that dominated by all solutions in P , and define the hypervolume HV(P ) as volume for: S = {r ∈ Rm | ∃ y ∈ P such that y ≺ r ≺ r∗}, (12) where HV(P ) = Vol(S). In general, the larger the hypervolume, the better the solution set tends to be. The ground truth Pareto set always has the largest hypervolume. We report the normalized hypervolume values in [0, 1] with respect to the same r∗ for all the methods, and also the ratios of hypervolume difference to our method. A Wilcoxon rank-sum test with a significance level 1% is conducted to compare the results for each experiment. More details can be found in Appendix D.1. 6.1 RESULTS AND ANALYSIS MOTSP. The results on two and three objective MOTSP are shown in Table 2 and Table 3 respectively. 
MOGLS, NSGAII and MOEA/D all use the 2-opt local search heuristic (Jaszkiewicz, 2002) to search for promising solutions. We also include two weight-sum scalarization baselines with the state-of-the-art LKH solver (Helsgaun, 2000; Tinós et al., 2018) and Google OR tools (Perron & Furnon, 2019). For the bi-objective problems, our proposed method with a single model has similar performance to AM-MOCO on all problems. It achieves the best performance with instance augmentation, which significantly outperforms the other methods but is beaten by the LKH solver. For the three-objective problems, our method can further improve its performance by generating many more trade-off solutions within a reasonable amount of time, which the other methods cannot do. As shown in Figure 2 and Figure 5, our method successfully learns the mapping from preferences to the corresponding solutions and generates a good prediction of the whole Pareto front. Decision makers can easily obtain any preferred trade-off solutions, and this flexibility could be desirable in many real-world applications. More discussion on the connection between the preference and the Pareto solutions for the three-objective TSP can be found in Appendices D.5, D.6 and D.7.
MOCVRP. In this problem, each node has a demand, and we need to construct multiple return routes for a vehicle with a fixed capacity from the same depot to handle all demands. The objectives we consider are to minimize the total tour length over all routes and the tour length of the longest route (the makespan in scheduling) (Lacomme et al., 2006). All the non-learning algorithm frameworks use the problem-specific constructive heuristics and local search method proposed in Lacomme et al. (2006) to search for feasible non-dominated solutions. The results in Table 2 show that our method significantly outperforms the non-learning heuristics in terms of both solution quality and running time. It also outperforms AM-MOCO with 100 individual models, which could be due to the asymmetric objective scales. We provide further analysis in Appendix D.4.
MOKP. The multiobjective 0-1 knapsack problem can be found in many real-world applications (Bazgan et al., 2009). We consider the uni-dimensional problem, where each item has multiple values and one weight. The goal is to select a subset of items that maximizes all obtained values under a weight constraint. The non-learning methods use binary coding with a greedy transformation heuristic to maintain feasibility (Ishibuchi et al., 2014). We also include weight-sum scalarization baselines with dynamic programming (DP) and a strong greedy search based on the value-weight ratio. According to the results in Table 2, our method has the best performance on all problems. The DP method is also outperformed by our method, since the weight-sum scalarization can only find the convex hull of the Pareto front. The Tchebycheff scalarization of MOKP is not a KP problem, while our method is flexible enough to apply the Tchebycheff scalarization to the reward function. We also report the results on the 10-objective MOKP100 and the generalization performance to problems with 500 items in Appendix D.8.
Out-of-Distribution Problems and Active Adaption. We also validate the generalization performance of our method on 6 out-of-distribution (OOD) MOTSP problems from Fonseca et al. (2006). Their ground truth Pareto fronts can be obtained by exhaustive search. The results are shown in Appendix D.2 due to the page limit.
With active adaption, our method can achieve good performance (1% - 1.5% HV gap to the ground truth Pareto fronts) on these OOD problems. 7 CONCLUSION AND FUTURE WORK Conclusion. We have proposed a novel preference-conditioned method to approximate the whole Pareto front for MOCO problems using a single model. It allows decision makers to directly obtain any trade-off solutions without any search procedure. Experiments on different problems have shown that our proposed method significantly outperforms other methods in terms of performance, speed and model efficiency. We believe the proposed method is a principled way for solving MOCO. Future Work. In a sense, our method can be regarded as a learning version of the decompositionbased algorithm (MOEA/D (Zhang & Li, 2007)) dealing with all the possible trade-off preferences. Instead of maintaining a set of finite solutions as in other MOEA/D vaiants (Trivedi et al., 2016), we build a single learning-based model to solve the subproblems for all the preferences simultaneously in a collaborative manner. We believe the single-model-for-all-preference approach is a promising alternative to the current default finite-population-based methods, and it could be an important research direction for multiobjective optimization. Our method can be further improved with other advanced models and efficient multiobjective training procedures. In the future, we will study fundamental issues of multiobjective optimization (e.g., convergence v.s. diversity, exploitation v.s. exploration trade-off) for Pareto set learning methods. Limitation. It is very difficult to give a convergence guarantee for learning-based MOCO, where each preference-based subproblem could be already NP-hard, and the number of Pareto solutions is exponentially large with respect to the input size. See detailed discussion in Appendix A. ACKNOWLEDGMENTS We thank Prof. Hisao Ishibuchi for his valuable comments on an earlier version of this work. This work was supported by the Hong Kong General Research Fund (11208121, CityU-9043148). A PARETO SET LEARNING AND APPROXIMATION ANALYSIS A.1 PARETO SET LEARNING AND CONVERGENCE GUARANTEE In this work, we have proposed a novel neural combinatorial optimization (NCO) method to approximate the whole Pareto set for MOCO problems with a single model. The proposed learning-based MOCO solver can directly generate arbitrary trade-off solutions without extra optimization. We believe it is a principled way to solve MOCO problems. However, the lack of an exact optimality guarantee is a limitation of the proposed method, which is also the case for previous work on single-objective neural combinatorial optimization (Vinyals et al., 2015; Bello et al., 2017; Kool et al., 2019). This limitation is mainly due to the fact that many singleobjective combinatorial optimization (CO) problems are NP-hard, and the size of Pareto sets for a MOCO problem would be exponentially huge, which makes it very difficult to exactly solving the problems (Ehrgott, 2005; Herzel et al., 2021). In addition, the training for the parameterized policy (neural network model) cannot guarantee to fit all training problems perfectly. The generalization ability to problem instances with different patterns (out-of-distribution generalization) is another critical issue that makes it difficult to give an exact optimality guarantee to the proposed learningbased algorithm. 
On the other hand, our proposed model is an efficient mapping from the preferences to the corresponding approximate set of the Pareto optimal solutions. It provides a flexible way for decision makers to obtain an approximate solution with their preferred trade-off directly. The experimental results also show that our proposed method can generate good approximate Pareto sets for three different MOCO problems. In the next subsection, we provide a thorough discussion on the approximation ability of our proposed method. A.2 APPROXIMATION ANALYSIS For a MOCO problem, the number of Pareto solutions could be exponentially large with respect to its input size, which makes the problem intractable (Ehrgott, 2005; Herzel et al., 2021). The preference-based scalarization methods and decomposition methods (Choo & Atkins, 1983; Zhang & Li, 2007) we used provides a principled way to link the Pareto solutions with preference, allowing us to tackle the problem in a systematic manner. In this work, we propose to approximately solve the scalarized subproblem with all preferences via a single model. We first briefly review the weighted scalarization method and its Pareto optimality guarantee as discussed in the main paper. Then we provide further discussion on the approximation analysis. Our proposed method decomposes a MOCO problem into preference-based subproblems with the weighted-Tchebycheff scalarization (Weighted-TCH): min x∈X gtch(x|λ) = min x∈X max 1≤i≤m {λi|fi(x)− (z∗i − ε)|}, (13) where z∗i is the ideal value for objective fi(x) (e.g., the lower bound), and u ∗ i = z ∗ i − ε is a utopia value with small positive component ε. The preference vector λ ∈ Rm satisfies λi ≥ 0 and ∑m i=1 λi = 1, where λi is the preference for the i-th objective. This approach has a desirable property: Lemma 1 (Choo & Atkins (1983)). A feasible solution x ∈ X is Pareto optimal if and only if there is a weight vector λ > 0 such that x is an optimal solution to the problem (13). According to Lemma 1, we can obtain any Pareto solution by solving the Weighted-TCH subproblem with a specific weight. However, the weight for each Pareto solution depends on its objective values, which are not known in advance (Sawaragi et al., 1985; Ehrgott, 2005). The decision-maker still needs to solve multiple subproblems with different preferences to find a desirable solution. To find the whole Pareto set, it needs to solve an exponentially huge number of subproblems. Given a problem instance s, our proposed model provides a single mapping function xλ = h(λ) from any preference λ to its corresponding solution xλ, which is constructed by the preferencebased policy pθ(λ)(x|s). In the ideal case, if all generated solutions xλ are the optimal solutions x∗λ of problem (13) with preference λ, according to Lemma 1, our proposed model can generate the whole Pareto set (all Pareto optimal solutions) for the original MOCO problem. In practice, we are interested in the proposed method’s approximation ability. We find that its performance strongly depends on the approximation ability of the parameterized policy (neural network model) on the single-objective scalarized subproblem. We first give an informal claim on our method’s approximation ability, then provide detailed explanations and discussions. (Informal) Claim 1. If the proposed method can approximately solve the subproblem (13) with any preference λ, it can generate a good approximation to the whole Pareto set for the MOCO problem. 
To support this claim, we follow the traditional ε-Pareto approximate method for MOCO problems (Papadimitriou & Yannakakis, 2000; Herzel et al., 2021). First, an ε-Pareto domination relation between two individual solutions can be defined as: Definition 3 (ε-Pareto Domination). For a MOCO problem and an ε > 0, let xa, xb ∈ X , xa is said to ε-dominate xb (xa ≺ε xb) if fi(xa) ≤ (1 + ε)fi(xb),∀i ∈ {1, · · · ,m}. This definition is a natural generalization from the (1 + ε)-approximation for single-objective optimization. With this concept, an ε-approximate Pareto set (Papadimitriou & Yannakakis, 2000) can be defined as: Definition 4 (ε-Approximate Pareto Set). For an ε > 0, a set Pε ⊂ X is an ε-approximate Pareto set, if for any feasible solution x ∈ X , there exists a solution x′ ∈ Pε such that x′ ≺ε x. In other words, all feasible solutions of the MOCO problem can be almost dominated by some solutions in Pε (Papadimitriou & Yannakakis, 2000). When the Pareto set is intractable and hard to find, the ε-approximate Pareto set would be a reasonable choice to achieve in practice. Each MOCO problem has a unique Pareto set, but can have different ε-approximate Pareto sets. The ability of our proposed method to find an ε-approximate Pareto set strongly depends on its performance on each single-objective preference-based subproblem. Theorem 1. Let x∗λ denotes the optimal solution of the problem (13) with preference λ, if the proposed method can generate an approximate solution xλ ≺ε x∗λ for any preference λ, it is able to generate an ε-approximate Pareto set Pε to the MOCO problem. Proof. Let P be the Pareto set for a MOCO problem, for any xPareto ∈ P , according to Lemma 1, there is a weight vector λ > 0 such that x = x∗λ is the optimal solution for subproblem (13) with a specific preference λ. Therefore, our proposed method can generate an approximated solution xλ ≺ε x∗λ = xPareto. By generating approximate solutions for all xPareto ∈ P , our proposed method is able to generate an ε-approximate Pareto set Pε to the MOCO problem. A.3 LIMITATION Strong Assumption on (Approximately) Solving all Subproblems: The approximation guarantee in Theorem 1 heavily depends on the ability to (approximately) solve each weighted subproblem. Due to the NP-harness, it is indeed non-trivial to give a convergence guarantee to generate ε-dominate solutions for any preference with a small enough ε. This limitation also applies for other end-to-end learning-based (e.g, neural combinatorial optimization) and heuristic-based methods. We are aware that some efforts have been made to combine the learning-based method with dynamic programming to achieve asymptotically optimal solution solution for specific single-objective problem in recent works (Cappart et al., 2021b; Kool et al., 2021). These methods provide a controllable trade-off between the solution quality and the computational cost for solving NP-hard problems. However, their generalization to the multi-objective problem is not straightforward, since the scalarized subproblem for each preference is not necessary the same as its single-objective counterpart. For example, a Tchebycheff scalarized MOTSP is not a single-objective TSP as discussed at the end of Section 3.2. In addition, according to Bengio et al. (2020), these methods belong to the class of learning alongside the algorithms, while our proposed approach is learning to directly produce the solutions (neural combinatorial optimization). 
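As a concrete reference for Definitions 3 and 4 above, the sketch below checks ε-domination between objective vectors and verifies whether a candidate set is an ε-approximate Pareto set for a finite list of feasible objective vectors (minimization assumed). It only illustrates the definitions; the brute-force check is of course impractical when the feasible set is exponentially large, and the function names are illustrative.

```python
import numpy as np

def eps_dominates(fa, fb, eps):
    """Definition 3: x_a eps-dominates x_b if f_i(x_a) <= (1 + eps) * f_i(x_b) for all i."""
    return bool(np.all(fa <= (1.0 + eps) * fb))

def is_eps_approximate_pareto_set(candidates, feasible, eps):
    """Definition 4: every feasible objective vector is eps-dominated by some candidate."""
    return all(any(eps_dominates(c, y, eps) for c in candidates) for y in feasible)

P_eps = np.array([[1.0, 4.0], [2.0, 2.0], [4.0, 1.0]])
feasible = np.array([[1.05, 4.1], [2.1, 2.0], [4.0, 1.02], [3.0, 3.0]])
print(is_eps_approximate_pareto_set(P_eps, feasible, eps=0.1))   # True
```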
Therefore, the idea for learning enhanced multiobjective combinatorial algorithm could be an important research topic in future, but out of the scope for the current work. Dense Approximation for the Whole Pareto Set: Another concern would be the required number of solutions in the ε-approximate Pareto set Pε. If the required number is exponential to the input size, the approximation itself is also intractable. In their seminal work, Papadimitriou & Yannakakis (2000) establish a promising result: Theorem 2 (Papadimitriou & Yannakakis (2000)). For any multiobjective optimization problem and any ε, there is an ε-approximate Pareto set Pε of which the size is polynomial in the number of solutions and 1ε (but exponential in the number of objectives). However, the existence of such a set still does not mean that it can be easily found (Papadimitriou & Yannakakis, 2000; Herzel et al., 2021). The computability (whether Pε can be constructed in polynomial time) would be hard to justify for a real-world problem. For a new unseen problem instance in practice, our proposed method might still need to generate an exponentially large number of solutions to construct an ε-approximate Pareto set Pε. It is also unclear how to properly select a set of preferences in advance. Many research efforts have been made on developing approximation methods for solving MOCO problems in the past decades (Herzel et al., 2021; Hansen, 1980; Papadimitriou & Yannakakis, 2000; Vassilvitskii & Yannakakis, 2005; Koltun & Papadimitriou, 2005; Bazgan et al., 2017). In future work, it is important to better leverage the current advanced approximation strategies to design more efficient preference-based methods. In the learning-based optimization scenario we consider, it is also possible to learn the suitable approximation method and/or preference distribution directly from the data (problem instances). B DETAILS ON THE PROPOSED MODEL B.1 MODEL SETTING We use the same model for all MOCO problems while tuning the input size and mask method for each problem. Table 4 shows the number of parameters of a standard single-objective attention model (Kool et al., 2019) and our proposed preference-based multiobjective attention model. Our model supports flexible preference assignment at the inference time with a small overhead, while the other neural MOCO methods all require training multiple AM models for different preferences. We build the single-preference attention models as well as our model following the implementation in Kwon et al. (2020). Attention Encoder. The encoder we use is the standard attention encoder as in Kool et al. (2019), and it is shared by all preferences. The encoder has 6 attention layers, and 128-dimensional node embedding for the input nodes. Each attention layer has a multi-head attention (MHA) with eight 16-dimensional heads, and a fully connected layer (FC) with one 512-dimension hidden sublayer. The encoder also includes skip-connection and batch normalization for each attention layer. We use the same model for all MOCO problems (MOTSP, MOCVRP, MOKP) but with different input dimensions for each problem, which will be introduced in the next section. Preference-Conditioned Decoder. The decoder’s main model structure is the same as the AM decoder (Kool et al., 2019). It has one multi-head attention layer with eight 16-dimensional heads similar to the encoder, but without skip-connection and batch normalization. 
The decoder uses a single 128-dimensional attention head to calculate the probabilities of selecting different nodes at each step. Different problems have different masking methods for probability calculation. We use a simple MLP model to generate the preference-conditioned parameters for the decoder. For all MOCO problems, the MLP model has two 128-dimensional hidden layers with ReLu activation. The input is an m-dimensional preference vector λ which satisfies λi ≥ 0 and ∑m i=1 λi = 1, where m is the number of objectives and λi is the preference for the i-th objective. We adopt the parameter compression approach in Ha et al. (2017) to control the model size. The MLP model first generates a hidden embedding e(λ) = MLP(λ|ψ), then maps the hidden embedding to the decoder parameters via linear projection θdecoder = We(λ) + b. The learnable parameters are ψ for the MLP model MLP(λ|ψ) and the parameter matrices W and b for the decoder. Training Procedure. For all problems, we train our proposed model for 200 epochs, with 100, 000 problem instances randomly generated on the fly at each epoch. At each iteration step, we need to sample K preferences, B problem instances, and N tours to calculate the policy gradient. We set K×B = 64 to make the batch of 64 instances for training a single AM model, and letN equal to the problem size (e.g., the number of nodes) as in Kwon et al. (2020). We find the model performance is equally good for setting K = 1, 2 and 4, and keep using K = 1 for all problems. In other words, we randomly generate a preference λ that satisfies λi ≥ 0 and ∑m i=1 λi = 1 at each training step. For the AM-MOCO baseline, we adapt the transfer training approach in Li et al. (2020) to train multiple AM models for different preferences. We first train a single AM model with a single preference on one objective from scratch with 200 epochs, then transfer its parameter to the model for neighbor subproblem with similar preference, and fine-tune the new model with 5 epochs. With sequentially transfer and fine-tune, we can obtain a set of trained models for different preferences. In most experiments, we set the number of preferences as 101. Therefore, we need to build 101 AM models with total 700 training epochs. Instance Augmentation for MOCO. Due to the design choice of minimal essential change (e.g., the preference-conditioned decoder), our method can also enjoy the current improvements that were originally proposed for the single objective NCO. Here, we generalize the instance augmentation method proposed in Kwon et al. (2020) to the MOCO version. The key idea of instance augmentation for NCO is to find multiple efficient transformations for the original problem such that they share the same optimal solution. Then, we can use an NCO method to solve all problems and select the best solution among all obtained (potentially different) solutions. In this way, we have a more robust result similar to the test-time augmentation for computer vision (Szegedy et al., 2016). For the single-objective euclidean TSP and CVRP, there is a set of straightforward transformations, which simply flips or rotates the coordinate for all the 2D locations in a problem instance (Kwon et al., 2020). For a location (x, y), there is eight different transformation, namely, {(x, y), (y, x), (x, 1−y), (y, 1−x), (1−x, y), (1−y, x), (1−x, 1−y), (1−y, 1−x)}. For an m-objective euclidean MOTSP problem, the concrete location representations are independent for each objective. 
Therefore, we can independently apply different transformations for each objective. Consider the above eight different transformations for each objective, we can have 8m different problem transformations for an MOTSP instance. We have fixed 8 transformations for MOCVRP since it only has one 2D coordinate, and no transformation for MOKP. The details for each problem can be found in the next section. B.2 TRAINING EFFICIENCY We use the same amount of samples to train our proposed preference-based model as the other single-objective solvers need (Kool et al., 2019; Kwon et al., 2020). Indeed, our proposed model requires significantly fewer samples and training epochs, compared to the other MOCO methods that need to build multiple models for different preferences. We compare our model’s performance on one of the objective (e.g., with preference (1, 0)) with the other SOTA single-objective solver and learning-based solver, the results are shown in Table 5. The results of Concorde/LKH/OR Tools are from Kwon et al. (2020), and we run the learning-based solver by ourselves. We report the average performance over 10, 000 test instances. AM is the single-objective solver (one model in AM-MOCO), P-MOCO (single preference) is our proposed model but only training on a single fixed preference (1, 0), and P-MOCO (all preferences) is our proposed model with the reported result on the preference (1, 0). With the same amount of training samples, our model has similar single-objective performance with learning-based single-objective solver, while it can additionally approximate the whole Pareto front. The learning-based solver’s performance can be further improved by sampling or active search. These results indicate that we can use a single encoder to efficiently learn a shared representation for all trade-offs among different objectives, and there is a positive knowledge transfer among preferences during the learning procedure. In addition, it also confirms the assumption that similar preferences should have similar corresponding (approximate) Pareto solutions for the multiobjective problems we consider in this paper. These findings could be useful to design more powerful learning-based models for MOCO in the future. B.3 ACTIVE ADAPTION After end-to-end training, our proposed method can directly generate different trade-off solutions to a given problem without further search procedure. However, similar to single-objective neural combinatorial optimization, this approach could still have a gap to the Pareto front, especially for problems out of the training distribution S (e.g., with different sizes and patterns) (Lisicki et al., 2020). Iterative search methods, such as sampling and beam search, can further improve the performance for a single solution or single preference (Veličković & Blundell, 2021). However, these approaches can not find a better approximation to the whole Pareto set for a MOCO problem. 
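Returning to the instance augmentation described in Appendix B.1, the eight coordinate transformations can be applied independently per objective, which yields 8^m augmented instances that share the same optimal tours for an m-objective Euclidean MOTSP instance. A minimal sketch, assuming coordinates normalized to [0, 1]^2 and an instance stored as one (n, 2) array per objective (the function names are illustrative, not the released code):

```python
import itertools
import numpy as np

def coord_transforms(xy):
    """The eight flip/swap transformations of 2D coordinates in [0, 1]^2; xy has shape (n, 2)."""
    x, y = xy[:, 0], xy[:, 1]
    return [np.stack(t, axis=1) for t in
            [(x, y), (y, x), (x, 1 - y), (y, 1 - x),
             (1 - x, y), (1 - y, x), (1 - x, 1 - y), (1 - y, 1 - x)]]

def augment_motsp_instance(instance):
    """instance: list of m arrays of shape (n, 2), one per objective.
    Returns all 8^m transformed instances, which share the same optimal tours."""
    return [list(combo) for combo in itertools.product(*[coord_transforms(c) for c in instance])]

inst = [np.random.rand(20, 2), np.random.rand(20, 2)]    # a bi-objective MOTSP20 instance
print(len(augment_motsp_instance(inst)))                  # 64 = 8^2 augmented instances
```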
Algorithm 2 Neural MOCO Active Adaption 1: Input: model parameter θ, instance s, preference distribution Λ, number of adaption steps T , number of preferences per iteration K, number of tours N 2: for t = 1 to T do 3: λk ∼ SamplePreference(Λ) ∀k ∈ {1, · · · ,K} 4: πjk ∼ SampleTour(pθ(λk)(·|s)) ∀k ∈ {1, · · · ,K} ∀j ∈ {1, · · · , N} 5: b(s|λk)← 1N ∑N j=1 L(π j k|λk, s) ∀k ∈ {1, · · · ,K} 6: ∇J (θ)← 1KN ∑K k=1 ∑N j=1[(L(π j k|λk, s)− b(s|λk))∇θ(λk) log pθ(λk)(π j k|s)] 7: θ ← ADAM(θ,∇J (θ)) 8: end for 9: Output: The model parameter θ We propose a simple yet powerful active adaption approach as shown in Algorithm 2. It iteratively adapts the model parameter θ(λ) to a given instance s (or a batch of instances) with all preferences from the distribution Λ rather than searching for a specific solution. This method is similar to the active search in Bello et al. (2017) which actively refines the single-objective model for efficient candidate solutions searching. Our approach focuses on adapting the whole model for a better Pareto front approximation. Since this method is distribution-agnostic (not depend on specific instance distribution S), it is suitable for out-of-distribution adaption. C DETAILS OF THE MOCO PROBLEMS This section introduces the detailed problem formulation for the MOTSP, MOCVRP and MOKP we used in this work. We also provide the model configuration (e.g., input size, masks) for each problem. C.1 MOTSP We consider the Euclidean multiobjective traveling salesman problem (Euclidean MOTSP), which is widely used in the MOCO community (Lust & Teghem, 2010b; Florios & Mavrotas, 2014). Its single objective counterpart, 2D Euclidean TSP, has also been studied in single-objective neural combinatorial optimization (NCO) (Vinyals et al., 2015; Bello et al., 2017; Kool et al., 2019). A general m-objective MOTSP instance s with n nodes has m n × n cost matrices {Ci = (cijk), i = 1, · · · ,m} for m different costs. The problem is to find a tour (cyclic permutation π) to minimize all the costs: minL(π|s) = min(L1(π|s), L2(π|s), · · · , Lm(π|s)), where Li(π|s) = ciπ(n)π(1) + n−1∑ j=1 ciπ(j)π(j+1). (14) In a Euclidean MOTSP, the cost information is stored in the nodes rather than the edges. The j-th node has a 2m-dimensional vector [x1j ,x 2 j , · · · ,xmj ] where xij ∈ R2 is a 2D coordinate for the i-th objective. The i-th cost cijk = ||xij − xik||2 is the Euclidean distance for moving from node j to k. If we only have one objective m = 1, it reduces to the single-objective 2D Euclidean TSP: min π L1(π|s) = ||xπ(n) − xπ(1)||2 + n−1∑ j=1 ||xπ(i) − xπ(i+1)||2. (15) The single-objective TSP is already NP-hard, so does the MOTSP. In addition, the Pareto set of MOTSP has an exponential cardinality with respect to its input size (e.g., number of nodes), so it is intractable even for the 2-objective case (Ehrgott & Gandibleux, 2003). Problem Instance. Similar to the previous work on single-objective NCO (Lust & Teghem, 2010b; Florios & Mavrotas, 2014), we randomly sample all n nodes with uniform distribution on the 2mdimensional unit hyper-square (e.g., [0, 1]2m) for all problem instances. Model Details. In m-objective MOTSP, each node has a 2m-dimensional vector to store all cost information, so the input size is 2m for the encoder. To calculate the probability for selecting the next node, the decoder needs to mask all already visited nodes as unavailable. We have a valid tour when all node is selected (we assume the end node will connect to the start node). 
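For reference, evaluating the cost vector of Eq. (14) for a Euclidean MOTSP tour is straightforward. The sketch below assumes the instance is stored as an (m, n, 2) array of per-objective coordinates and that the tour is a permutation of the n node indices; it is an illustration rather than the released evaluation code.

```python
import numpy as np

def motsp_costs(tour, coords):
    """Eq. (14): the i-th cost is the Euclidean length of the cyclic tour measured
    in the i-th 2D coordinate system. coords: (m, n, 2); tour: permutation of n indices."""
    ordered = coords[:, tour, :]                           # reorder nodes along the tour
    diffs = ordered - np.roll(ordered, shift=-1, axis=1)   # include the closing edge pi(n) -> pi(1)
    return np.linalg.norm(diffs, axis=2).sum(axis=1)       # (m,) vector [L_1(pi), ..., L_m(pi)]

coords = np.random.rand(2, 20, 2)                           # a bi-objective MOTSP20 instance
tour = np.random.permutation(20)
print(motsp_costs(tour, coords))
```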
C.2 MOCVRP The vehicle routing problem (VRP) is a classical generalization of TSP, which has been studied for several decades. This work studies the capacitated vehicle routing problem (CVRP). In this problem, in addition to the location, each node (city) has a demand δi needed to be satisfied. There is an extra depot node and a vehicle with a fixed capacity D > δi,∀i to handle all the demands. The vehicle will always start from the depot node, then goes to different cities to satisfy multiple demands ∑ δi ≤ D , and turns back to the depot node. A solution to this problem is a set of routes that satisfies the demands for all cities. In the multiobjective problem, we consider two objectives to optimize. The first one is the total tour length as in the single-objective CVRP, and the other one is the tour length for the longest route (which is also called makespan in scheduling theory). This problem has been studied in the MOCO community (Lacomme et al., 2006). Problem Instance. Similar to the TSP problem, the location of n nodes are uniformly sampled from the unit square. For the demand, similar to the previous work on the single-objective counterpart (Kool et al., 2019; Kwon et al., 2020), we randomly sample discrete δi from the set {1, · · · , 9}. For problem with size n = 20, 50, 100, we set the capacity as D20 = 30, D50 = 40 and D100 = 50, respectively. Without loss of generality, we normalize the demands δ̂i = δiD and capacity D̂ = DD = 1 as in the previous work (Kool et al., 2019; Kwon et al., 2020). Split delivery is not allowed in this problem. Model Details. In the MOCVRP, the depot node has a 2-dimensional location vector, and the other nodes all have 3-dimensional vectors to store their locations and demands. We use different parameter matrices to project the nodes into the input embedding with the same dimension dh = 128. For node selection, the model records the current capacity of the vehicle and the rest demands for all nodes. If a node has been already visited or has demand larger than the vehicle’s current capacity, it will be masked as unavailable for the vehicle to visit. If no node is available to visit, the vehicle will go back to the depot. Once all nodes have 0 demands, the node selection is finished and we have a valid solution to the problem. C.3 MOKP Knapsack problem (KP) is also a widely studied combinatorial optimization problem. In this work, we consider the 0-1 multiobjective knapsack problem (MOKP) with m objectives and n items: max f(x) = max(f1(x), f2(x), · · · , fm(x)), where fi(x) = ∑n j=1 v i jxj , subject to ∑n j=1 wjxj ≤W, xj ∈ {0, 1}, (16) where each item has a weight wj and m different values {vij , i = 1, · · · ,m}. The problem (e.g., knapsack) has a maximum weight capacity W , and the goal is to select a set of items within the weight capacity to maximize the sum values for each objective. To make this problem nontrivial, we further assume all values vij ,∀i, j, weights wj∀j and the total capacity are non-negative real value. The total weight of all items is larger than the capacity ∑ wi > W , while each single weight is smaller than the capacity wi < W, ∀i = 1, · · · , n. The single-objective knapsack problem is NP-hard, so does the MOKP problem (Ehrgott & Gandibleux, 2003). Problem Instance. We randomly generate the values and weight for each item both uniformly in [0, 1]. We consider problems with n = 50, 100, 200 nodes, and the weight capacities are W50 = 12.5,W100 =W200 = 25 as in the previous work (Bello et al., 2017; Kwon et al., 2020). 
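A small sketch of the MOKP instance generation and objective evaluation just described (Eq. 16): values and weights are sampled uniformly from [0, 1] and a 0-1 selection vector is feasible when its total weight stays within the capacity. The function names are illustrative assumptions.

```python
import numpy as np

def random_mokp_instance(n_items, m, capacity, seed=0):
    """Sample an MOKP instance as described above: m values and one weight per item."""
    rng = np.random.default_rng(seed)
    return rng.random((n_items, m)), rng.random(n_items), capacity

def mokp_objectives(x, values, weights, capacity):
    """Eq. (16): f_i(x) = sum_j v_ij * x_j, feasible iff sum_j w_j * x_j <= W (x is 0-1)."""
    return values.T @ x, bool(weights @ x <= capacity)

values, weights, W = random_mokp_instance(n_items=50, m=2, capacity=12.5)
x = np.zeros(50)
x[:10] = 1.0                                   # select the first ten items
objs, feasible = mokp_objectives(x, values, weights, W)
print(objs, feasible)
```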
Model Details. In an m-objective MOKP, each item has m values and 1 weight, so the input dimension is 3 for the encoder. For node selection at each step, we mask all already selected nodes and nodes with weights larger than the remained capacity as unavailable. We terminate the selection when all nodes are labeled as unavailable. D ADDITIONAL EXPERIMENTAL RESULTS D.1 HYPERVOLUME INDICATOR To solve a MOCO problem, the result for each method is a set of approximate Pareto solutions. Since the ground truth Pareto set is usually unknown, we use the hypervolume (HV) indicator (Zitzler et al., 2007) to numerically compare the performance for each method. The hypervolume indicator is widely used in the MOCO community for algorithm comparison. The hypervolume of a set is the volume in the objective space it dominates. For a set P ⊂ Rm in the objective space, we can find a reference point r∗ that dominated by all solutions in P , and define the hypervolume HV(P ) as the volume of the set: S = {r ∈ Rm | ∃y ∈ P such that y ≺ r ≺ r∗}, (17) where HV(P ) = Vol(S). An illustration example is shown in Figure 4. The grey area is the set S dominated by the solutions in set P = {p1, p2, p3, p4} with the reference point r∗. In this 2- dimensional case, the hypervolume HV(P ) is the size of the grey area. The hypervolume indicator has two important advantages for measuring the approximate set quality with respect to Pareto optimality (Zitzler et al., 2007). First, if an approximate set A dominates another approximate setB, it will have a strictly better hypervolume HV(A) > HV(B). In addition, if an approximate set C contains all Pareto optimal solutions, it is guaranteed to have the maximum hypervolume value. In comparison, an approximate set has better performance if it has a larger hypervolume. With different objective scales, the hypervolume value will vary significantly among different problems. We report the normalized hypervolume values Ĥ(P ) = HV(P )/ ∏m i r ∗ i for all methods and also their performance gaps to our method. For each experiment, all methods share the same reference point r∗, which contains the largest value achieved for each objective. Since all problems we consider have positive objective values, we have 0 ≤ Ĥ(P ) ≤ 1 for all solution sets. The ground truth Pareto set P ∗ usually has Ĥ(P ∗) < 1, unless the zero vector 0 ∈ Rm is feasible and in the Pareto set. D.2 OUT-OF-DISTRIBUTION PROBLEM WITH EXACT PARETO FRONT We conduct experiments on 6 two-objective MOTSP100 instance (L1-L6) in Florios & Mavrotas (2014) of which the exact Pareto fronts are available. In these problems, the objective functions have different ranges, and the cities are not uniformly located, so they are out of our method’s training distribution. The results can be found in the Table 6. In addition to hypervolume, we also report the Inverted Generational Distance (IGD) (Fonseca et al., 2006) to measure the average Euclidean distance between the set of approximated Pareto solutions to the exact Pareto front. A smaller IGD value means the approximated set is closer to the exact Pareto front. According to the results, our method, with the instance augmentation and/or active search (10 min budget), can have a good performance on these out-of-distribution (OOD) instances with a 1%− 1.5% hypervolume gap. The proposed method also significantly outperforms the weight-sum OR tools baseline. There is still a gap to the strong weight-sum LKH baseline. 
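For reference, the two metrics reported in this appendix, the (normalized) hypervolume of Eq. (17) and the IGD used for instances with known Pareto fronts, can be computed as in the sketch below. The hypervolume routine is a simple sweep that only covers the two-objective minimization case; it is an illustration under these assumptions, not the evaluation code used for the reported numbers.

```python
import numpy as np

def hypervolume_2d(points, ref):
    """Area dominated by a 2D (minimization) solution set and bounded by the reference point."""
    pts = np.array(sorted({tuple(p) for p in points}))       # sweep in ascending first objective
    hv, prev_y = 0.0, ref[1]
    for x, y in pts:
        if y < prev_y and x < ref[0]:                         # skip dominated points
            hv += (ref[0] - x) * (prev_y - y)
            prev_y = y
    return hv

def igd(approx_front, exact_front):
    """Average distance from each exact Pareto point to its nearest approximate solution."""
    A, E = np.asarray(approx_front), np.asarray(exact_front)
    dists = np.linalg.norm(E[:, None, :] - A[None, :, :], axis=-1)
    return dists.min(axis=1).mean()

P = [[1.0, 4.0], [2.0, 2.0], [4.0, 1.0]]
ref = np.array([5.0, 5.0])
print(hypervolume_2d(P, ref) / np.prod(ref))                  # normalized HV in [0, 1]
print(igd(P, [[0.9, 4.0], [2.0, 1.9], [3.9, 1.0]]))
```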
As discussed in the paper, robust OOD generalization is an important research direction for the learning-based solver. D.3 FLEXIBLE PREFERENCE-BASED APPROXIMATION With our model, it is flexible to generate different number of solutions to approximate the Pareto front. We present an example on the three-objective TSP in Figure 5. We use the structured weight assignment approach from Das & Dennis (1998) to give the sets of weights for different instances. This method can generate n = Cm+p−1p evenly distributed weights with an identical distance to their nearest neighbor on the unit simplex (e.g., ∑m i=1 λi = 1 with λi ≥ 0,∀i), where m is the number of objectives and p is a parameter to control the number of weights. For the three objective TSP problems (m = 3), we assign p = 13, 44 and 140 to generate n = 105, 1035 and 10011 weights respectively. We also show the corresponding generated solutions for MOTSP instances with 20, 50 and 100 cities. According to the results in Figure 5, our model can generate well-distributed solutions with a small number of preferences, and generate a dense approximation with more preferences. The ability to generate a dense approximation to the whole Pareto set also allows the decision-maker to generate arbitrary preferred solutions on the approximate front. D.4 PREFERENCE-SOLUTION CONNECTION We further analyze the connection between the preference and its corresponding solution on the uniform and non-uniform Pareto front. Figure 6 shows the connections in our model with different numbers of preferences for the MOTSP100 instance. Since the two objectives (costs) in MOTSP have the same scale, this problem has a uniform connection between the preferences and the (approximate) Pareto front. By increasing the number of preferences, we have three sparse to dense generated Pareto front approximations. We are more interested in MOCVRP, which has a non-uniform Pareto front. In this problem, we consider two different objectives to optimize, namely, the total tour length (objective 1) and the tour length for the longest route (objective 2). These two objectives are in quite different scales, where the first objective is significantly larger than the second one. In Figure 7, we show different connections for the MOCVRP100 instance. For MA-MOCO, we report the connections for all 101 models. For our proposed model, we report the connections with different numbers of uniform preferences. In this problem, 101 models or our model with 101 uniform preferences are not enough to generate a dense approximate Pareto front. The obtained solutions are biased to the area that objective 1 has a much better relative performance. By increasing the number of preferences, our proposed method can generate more solutions that have relatively better performance for objective 2, which leads to a better Pareto front approximation with higher hypervolume. In this work, we always use a straightforward uniform sampling method to select the preferences. It is interesting to design a learning-based approach to select the preferences for a given problem instance. Preference adjustment and model adaption with awareness on the shape of Pareto front are also worthy to investigate. We left them to the future work. In the MOCVRP instance, we also find the 101-model MA-MOCO has a worse performance compared to our method with 101 preferences. The reason would be the mismatch between the uniform transfer training and the non-uniform Pareto front. 
Increasing the training steps for fine-tuning each model might fix this issue, but will lead to an even larger computational overhead, given the current training already require 700 epochs. The fixed preferences assignment is another issue for MAMOCO. It requires a fixed set of preferences for each model at the start of the training procedure when the decision makers might have no knowledge on the problem. When the training procedure is done, it dose not allow any preference adjustment without retraining the models. D.5 CONNECTION BETWEEN PREFERENCES AND SOLUTIONS In the previous sections, we use the weighted Tchebycheff aggregation to connect the preference to its corresponding solution for two-objective optimization problems: gtch(x|λ) = max 1≤i≤m {λi|fi(x)− z∗i |}, (18) where z∗i < minx∈X fi(x) is an ideal value for fi(x). There are also many other aggregation function we can use to build the connection. For example, a modified version of weighted Tchebycheff aggregation can be defiend as: gmtch(x|λ) = max 1≤i≤m { 1 λi |fi(x)− z∗i |}, (19) where the only difference is the weight vector 1λi . The penalty-based boundary intersection (PBI) is another widely-used aggregation function for decomposition-based multiobjective optimization (Zhang & Li, 2007): gpbi(x|λ) = d1 + θd2, d1 = |(F (x)− z∗)Tλ|/||λ||, d2 = ||F (x)− z∗ − d1 λ ||λ|| ||, (20) where θ is the penalty parameter, F (x) = (f1(x), . . . , fm(x)) and z∗ = (z∗i , . . . , z ∗ i ) are the objective vector and ideal vector respectively. An inverted version of PBI (IPBI) aggregation function (Sato, 2014) can be defined as: gipbi(x|λ) = −d1 + θd2, d1 = |(zN − F (x))Tλ|/||λ||, d2 = ||zN − F (x)− d1 λ ||λ|| ||, (21) where zN is the nadir vector that contain each objective’s worst value among all Pareto solutions. For a two-objective optimization problem, when we can find a dense set of corresponding solutions to cover the Pareto front for each aggregation function, their performance could be similar to each other. However, different aggregation functions would have quite different performances on the problems with three or more objective functions (called many-objective optimization problems). The performances will heavily depend on the shape of Pareto front (Ishibuchi et al., 2016), especially with a limited number of approximate solutions. We compare the performance of our proposed method with different aggregation functions on MOTSP50 with 105, 1035 and 10011 preferences respectively in Fig. 8. According to the results, the IPBI method can generate the most uniformly distributed solutions for the MOTSP problem with an inverted triangular shape of Pareto front, of which the shape is similar to the weight vector distribution (e.g., see Fig 5). This observation is consistent with the findings and analysis in Ishibuchi et al. (2016). According to these results, we use the Tchebycheff aggregation for all two-objective optimization problems and IPBI aggregation for all problems with more than two objective functions in this work. Since the shape of Pareto front tends to be irregular for real-world applications (Ishibuchi et al., 2019), how to properly choose the aggregation function and assign the preference distribution could be an important future work. D.6 THREE-OBJECTIVE MOTSP WITH ASYMMETRIC PARETO FRONT In this subsection, we conduct experiments on the three-objective MOTSP100 instances with asymmetric Pareto fronts. 
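As a concrete reference for the aggregation functions compared in Appendix D.5 above, the PBI and inverted PBI scalarizations of Eqs. (20) and (21) can be computed as follows; the penalty parameter θ = 5 is an illustrative choice, not necessarily the value used in the experiments.

```python
import numpy as np

def pbi(f, pref, ideal, theta=5.0):
    """Eq. (20): d1 projects F(x) - z* onto the preference direction, d2 is the distance to it."""
    diff = f - ideal
    direction = pref / np.linalg.norm(pref)
    d1 = abs(diff @ direction)
    d2 = np.linalg.norm(diff - d1 * direction)
    return d1 + theta * d2

def ipbi(f, pref, nadir, theta=5.0):
    """Eq. (21): inverted PBI, measured from the nadir point and rewarding a large d1."""
    diff = nadir - f
    direction = pref / np.linalg.norm(pref)
    d1 = abs(diff @ direction)
    d2 = np.linalg.norm(diff - d1 * direction)
    return -d1 + theta * d2

f = np.array([3.0, 4.0])
pref = np.array([0.5, 0.5])
print(pbi(f, pref, ideal=np.zeros(2)), ipbi(f, pref, nadir=np.array([10.0, 10.0])))
```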
The definition of irregular MOTSP instance is almost the same as in Section C.1, except the coordinates for the three objectives and randomly sampled from [0, 1]2, [0, 0.5]2 and [0, 0.1]2 respectively, rather than uniformly from [0, 1]6. In this way, the objective values for the MOTSP instance will be in quite different scales, thus leading to an irregular Pareto front (the axes in Figure 9 are in different scales). A well-known drawback of the scalarization-based approach is that it cannot evenly explore the irregular Pareto front with a set of uniform weights, which can also be observed in Figure 9(a)-(d). Our proposed approach allows the user to generate arbitrary trade-off Pareto solutions on the inference time, therefore they can directly generate a dense approximation and then select the preferred solutions as in Figure 9(d). This flexibility can partially address the unevenly distributed issues caused by a (small) set of fixed weights in the traditional scalarization-based approach. If we know the approximate range of different objectives in advance, we can first normalize them into [0, 1] to encourage a more symmetric Pareto front. Otherwise, on the inference time, we can use a (prior knowledge-based) biased and non-uniform weight assignment to generate uniformly distributed solutions. In Figure 9(e)-(h), we first multiple the three-dimensional weights by (1, 2, 10) and then normalize them back to [0, 1]3 which leads to a set of non-uniform weights as shown in Figure 9(e). With this weight assignment, we have a a set of more evenly distributed Pareto solutions as shown in Figure 9(f)-(h). D.7 PREFERENCE-BASED INFERENCE Even without any prior knowledge, our proposed approach allows the user to adaptively adjust the weights in real-time to search for the most suitable solutions in their preferred region(s). Some examples of selected weights and their corresponding solutions are shown in Figure 10 for symmetric Pareto front and Figure 11 for asymmetric Pareto front. If we have prior knowledge of the preference (e.g., the decision-makers will only care about a specific region of the Pareto front), we can modify the training preference distribution Λ accordingly to enhance the training efficiency. For the problem with a truly irregular Pareto front, it is also possible to adaptively adjust the given weights to make them evenly explore the Pareto front during the learning/searching process. One potential direction could be to consider the connection between scalarization and hypervolume maximization as in Zhang & Golovin (2020). We believe this could be an important research topic for the learning-based scalarization approach in future work. D.8 PROBLEM WITH MORE OBJECTIVES Finally, we test the performance of our proposed method on the 10-objective knapsack problems. We train a new model for the 10 objective MOKP with 100 items with uniform 10-dimension preferences. The obtained value path plots on the 10-objective MOKP100 are shown in Figure 12. For problems with more objectives, we need a large number of solutions to approximate the Pareto set. Training a large number of neural network models would have a huge computational and storage overhead, which is also not desirable in practice. Therefore, we do not compare with the AMMOCO and MOA-DRL methods on this problem. For inference, to approximate the Pareto set, we use a set of 715 fixed preferences following the weight assignment approach from (Das & Dennis, 1998) (with m = 10, p = 4, hence n = C10+4−14 = 715). 
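The structured weight assignment of Das & Dennis (1998) used here (and in Appendix D.3) can be generated as in the sketch below: the n = C(m+p-1, p) weights are the compositions of p into m non-negative parts divided by p, so m = 10, p = 4 gives the 715 preferences mentioned above, and m = 3, p = 13 gives the 105 used for the three-objective TSP. The function name is illustrative.

```python
from itertools import combinations
import numpy as np

def das_dennis_weights(m, p):
    """All C(m+p-1, p) evenly spaced weight vectors on the unit simplex (Das & Dennis, 1998)."""
    weights = []
    for dividers in combinations(range(p + m - 1), m - 1):    # stars-and-bars divider positions
        prev, parts = -1, []
        for d in list(dividers) + [p + m - 1]:
            parts.append(d - prev - 1)
            prev = d
        weights.append(np.array(parts) / p)
    return np.array(weights)

W = das_dennis_weights(m=10, p=4)
print(W.shape, np.allclose(W.sum(axis=1), 1.0))     # (715, 10) True
print(das_dennis_weights(m=3, p=13).shape)           # (105, 3)
```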
The model generates different trade-off solution for each preference, so there are 715 different value paths (lines) on each plot. In MOKP, we want to maximize the values for all objectives under the capacity limitation. A set of good approximate solutions should have relatively high overall values. According to the results, our proposed method has the best performance. We also test the performance of our method on a larger problem with 500 items. The results shown in Figure 13 confirm that our trained model generalizes well to problems with a larger size.
1. What is the focus of the paper in terms of the problem it addresses?
2. What are the strengths of the proposed approach, particularly in its ability to predict Pareto optimal solutions?
3. What are the weaknesses of the paper, especially regarding the theoretical analysis?
4. Do you have any concerns about the use of deep reinforcement learning in solving multi-objective combinatorial optimization problems?
5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper
Review
Summary Of The Paper
This paper presents a learning approach for multi-objective combinatorial optimization (MOCO), a challenging problem that has not been well studied by machine learning researchers. The proposed model predicts approximate Pareto optimal solutions for various preferences with a single model, via attention networks and a so-called "hypernetwork". Experimental results on the multi-objective versions of TSP, VRP, and KP show the effectiveness of the proposed approach.
Review
Strengths
- Multi-objective combinatorial optimization is a family of important problems that is really challenging to solve with traditional methods. The effort to introduce deep reinforcement learning in this direction is appealing and novel. This paper may inspire more ML researchers in this interesting direction.
- The experimental results seem detailed and sound. The proposed neural network MOCO method outperforms traditional approaches and other single-objective deep learning baselines.
Weaknesses
- The theoretical part of this paper (Section 6 and Appendix A) does not seem sound to me. The assumption that the model can generate ε-dominating solutions for any preference seems non-trivial for models with a small enough ε.
Other comments
- The hypervolume (HV) metric should be discussed in the main paper instead of in the supplementary material to ensure the main paper is self-contained.
ICLR
Title
Dynamic of Stochastic Gradient Descent with State-dependent Noise
Abstract
Stochastic gradient descent (SGD) and its variants are the mainstream methods to train deep neural networks. Since neural networks are non-convex, more and more works study the dynamic behavior of SGD and its impact on generalization, especially the escaping efficiency from local minima. However, these works make the over-simplified assumption that the distribution of gradient noise is state-independent, although it is in fact state-dependent. In this work, we propose a novel power-law dynamic with state-dependent diffusion to approximate the dynamic of SGD. We then prove that the stationary distribution of the power-law dynamic is heavy-tailed, which matches existing empirical observations. Next, we study the escaping efficiency of the power-law dynamic from local minima and prove that the mean escaping time is polynomial in the barrier height of the basin, much faster than the exponential order obtained for previous dynamics. This indicates that SGD can escape deep sharp minima efficiently and tends to stop at flat minima that have lower generalization error. Finally, we conduct experiments to compare SGD and the power-law dynamic, and the results verify our theoretical findings.
To model the heavy-tailed phenomenon, Simsekli et al. (2019); Şimşekli et al. (2019) point that the variance of stochastic gradient may be infinite, and they propose to approximate SGD by dynamic driven by α-stable process with the strong infinite variance condition. However, as shown in the work (Xie et al., 2020; Mandt et al., 2017), the gradient noise follows Gaussian distribution and the infinite variance condition does not satisfied. Therefore it is still lack of suitable theoretical explanation on the implicit regularization of dynamic of SGD. In this work, we conduct a formal study on the (state-dependent) noise structure of SGD and its dynamic behavior. First, we show that the covariance of the noise of SGD in the quadratic basin surrounding the local minima is a quadratic function of the state (i.e., the model parameter). Thus, we propose approximating the dynamic of SGD near the local minimum using a stochastic differential equation whose diffusion coefficient is a quadratic function of state. We call the new dynamic power-law dynamic. We prove that its stationary distribution is power-law κ distribution, where κ is the signal to noise ratio of the second order derivatives at local minimum. Compared with Gaussian distribution, power-law κ distribution is heavy-tailed with tail-index κ. It matches the empirical observation that the distribution of parameters becomes heavy-tailed after SGD training without assuming infinite variance of stochastic gradient in (Simsekli et al., 2019). Second, we analyze the escaping efficiency of power-law dynamic from local minima and its relation to generalization. By using the random perturbation theory for diffused dynamic systems, we analyze the mean escaping time for power-law dynamic. Our results show that: (1) Power-law dynamic can escape from sharp minima faster than flat minima. (2) The mean escaping time for power-law dynamic is only in the polynomial order of the barrier height, much faster than the exponential order for dynamic with constant diffusion coefficient. Furthermore, we provide a PAC-Bayes generalization bound and show power-law dynamic can generalize better than dynamic with constant diffusion coefficient. Therefore, our results indicate that the state-dependent noise helps SGD to escape from sharp minima quickly and implicitly learn well-generalized model. Finally, we corroborate our theory by experiments. We investigate the distributions of parameters trained by SGD on various types of deep neural networks and show that they are well fitted by power-law κ distribution. Then, we compare the escaping efficiency of dynamics with constant diffusion or state-dependent diffusion to that of SGD. Results show that the behavior of power-law dynamic is more consistent with SGD. Our contributions are summarized as follows: (1) We propose a novel power-law dynamic with state-dependent diffusion to approximate dynamic of SGD based on both theoretical derivation and empirical evidence. The power-law dynamic can explain the heavy-tailed phenomenon of parameters trained by SGD without assuming infinite variance of gradient noise. (2) We analyze the mean escaping time and PAC-Bayes generalization bound for power-law dynamic and results show that power-law dynamic can escape sharp local minima faster and generalize better compared with the dynamics with constant diffusion. Our experimental results can support the theoretical findings. 
2 BACKGROUND In empirical risk minimization problem, the objective is L(w) = 1n ∑n i=1 `(xi, w), where xi, i = 1, · · · , n are n i.i.d. training samples, w ∈ Rd is the model parameter, and ` is the loss function. Stochastic gradient descent (SGD) is a popular optimization algorithm to minimize L(w). The update rule is wt+1 = wt − η · g̃(wt), where g̃(wt) = 1b ∑ x∈Sb ∇w`(x,wt) is the minibatch gradient calculated by a randomly sampled minibatch Sb of size b and η is the learning rate. The minibatch gradient g̃(wt) is an unbiased estimator of the full gradient g(wt) = ∇L(wt), and the term (g(wt)− g̃(wt)) is called gradient noise in SGD. Langevin Dynamic In (He et al., 2019a; Zhu et al., 2019), the gradient noise is assumed to be drawn from Gaussian distribution according to central limit theorem (CLT), i.e., g(w)− g̃(w) ∼ N (0, C), where covariance matrix C is a constant matrix for all w. Then SGD can be regarded as the numerical discretization of the following Langevin dynamic, dwt = −g(wt)dt+ √ ηC1/2dBt, (1) where Bt is a standard Brownian motion in Rd and √ ηC1/2dBt is called the diffusion term. α-stable Process Simsekli et al. (2019) assume the variance of gradient noise is unbounded. By generalized CLT, the distribution of gradient noise is α-stable distribution S(α, σ), where σ is the α-th moment of gradient noise for given α with α ∈ (0, 2]. Under this assumption, SGD is approximated by the stochastic differential equation (SDE) driven by an α-stable process. 2.1 RELATED WORK There are many works that approximate SGD by Langevin dynamic and most of the theoretical results are obtained for Langevin dynamic with constant diffusion coefficient. From the aspect of optimization, the convergence rate of SGD and its optimal hyper-parameters have been studied in (Li et al., 2017; He et al., 2018; Liu et al., 2018; He et al., 2018) via optimal control theory. From the aspect of generalization, Chaudhari & Soatto (2018); Zhang et al. (2018); Smith & Le (2017) show that SGD implicitly regularizes the negative entropy of the learned distribution. Recently, the escaping efficiency from local minima of Langevin dynamic has been studied (Zhu et al., 2019; Hu et al., 2019; Xie et al., 2020). He et al. (2019a) analyze the PAC-Bayes generalization error of Langevin dynamic to explain the generalization of SGD. The solution of Langevin dynamic with constant diffusion coefficient is Gaussian process, which does not match the empirical observations that the distribution of parameters trained by SGD is a heavy-tailed (Mahoney & Martin, 2019; Hodgkinson & Mahoney, 2020; Gurbuzbalaban et al., 2020). Simsekli et al. (2019); Şimşekli et al. (2019) assume the variance of stochastic gradient is infinite and regard SGD as discretization of a stochastic differential equation (SDE) driven by an α-stable process. The escaping efficiency for the SDE is also shown in (Simsekli et al., 2019). However, these theoretical results are derived for dynamics with constant diffusion term, although the gradient noise in SGD is state-dependent. There are some related works analyze state-dependent noise structure in SGD, such as label noise in (HaoChen et al., 2020) and multiplicative noise in (Wu et al., 2019b). These works propose new algorithms motivated by the noise structure, but they do not analyze the escaping behavior of dynamic of SGD and the impact to generalization. Wu et al. 
(2018) analyze the escaping behavior of SGD with considering the fluctuations of the second order derivatives and propose the concept linearly stability. In our work, we propose power-law dynamic to approximate SGD and analyze the stationary distribution and the mean escaping time for it. 3 APPROXIMATING SGD BY POWER-LAW DYNAMIC In this section, we study the (state-dependent) noise structure of SGD (in Section 3.1) and propose power-law dynamic to approximate the dynamic of SGD. We first study 1-dimensional power-law dynamic in Section 3.2 and extend it to high dimensional case in Section 3.3. 3.1 NOISE STRUCTURE OF STOCHASTIC GRADIENT DESCENT For non-convex optimization, we investigate the noise structure of SGD around local minima so that we can analyze the escaping efficiency from it. We first describe the quadratic basin where the local minimum is located. Suppose w∗ is a local minimum of the training loss L(w) and g(w∗) = 0. We name the -ball B(w∗, ) with center w∗ and radius as a quadratic basin if the loss function for w ∈ B(w∗, ) is equal to its second-order Taylor expansion as L(w) = L(w∗) + 12 (w − w ∗)TH(w∗)(w − w∗). Here, H(w∗) is the Hessian matrix of loss at w∗, which is (semi) positive definite. Then we start to analyze the gradient noise of SGD. The full gradient of training loss is g(w) = H(w∗)(w − w∗). The stochastic gradient is g̃(w) = g̃(w∗) + H̃(w∗)(w − w∗) by Taylor expansion where g̃(·) and H̃(·) are stochastic version of gradient and Hessian calculated by the minibatch. The randomness of gradient noise comes from two parts: g̃(w∗) and H̃(w∗), which reflects the fluctuations of the first-order and second-order derivatives of the model at w∗ over different minibatches, respectively. The following proposition gives the variance of the gradient noise. Proposition 1 For w ∈ B(w∗, ) ⊂ R, the variance of gradient noise is σ(g(w) − g̃(w)) = σ(g̃(w∗)) + 2ρ(g̃(w∗), H̃(w∗))(w − w∗) + σ(H̃(w∗))(w − w∗)2, where σ(·) and ρ(·, ·) are the variance and covariance in terms of the minibatch. From Proposition 1, we can conclude that: (1) The variance of noise is finite if g̃(w∗) and H̃(w∗) have finite variance because ρ(g̃(w∗), H̃(w∗)) ≤ √ σ(g̃(w∗)) · σ(H̃(w∗)) according to Cauchy–Schwarz inequality. For fixed w∗, a sufficient condition for that g̃(w∗) and H̃(w∗) have finite variance is that the training data x are sampled from bounded domain. This condition is easy to be satisfied because the domain of training data are usually normalized to be bounded before training. In this case, the infinite variance assumption about the stochastic gradient in α-stable process is not satisfied. (2) The variance of noise is state-dependent, which contradicts the assumption in Langevin dynamic. Notations: For ease of the presentation, we use C(w), σg, σH , ρg,H to denote σ(g(w) − g̃(w∗)), σ(g̃(w∗)), σ(H̃(w∗)), ρ(g̃(w∗), H̃(w∗)) in the following context, respectively. 1 3.2 POWER-LAW DYNAMIC According to CLT, the gradient noise follows Gaussian distribution if it has finite variance, i.e., g(w)− g̃(w)→d N (0, C(w)) as b→∞, (2) where→d means “converge in distribution”. Using Gaussian distribution to model the gradient noise in SGD, the update rule of SGD can be written as: wt+1 = wt − ηg(wt) + ηξt, ξt ∼ N (0, C(w)). (3) Eq.3 can be treated as the discretization of the following SDE, which we call it power-law dynamic: dwt = −g(wt)dt+ √ ηC(w)dBt. (4) Power-law dynamic characterizes how the distribution of w changes as time goes on. 
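To make Eq.(3) concrete, the following is a minimal 1-dimensional Python sketch (an addition for this text, not part of the paper's experiments) of the SGD update with the state-dependent noise variance of Proposition 1, compared against a constant-diffusion baseline; all constants below are illustrative choices rather than values taken from the paper.

```python
import numpy as np

# Minimal 1-D sketch of Eq.(3): SGD viewed as a discretization of the power-law
# dynamic, with the state-dependent noise variance of Proposition 1,
#   C(w) = sigma_g + 2*rho_gH*(w - w_star) + sigma_H*(w - w_star)**2.
# All constants below are illustrative choices, not values taken from the paper.

rng = np.random.default_rng(0)

w_star = 0.0     # local minimum of the quadratic basin
H = 2.0          # curvature at w_star, so g(w) = H * (w - w_star)
sigma_g = 0.5    # variance of the stochastic gradient at w_star
sigma_H = 10.0   # variance of the stochastic Hessian at w_star
rho_gH = 0.0     # covariance term (observed to be ~0 in the paper's Figure 1)
eta = 0.05       # learning rate; here kappa = H / (eta * sigma_H) = 4

def grad(w):
    return H * (w - w_star)

def noise_var(w, state_dependent=True):
    if not state_dependent:
        return sigma_g  # constant-diffusion (Langevin-like) baseline
    return sigma_g + 2.0 * rho_gH * (w - w_star) + sigma_H * (w - w_star) ** 2

def run(w0, steps, state_dependent):
    w, traj = w0, []
    for _ in range(steps):
        xi = rng.normal(0.0, np.sqrt(noise_var(w, state_dependent)))
        w = w - eta * grad(w) + eta * xi   # update rule of Eq.(3)
        traj.append(w)
    return np.array(traj)

def excess_kurtosis(x):
    x = x - x.mean()
    return (x ** 4).mean() / (x ** 2).mean() ** 2 - 3.0

for name, flag in [("power-law dynamic ", True), ("constant diffusion", False)]:
    traj = run(w0=1.0, steps=50000, state_dependent=flag)
    print(f"{name}: std = {traj.std():.3f}, excess kurtosis = {excess_kurtosis(traj):.2f}")
# The state-dependent run shows a clearly positive excess kurtosis (heavier tails),
# while the constant-diffusion run stays close to Gaussian (excess kurtosis ~ 0).
```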
The distribution density of parameterw at time t (i.e., p(w, t)) is determined by the Fokker-Planck equation (Zwanzig’s type (Guo & Du, 2014)): ∂ ∂t p(w, t) = ∇p(w, t)g(w) + η 2 · ∇ (C(w) · ∇p(w, t)) . (5) The stationary distribution of power-law dynamic can be obtained if we let the left side of FokkerPlanck equation be zero. The following theorem shows the analytic form of the stationary distribution of power-law dynamic, which is heavy-tailed and the tail of the distribution density decays at polynomial order of w − w∗. This is the reason why we call the stochastic differential equation in Eq.4 power-law dynamic. Theorem 2 The stationary distribution density for 1-dimensional power-law dynamic (Eq.4) is p(w) = 1 Z (C(w)) − H ησH exp H ( 4ρg,H ·ArcTan ( C′(w)/ √ 4σHσg − 4ρ2g,H )) ησH √ 4σHσg − 4ρ2g,H , (6) whereC(w) = σg+2ρg,H(w−w∗)+σH(w−w∗)2, Z is the normalization constant andArcTan(·) is the arctangent function. We make discussions on property of p(w). The decreasing rate of p(w) as w goes away from the center w∗ is mainly determined by the term C(w)− H ησH (because the function ArcTan(·) is bounded) which is a polynomial function about w − w∗. Compared with Gaussian distribution the probability density which follows exponential decreasing rate, power-law distribution is less concentrated in the quadratic basin B(w∗, ) and heavy-tailed. We call HησH the tail-index of p(w) and denote it as κ in the following context. We can conclude that the state-dependent noise results in heavy-tailed distribution of parameters, which matches the observations in (Mahoney & Martin, 2019). Langevin dynamic with constant diffusion can be regarded as special case of power-law dynamic when ρH,g = 0 and σH = 0. In this case, p(w) degenerates to Gaussian distribution. Compared with α-stable process, we do not assume infinite variance on gradient noise and demonstrate another mechanism that results in heavy-tailed distribution of parameters. We empirically observe the covariance matrix around the local minimum of training loss on deep neural networks. The results are shown in Figure.1. Readers can refer more details in Appendix 7.1. We have the following observations: (1) The traces of covariance matrices for the deep neural 1In the following context, we assume σg is positive number. networks can be well approximated by quadratic curves, which supports Proposition 1. (2) The minimum of the quadratic curve is nearly located at the local minimum w∗. It indicates that the coefficient of the first-order term ρg,H ≈ 0. Based on the fact that ρg,H is not the determinant factor of the tail of the distribution in Eq.6 and the observations in Figure.1, we consider a simplified form of C(w) that C(w) = σg + σH(w − w∗)2. Corollary 3 If C(w) = σg + σH(w−w∗)2, the stationary distribution of 1-dimensional power-law dynamic (Eq.4) is p(w) = 1 Z (1 + σHσ −1 g (w − w∗)2)−κ, (7) where Z is the normalization constant and κ = HησH is the tail-index. The distribution density in Eq.7 is known as the power-law κ distribution (Zhou & Du, 2014) (It is also named as q-Gaussian distribution in (Tsallis & Bukman, 1996)). As κ→∞, the distribution density tends to be Gaussian, i.e., p(w) ∝ exp(−H(w−w ∗)2 ησg ). Power-law κ distribution becomes more heavy-tailed as κ becomes smaller. Meanwhile, it produces higher probability to appear values far away from the center w∗. Intuitively, smaller κ helps the dynamic to escape from local minima faster. 
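As a small numerical illustration of Corollary 3 (added here, not taken from the paper's experiments), the sketch below compares the normalized power-law κ density with its Gaussian limit; the constants are illustrative, and the closed-form normalizer via the standard Beta integral is an assumption of this sketch rather than a formula stated in the main text.

```python
import numpy as np
from scipy.special import beta

# Numerical sketch of Corollary 3: the power-law kappa density of Eq.(7),
#   p(w) = (1/Z) * (1 + sigma_H/sigma_g * (w - w_star)**2)**(-kappa),
# versus its Gaussian limit (kappa -> infinity). The normalizer
#   Z = sqrt(sigma_g/sigma_H) * B(1/2, kappa - 1/2)
# follows from the standard Beta integral (requires kappa > 1/2).
# All constants are illustrative.

w_star, eta, sigma_g, H = 0.0, 0.05, 0.5, 2.0

def power_law_density(w, kappa):
    sigma_H = H / (eta * kappa)                     # from kappa = H / (eta * sigma_H)
    a = sigma_H / sigma_g
    Z = np.sqrt(1.0 / a) * beta(0.5, kappa - 0.5)
    return (1.0 + a * (w - w_star) ** 2) ** (-kappa) / Z

def gaussian_density(w):
    # kappa -> infinity limit: p(w) proportional to exp(-H*(w - w_star)**2 / (eta*sigma_g))
    var = eta * sigma_g / (2.0 * H)
    return np.exp(-(w - w_star) ** 2 / (2.0 * var)) / np.sqrt(2.0 * np.pi * var)

for w in (0.5, 1.0, 2.0):
    print(f"w = {w}: kappa=1 -> {power_law_density(w, 1.0):.2e}, "
          f"kappa=10 -> {power_law_density(w, 10.0):.2e}, "
          f"Gaussian -> {gaussian_density(w):.2e}")
# Far from w_star the power-law densities exceed the Gaussian by many orders of
# magnitude, and the smaller kappa is, the heavier the tail.
```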
In the approximation of dynamic of SGD, κ equals the signal (i.e., H(w∗)) to noise (i.e., ησH ) ratio of second-order derivative at w∗ in SGD, and κ is linked with three factors: (1) the curvature H(w∗); (2) the fluctuation of the curvature over training data; (3) the hyper-parameters including η and minibatch size b. Please note that σH linearly decreases as the batch size b increases. 3.3 MULTIVARIATE POWER-LAW DYNAMIC In this section, we extend the power-law dynamic to d-dimensional case. We first illustrate the covariance matrix C(w) of gradient noise in SGD. We use the subscripts to denote the element in a vector or a matrix. We use Σg to denote the covariance matrix of g̃(w∗) and assume that Σg is isotropic (i.e., Σg = σg · I). We also assume that Cov(H̃i(w∗), H̃j(w∗)) are equal for all i, j. It can be shown that C(w) = Σg(1 + (w−w∗)TΣHΣ−1g (w−w∗)). Similarly as 1-dimensional case, we omit the first-order term (w − w∗) in C(w). Readers can refer Proposition 10 in Appendix 7.2 for the detailed derivation. We suppose that the signal to noise ratio of H̃(w∗) can be characterized by a scalar κ, i.e., ηΣH = 1 κ ·H(w ∗). Then C(w) can be written as C(w) = Σg(1 + 1 ηκ (w − w∗)TH(w∗)Σ−1g (w − w∗)). (8) Theorem 4 Ifw ∈ Rd andC(w) has the form in Eq.(8) forw ∈ B(w∗, ). The stationary distribution density of power-law dynamic is p(w) = 1 Z [1 + 1 ηκ (w − w∗)TH(w∗)Σ−1g (w − w∗)]−κ (9) for w ∈ B(w∗, ), where Z is the normalization constant and κ satisfies ηΣH = 1κ ·H(w ∗). Remark: The multivariate power-law κ distribution (Eq.9) is a natural extension of the 1-dimensional case. Actually, the assumptions on Σg and κ can be replaced by just assuming Σg, H(w∗),ΣH are codiagonalized. Readers can refer Proposition 11 in Appendix 7.2 for the derivation. 4 ESCAPING EFFICIENCY OF POWER-LAW DYNAMIC In this section, we analyze the escaping efficiency of power-law dynamic from local minima and its relation to generalization. Specifically, we analyze the mean escaping time for wt to escape from a basin. As shown in Figure.2, we suppose that there are two basins whose bottoms are denoted as a and c respectively and the saddle point b is the barrier between two basins. The barrier height is denoted as ∆L = L(b)−L(a). Definition 5 Suppose wt starts at the local minimum a, we denote the time for wt to first reach the saddle point b as inf{t > 0|w0 = a,wt = b}. The mean escaping time τ is defined as τ = Ewt [inf{t > 0|w0 = a,wt = b}]. We first give the mean escaping time for 1-dimensional case in Lemma 6 and then we give the mean escaping time for high-dimensional power-law dynamic in Theorem 7. To analyze the mean escaping time, we take the following assumptions. Assumption 1: The loss function around critical points can be written as L(w) = L(w∗) + 12 (w − w∗)TH(w∗)(w − w∗), where w∗ is a critical point. Assumption 2: The system is in equilibrium near minima, i.e., ∂p(w,t)∂t = 0. Assumption 3: (Low temperature assumption) The gradient noise is small, i.e., ησg ∆L. These three assumptions are commonly used in analyzing escaping time (Xie et al., 2020; Zhou & Du, 2014) for a dynamic. Because both a and b are critical points, we can apply Assumption 1 to get the loss surface around them. We put more discussions about the assumptions in Appendix 7.3.2. We suppose the basin a is quadratic and the variance of noise has the form that C(w) = σga +σHa(w− a)2, which can also be written as C(w) = σga + 2σHa Ha (L(w)− L(a)). 
Furthermore, we suppose that C(w) = σga + 2σHa Ha (L(w) − L(a)) on the whole escaping path from a to b (not just near the local minimum a). It means that the variance of gradient noise becomes larger as the loss becomes larger. The following lemma gives the mean escaping time of power-law dynamic for 1-dimensional case. Lemma 6 Suppose that Assumption 1-3 are satisfied and C(w) = σga + 2σHa Ha (L(w)− L(a)) on the whole escaping path from a to b. The mean escaping time of 1-dimensional power-law dynamic is, τ = 2π (1− 1 2κ ) √ Ha|Hb| ( 1 + 2 κησga ∆L )κ− 1 2 , (10) where κ = HaησHa > 1 2 , Ha and Hb are the second-order derivatives of training loss at local minimum a and at saddle point b, respectively. The proof of Lemma 6 is based on the results in (Zhou & Du, 2014). We provide a full proof in Appendix 7.3.1. For the dynamic near the saddle point, we just assume that its dynamic is the same as that near the local minimum for simplicity. This assumption is not necessary and we put the extension to more complex dynamic in Appendix 7.3.3. We summarize the mean escaping time of power-law dynamic and dynamics in previous works in Table 1. Based on the results, we have the following discussions. Comparison with other dynamics: (1) Both power-law dynamic and Langevin dynamic can escape sharp minima faster than flat minima, where the sharpness is measured by Ha and larger Ha corresponds to sharper minimum. Power-law dynamic improves the order of barrier height (i.e., ∆L) from exponential to polynomial compared with Langevin dynamic, which implies a faster escaping efficiency of SGD to escape from deep basin. (2) The mean escaping time for α-stable process is independent with the barrier height, but it is in polynomial order of the width of the basin (i.e., width=|b− a|). Compared with α-stable process, the result for power-law dynamic is superior in the sense that it is also in polynomial order of the width (if ∆L ≈ O(|b− a|2)) and power-law dynamic does not rely on the infinite variance assumption. Based on Lemma 6, we analyze the mean escaping time for d-dimensional case. Under the low temperature condition, the probability density concentrates only along the most possible escaping paths in the high-dimensional landscape. For rigorous definition of most possible escaping paths, readers can refer section 3 in (Xie et al., 2020). For simplicity, we consider the case that there is only one most possible escaping path between basin a and basin c. Specifically, the Hessian at saddle point b has only one negative eigenvalue and the most possible escaping direction is the direction corresponding to the negative eigenvalue of the Hessian at b. Theorem 7 Suppose that Assumption 1-3 are satisfied. For w ∈ Rd, we suppose C(w) = Σga + 2 ηκ (L(w) − L(a)) on the whole escaping path from a to b and there is only one most possible path path between basin a and basin c. The mean escaping time for power-law dynamic escaping from basin a to basin c is τ = 2π √ −det(Hb) (1− d 2κ ) √ det(Ha) 1 |Hbe| ( 1 + 1 ηκσe ∆L )κ− 1 2 , (11) where e indicates the most possible escaping direction, Hbe is the only negative eigenvalue of Hb, σe is the eigenvalue of Σga that corresponds to the escaping direction, ∆L = L(b)− L(a), and det(·) is the determinant of a matrix. Remark: In d-dimensional case, the flatness is measured by det(Ha). 
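Before continuing, a small numerical illustration of Lemma 6 (an addition for this text): the sketch evaluates the power-law escaping time of Eq.(10) and compares it against a constant-diffusion Kramers-type time τ ≈ 2π/√(Ha|Hb|)·exp(2∆L/(ησg)); the latter expression is the standard Langevin form, used here only for comparison and not copied from Table 1, and all constants are illustrative.

```python
import numpy as np

# Numerical illustration of Lemma 6: the mean escaping time of the 1-D power-law
# dynamic grows only polynomially in the barrier height Delta_L, whereas a
# constant-diffusion (Langevin-type) dynamic grows exponentially. The Langevin
# expression below is the standard Kramers form, used only as a point of
# comparison (it is not copied from the paper's Table 1). Constants are illustrative.

eta, sigma_g, sigma_H = 0.05, 0.5, 10.0
Ha, Hb_abs = 2.0, 1.0                   # curvature at minimum a and |curvature| at saddle b
kappa = Ha / (eta * sigma_H)            # signal-to-noise ratio of the curvature, here 4.0

def tau_power_law(delta_L):
    prefactor = 2.0 * np.pi / ((1.0 - 1.0 / (2.0 * kappa)) * np.sqrt(Ha * Hb_abs))
    return prefactor * (1.0 + 2.0 * delta_L / (kappa * eta * sigma_g)) ** (kappa - 0.5)

def tau_langevin(delta_L):
    return 2.0 * np.pi / np.sqrt(Ha * Hb_abs) * np.exp(2.0 * delta_L / (eta * sigma_g))

for delta_L in (0.5, 1.0, 2.0, 4.0):
    print(f"Delta_L = {delta_L}: power-law tau = {tau_power_law(delta_L):.3e}, "
          f"Langevin tau = {tau_langevin(delta_L):.3e}")
# Doubling the barrier height multiplies the Langevin time by an enormous factor,
# while the power-law time grows only polynomially.
```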
IfHa has zero eigenvalues, we can replace Ha by H+a in above theorem, where H + a is obtained by projecting Ha onto the subspace composed by the eigenvectors corresponding to the positive eigenvalues of Ha. This is because by Taylor expansion, the loss L(w) only depends on the positive eigenvalues and the corresponding eigenvectors of Ha, i.e., L(w) = L(a) + 12 (w − a) THa(w − a) = L(a) + 12 (P(w − a)) TΛ H+a P(w − a), where Λ H+a is a diagonal matrix composed by non-zero eigenvalues of Ha and the operator P(·) operates the vector to the subspace corresponding to non-zero eigenvalues of Ha. Therefore, the dimension d in Theorem 7 can be regarded as the dimension of subspace that is composed by directions with large eigenvalues. It has been observed that most of the eigenvalues in H is very small (Sagun et al., 2016). Therefore, d will not be a large number and power-law dynamic in multi-dimensional case will inherit the benefit of that in 1-dimensional case compared with Langevin dynamic and α-stable process. The next theorem give an upper bound of the generalization error of the stationary distribution of power-law dynamic, which shows that flatter minimum has smaller generalization error. Theorem 8 Suppose that w ∈ Rd and κ > d2 . For δ > 0, with probability at least 1 − δ, the stationary distribution of power-law dynamic has the following generalization error bound, Ew∼p(w),x∼P(x)`(w, x) ≤ Ew∼p(w)L(w) + √ KL(p||p′) + log 1δ + log n+ 2 n− 1 , where KL(p||p′) ≤ 12 log det(H) det(Σg) + Tr(ηΣgH −1)−2d 4(1− 1κ ( d 2−1)) + d2 log 2 η , p(w) is the stationary distribution of d-dimensional power-law dynamic, p′(w) is a prior distribution which is selected to be standard Gaussian distribution, and P(x) is the underlying distribution of data x, det(·) and Tr(·) are the determinant and trace of a matrix, respectively. We make the following discussions on results in Theorem 8. For 1-dimensional case, we have if H > η 2(1+ 1 2κ ) , KL divergence is decreasing as H decreases. For d > 1 and fixed Tr(ΣgH−1) and det(Σg), the generalization error (i.e., Ew∼p(w),x∼P(x)`(w, x)− Ew∼p(w)L(w)) is decreasing as det(H) decreases, which indicates that flatter minimum has smaller generalization error. Moreover, if 2d > Tr(ηΣgH−1), the generalization error is decreasing as κ increases. When κ → ∞, the generalization error tends to that for Langevin dynamic. Combining the mean escaping time and the generalization error bound, we can conclude that state-dependent noise makes SGD escape from sharp minima faster and implicitly tend to learn a flatter model which generalizes better. 5 EXPERIMENTS In this section, we conduct experiments to verify the theoretical results. We first study the fitness between parameter distribution trained by SGD and power-law κ distribution. Then we compare the escaping behavior for power-law dynamic, Langevin dynamic and SGD. 5.1 FITTING PARAMETER DISTRIBUTION USING POWER-LAW DISTRIBUTION We investigate the distribution of parameters trained by SGD on deep neural networks and use power-law κ distribution to fit the parameter distribution. We first use SGD to train various types of deep neural networks till it converge. For each network, we run SGD with different minibatch sizes over the range {64, 256, 1024}. For the settings of other hyper-parameters, readers can refer Appendix 7.5.2. We plot the distribution of model parameters at the same layer using histogram. 
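The fitting step described next can also be sketched in Python; the snippet below is a hedged alternative to the Mathematica routine used in the paper, and the bin count, initial guesses, layer name, and synthetic test data are all illustrative assumptions rather than the paper's exact procedure.

```python
import numpy as np
from scipy.optimize import curve_fit

# Minimal sketch: fit the power-law kappa (q-Gaussian-like) shape of Eq.(7)
# to a histogram of trained parameters and read off the tail-index kappa.

def power_law_shape(w, kappa, scale, w_star):
    # Unnormalized shape (1 + ((w - w_star)/scale)^2)^(-kappa); the amplitude
    # is handled by the extra `amp` parameter in the fitted model below.
    return (1.0 + ((w - w_star) / scale) ** 2) ** (-kappa)

def fit_kappa(params):
    """params: 1-D numpy array of trained weights from one layer."""
    counts, edges = np.histogram(params, bins=200, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])

    def model(w, kappa, scale, w_star, amp):
        return amp * power_law_shape(w, kappa, scale, w_star)

    p0 = [2.0, params.std(), params.mean(), counts.max()]   # rough initial guess
    popt, _ = curve_fit(model, centers, counts, p0=p0, maxfev=20000)
    return popt[0]   # fitted kappa (tail-index)

if __name__ == "__main__":
    # Synthetic check: draw samples from a heavy-tailed Student-t law and fit kappa.
    rng = np.random.default_rng(0)
    samples = rng.standard_t(df=6, size=100_000) * 0.05
    print("fitted kappa on synthetic heavy-tailed samples:", fit_kappa(samples))
    # For a real network one would pass, e.g.,
    # model.conv2.weight.detach().cpu().numpy().ravel() (name illustrative) instead.
```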
Next, we use the power-law κ distribution to fit the distribution of the parameters and estimate the value of κ via the embedded function "TsallisQGaussianDistribution[]" in the Mathematica software. We show results for LeNet-5 on the MNIST dataset and ResNet-18 on the CIFAR10 dataset (LeCun et al., 2015; He et al., 2016b) in this section, and put results for other network architectures in Appendix 7.5.2. In Figure 3, we report the generalization error (i.e., test error - training error) and the values of κ that best fit the histogram. (The training errors under the six settings are almost zero.) We have the following observations: (1) The distribution of the parameters trained by SGD can be well fitted by the power-law κ distribution (blue curve). (2) As the minibatch size becomes larger, κ becomes larger. This is because the noise σH linearly decreases as the minibatch size becomes larger and κ = H/(ησH). (3) As κ becomes smaller, the generalization error becomes lower. This indicates that κ also plays a role as an indicator of generalization. These results are consistent with the theory in Section 4. 5.2 COMPARISON ON ESCAPING EFFICIENCY We use a 2-dimensional model to simulate the escaping efficiency from minima for power-law dynamic, Langevin dynamic and SGD. We design a non-convex 2-dimensional function written as L(w) = (1/n) Σ_{i=1}^n ℓ(w − xi), where ℓ(w) = (1/5) Σ_{j=1}^2 |wj − 1|^{2.5} · |wj + 1|^3 and the training data xi ∼ N(0, 0.01·I2). We regard the following optimization iterates as the numerical discretization of the power-law dynamic: wt+1 = wt − ηg(wt) + ηλ2·√(1 + λ1(wt − w∗)^2) ⊙ ξ, where ξ ∼ N(0, I2), λ1, λ2 are two hyper-parameters and ⊙ stands for the Hadamard product. Note that if we set λ1 = 0, it can be regarded as the discretization of Langevin dynamic. We set the learning rate η = 0.025, and we take 500 iterations in each training. In order to match the trace of the covariance matrix of the stochastic gradient at the minimum point w∗ with the methods above, λ2 is chosen to satisfy Tr(Cov(λ2ξ)) = Tr(Cov(g̃(w∗))). We compare the success rate of escaping for power-law dynamic, Langevin dynamic and SGD by repeating the experiments 100 times. To analyze the noise term λ1, we choose different λ1 and evaluate the corresponding success rate of escaping, as shown in Figure 4(c). The results show that: (1) there is a positive correlation between λ1 and the success rate of escaping; (2) power-law dynamic can mimic the escaping efficiency of SGD, while Langevin dynamic cannot. We then scale the loss function by 0.9 to make the minima flatter and repeat all the algorithms under the same setting. The success rate for the scaled loss function is shown in Figure 4(d). We can observe that all dynamics escape flatter minima more slowly. 6 CONCLUSION In this work, we study the dynamic of SGD by investigating the state-dependent variance of the stochastic gradient. We propose power-law dynamic with state-dependent diffusion to approximate the dynamic of SGD. We analyze the escaping efficiency from local minima and the PAC-Bayes generalization error bound for power-law dynamic. The results indicate that state-dependent noise helps SGD escape from poor local minima faster and generalize better. We present direct empirical evidence to support our theoretical findings. This work may motivate many interesting research topics, for example, non-Gaussian state-dependent noise, new types of state-dependent regularization tricks in deep learning algorithms, and more accurate characterization of the loss surface of deep neural networks. We will investigate these topics in future work. 
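Before the appendix, the following is a minimal sketch of the 2-dimensional escaping comparison of Section 5.2. For simplicity it drops the average over the random training data and uses ℓ(w) directly; the values of λ1, λ2, the number of runs and iterations, and the escape criterion are illustrative choices, not the paper's exact settings.

```python
import numpy as np

# Minimal sketch of the 2-D escaping experiment of Section 5.2. The loss
# l(w) = (1/5) * sum_j |w_j - 1|^2.5 * |w_j + 1|^3 is used directly (no data average);
# runs/iterations, lambda1, lambda2 and the "escaped" criterion are illustrative.

rng = np.random.default_rng(0)
eta, iters, runs = 0.025, 500, 100
w_star = np.array([1.0, 1.0])            # the minimum the runs start from

def grad(w):
    a, b = w - 1.0, w + 1.0
    return 0.2 * (2.5 * np.abs(a) ** 1.5 * np.sign(a) * np.abs(b) ** 3
                  + np.abs(a) ** 2.5 * 3.0 * np.abs(b) ** 2 * np.sign(b))

def escape_rate(lambda1, lambda2):
    escaped = 0
    for _ in range(runs):
        w = w_star.copy()
        for _ in range(iters):
            xi = rng.normal(size=2)
            noise_scale = lambda2 * np.sqrt(1.0 + lambda1 * (w - w_star) ** 2)
            w = w - eta * grad(w) + eta * noise_scale * xi   # iterate of Section 5.2
            if np.any(w < 0.0):          # crossed the barrier towards the other basin
                escaped += 1
                break
    return escaped / runs

lambda2 = 2.0                            # illustrative noise magnitude
print("Langevin-like  (lambda1 = 0):", escape_rate(0.0, lambda2))
print("power-law-like (lambda1 = 32):", escape_rate(32.0, lambda2))
# Larger lambda1 (stronger state-dependence of the noise) should yield a higher escape rate.
```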
7 APPENDIX 7.1 POWER-LAW DYNAMIC AND STATIONARY DISTRIBUTION Theorem 9 (Theorem 2 in main paper) The stationary distribution density for 1-dimensional powerlaw dynamic (Eq.4) is p(w) = 1 Z (C(w)) − H ησH exp H ( 4ρg,H ·ArcTan ( C′(w)/ √ 4σHσg − 4ρ2g,H )) ησH √ 4σHσg − 4ρ2g,H , whereC(w) = σg+2ρg,H(w−w∗)+σH(w−w∗)2, Z is the normalization constant andArcTan(·) is the arctangent function. Proof: We denote the function H(4ρg,H ·ArcTan(C′(w)/ √ 4σHσg−4ρg,H)) ησH √ 4σHσg−4ρ2g,H as h(w). According to the Fokker-Planck equation, p(w) satisfies 0 = ∇p(w)g(w) + η 2 · ∇ · (C(w)∇p(w)) = ∇ · [ (p(w) · ∇L(w)) + η 2 C(w)∇p(w) ] = ∇ · [η 2 C(w) − HησH +1eh(w)∇(C(w) H ησH · e−h(w) · p(w)) ] Readers can check the third equality by calculating∇(C(w) H ησH · e−h(w) · p(w)) with C(w) = σg + 2ρg,H(w−w∗)+σH(w−w∗)2. Because the left side equals zero, we have C(w) H ησH ·e−h(w) ·p(w) equals to constant. So p(w) ∝ C(w)− H ησH ·eh(w) ·p(w). So we can get the conclusion in the theorem. Theorem 10 (Corollary 3 in main paper) If C(w) = σg + σH(w−w∗)2, the stationary distribution density of power-law dynamic is p(w) = 1 Z (1 + σHσ −1 g (w − w∗)2)−κ, (12) where Z = ∫ w (1 + σHσ −1 g (w − w∗)2)−κdw is the normalization constant and κ = HησH is the tail-index. Proof: According to the Fokker-Planck equation, p(w) satisfies 0 = ∇p(w)g(w) + η 2 · ∇ · (C(w)∇p(w)) = ∇(p(w) · ∇L(w)) + η 2 ∇ · (σg + 2σH H (L(w)− L(w∗)))∇p(w) = ∇ · η 2 C(w)(1 + 2σH Hσg (L(w)− L(w∗))) H −ησH ∇(1 + 2σH Hσg (L(w)− L(w∗))) H ησH p(w) Because the left side equals zero, we have (1 + 2σHHσg (L(w)− L(w ∗))) H ησH p(w) equals to constant. So p(w) ∝ (1 + 2σHHσg (L(w)− L(w ∗))) H −ησH . So we can get the conclusion in the theorem. We plot the un-normalized distribution density for 1-dimensional power-law dynamics with different κ in Figure 5. For the four curves, we set β = 10. We set κ = 1, 0.5, 0.1, 0 and use green, red, purple and blue line to illustrate their corresponding density function, respectively. When κ = 0, it is Gaussian distribution. From the figure, we can see that the tail for power-law κ-distribution is heavier than Gaussian distribution. Actually, for any given time t, the distribution p(w, t) for wt that satisfies power-law dynamic has analytic form, i.e., p(w, t) ∝ (1 + Hηκσ(t) (w −w(t)) 2)−κ, where w(t) = w∗ + (w0 −w∗)e−Ht and σ(t) is a function of σg and t. Readers can refer Eq.18 - Eq.23 in (Tsallis & Bukman, 1995) for the detailed expression. 7.2 SGD AND MULTIVARIATE POWER-LAW DYNAMIC The following proposition shows the covariance of stochastic gradient in SGD in d-dimensional case. We use the subscripts to denote the elements in a vector or a matrix. Proposition 11 For w ∈ Rd, we use C(w) to denote the covariance matrix of stochastic gradient g̃(w) = g̃(w∗)+H̃(w−w∗) and Σ to denote the covariance matrix of g̃(w∗). IfCov(g̃i(w∗), H̃jk) = 0,∀i, j, k, we have Cij(w) = Σij + (w − w∗)TA(ij)(w − w∗), (13) where Σij = Cov(g̃i(w∗), g̃j(w∗)), A(ij) is a d × d matrix with elements A(ij)ab = Cov(H̃ia, H̃jb) with a ∈ [d], b ∈ [d]. Eq.13 can be obtained by directly calculating the covariance of g̃i(w) and g̃j(w) where g̃i(w) = g̃i(w ∗) + ∑d a=1 H̃ia(wa − w∗a), g̃j(w) = g̃j(w∗) + ∑d b=1 H̃jb(wb − w∗b ). In order to get a analytic tractable form of C(w), we make the following assumptions: (1) If Σij = 0, A(ij) is a zero matrix; (2) For Σij 6= 0, A (ij) Σij are equal for all i ∈ [d], j ∈ [d]. The first assumption is reasonable because both Σij andA(ij) reflect the dependence of the derivatives along the i-th direction and j-th direction. 
Let ΣH = A (ij) Σij ,C(w) can be written asC(w) = Σg(1+(w−w∗)TΣH(w−w∗)). The d-dimensional power-law dynamic is written as dwt = −H(w − w∗)dt+ √ ηC(w)dBt, (14) where C(w) = Σg(1 + (w − w∗)TΣH(w − w∗)) which is a symmetric positive definite matrix that C(w)1/2 exists. The following proposition shows the stationary distribution of the d-dimensional power-law dynamic. Proposition 12 Suppose Σg,ΣH , H are codiagonalizable, i.e., there exist orthogonal matrix Q and diagonal matrices Λ,Γ,Π to satisfy Σg = QTΛQ,ΣH = QTΓQ,H = QTΠQ. Then, the stationary distribution of power-law dynamic is p(w) = 1 Z (1 + (w − w∗)TΣH(w − w∗))−κ, (15) where Z is the normalization constant and κ = Tr(H)ηTr(ΣHΣg) . Proof: Under the codiagonalization assumption on Σg,ΣH , H , Eq.15 can be rewritten as dvt = −Πvtdt+ √ ηΛ(1 + vTt Γvt)dBt if we let vt = Q(wt − w∗). We use φ(v) = ηC(v)2 = η 2 Λ(1 + v TΓv), the stationary probability density p(v) satisfies the Smoluchowski equation: 0 = d∑ i=1 ∂ ∂vi (Πivi · p(v)) + d∑ i=1 ∂ ∂vi · ( φi(w) ∂ ∂vi p(v) ) (16) = d∑ i=1 ∂ ∂vi (Πi·vi · p(v)) + d∑ i=1 ∂ ∂vi · ( ηΛi 2 (1 + vTΓv) ∂ ∂vi p(v) ) . (17) According to the result for 1-dimensional case, we have the expression of p(v) is p(v) ∝ (1 + vTΓv)−κ. To determine the value of κ, we put p(v) in the Smoluchowski equation to obtain d∑ i=1 Πip(v)− 2κ d∑ i=1 Πivi · Γivi · (1 + vTΓv)−κ−1 = d∑ i=1 ∂ ∂vi ( ηΛiκ(1 + v TΓv)−κ · Γivi ) = d∑ i=1 ( ηΛiκ(1 + v TΓv)−κ · Γi ) − 2 d∑ i=1 ( ηΛiκ 2(1 + vTΓv)−κ−1 · (Γivi)2 ) . The we have ∑d i=1 Πi = ηκ ∑d i=1 ΛiΓi. So we have κ = Tr(H) ηTr(ΣHΣg) . According to Proposition 11, we can also consider another assumption on Σg,ΣH , H without assuming their codiagonalization. Instead, we assume (1) If Σij = 0, A(ij) is a zero matrix; (2) For Σij 6= 0,A(ij) are equal for all i ∈ [d], j ∈ [d] and we denoteA(ij) = ΣH . We suppose η ·ΣH = κH . (3) Σg = σg · Id which is isotropic. Under these assumptions, we can get the following theorem. Theorem 13 (Theorem 4 in main paper) If w is d-dimensional and C(w) has the form in Eq.(8). The stationary distribution density of multivariate power-law dynamic is p(w) = 1 Z [1 + 1 ηκ (w − w∗)THΣ−1g (w − w∗)]−κ (18) where Z = ∫∞ −∞[1 + 1 ηκ (w − w ∗)THΣ−1g (w − w∗)]−κdw is the normalization constant. The proof for Theorem 12 is similar to that for Proposition 11. Readers can check that p(w) satisfies the Smoluchowski equation. An example to illustrate why C(w) is diagonally dominant. In Theorem 13, C(w) is assumed to be diagonally dominant. Diagonally dominant indicates that the variance of each dimension of g̃(w) is significantly larger than the covariance of two different dimensions of g̃(w). Consider a two layer fully-connected linear neural network fw,v(x) = wvx where w ∈ R1×m, v ∈ Rm×d, x ∈ Rd and h(·) is the ReLU activation. We consider the regression loss `(w, v) = 12 (y − fw,v(x)) 2. The gradient of wi and vjk can be written as ∂`(w, v) ∂wi = (fw,v(x)− y) · vix (19) ∂`(w, v) ∂vjk = (fw,v(x)− y) · wjxk, (20) where vi denotes the i-th row of matrix v. Suppose that the initialization of w and v is: wi i.i.d∼ N(0, δ1) and vij i.i.d∼ N(0, δ2) . We also assume that Exi = Exj = 0 and xi, xj are independent with each other for i 6= j where xi is the i-th dimension. We have Ew,v ∂`(w, v) ∂wi ∂`(w, v) ∂wj = Ew,v(fw,v(x)− y)2 · vix · vjx (21) = Ew,vy2 · vix · vjx+ Ew,v m∑ i=1 (wivix) 2 · vix · vjx− 2Ew,v( m∑ i=1 ywivix) · vix · vjx (22) Because the independence of vi, vj and their expectations are zero, we can obtain Ew,v ∂`(w,v)∂wi ∂`(w,v) ∂wj = 0 for i 6= j. 
Similarly, we can get Ew,v ∂`(w,v)∂wi ∂`(w,v) ∂vjk = 0 and Ew,v ∂`(w,v)∂vj′k′ ∂`(w,v) ∂vjk = 0 for (j, k) 6= (j′, k′). The above analyses show that the gradients for different dimensions are independent at initialization. It has been observed that many weights are kept random during training because of the over-parameterization Balduzzi et al. (2017). So, diagonalization dominant property of C(w) is reasonable. 7.3 SUPPLEMENTARY MATERIALS FOR RESULTS IN SECTION 4 7.3.1 PROOF FOR MEAN ESCAPING TIME Lemma 14 (Lemma 6 in main paper) We suppose C(w) = σga + 2σHa Ha (L(w)− L(a)) on the whole escaping path from a to b. The mean escaping time of the 1-dimensional power-law dynamic is, τ = 2π (1− 1 2κ ) √ Ha|Hb| ( 1 + 2 κησga ∆L )κ− 1 2 , (23) where κ = HaησHa , Ha, Hb are the second-order derivatives of training loss at local minimum a and saddle point b. Proof: According to (Van Kampen, 1992), the mean escaping time τ is expressed as τ = P (w∈Va)∫ Ω JdΩ , where Va is the volume of basin a, J is the probability current that satisfies −∇J(w, t) = ∂ ∂w (g(w) · p(w, t)) + ∂ ∂w ( φ(w) ∂p(w, t) ∂w ) = ∂ ∂w φ(w) · (1 + µ σg ∆L(w) )−κ ∂ ((1 + µ σg ∆L(w) )κ p(w, t) ) ∂w , where φ(w) = η2C(w) and µ = 2σHa Ha , σg = σga and ∆L(w) = L(w) − L(a). Integrating both sides, we obtain J(w) = −φ(w) · ( 1 + µ σg ∆L(w) )−κ ∂((1+ µσg ∆L(w))κp(w,t)) ∂w . Because there is no field source on the escape path, J(w) is fixed constant on the escape path. Multiplying φ(w)−1 · ( 1 + µσg ∆L(w) )κ on both sizes, we have J · ∫ c a φ(w)−1 · ( 1 + µ σg ∆L(w) )κ dw = − ∫ c a ∂ (( 1 + µσg ∆L(w) )κ p(w, t) ) ∂w dw = −0 + p(a). Then we get J = p(a)∫ c a φ(w)−1· ( 1+ µσg ∆L(w) )κ dw . As for the term ∫ c a φ(w)−1 · ( 1 + µσg ∆L(w) ) 1 κ dw, we have ∫ c a φ(w)−1 · ( 1 + µ σg ∆L(w) )κ dw (24) = 2 ησg ∫ c a ( 1 + µ σg ∆L(w) )−1+κ dw = 2 ησg ∫ b c ( 1 + µ σg (∆L(b)− 1 2 |Hb|(w − b)2) )−1+κ dw = 2 ησg ∫ b c ( 1 + µ σg (∆L(b)− 1 2 |Hb|(w − b)2) )−1+κ dw = 2 ησg (1 + µ σg ∆L(b))−1+κ ∫ b c ( 1− µ σg · 1 2 |Hb|(w − b)2 1 + µ σg ∆L(b) )−1+κ dw = 2 ησg (1 + µ σg ∆L(b))−1+κ · ( 1 2 µ σg |Hb| 1 + µ σg ∆L(b) )−1/2 ∫ 1 0 y−1/2(1− y)−1+κdy = 2 ησg (1 + µ σg ∆L(b))− 1 2 +κ √ 2σg µ|Hb| B( 1 2 , κ), where the third formula is based on the second order Taylor expansion. Under the low temperature assumption, we can use the second-order Taylor expansion around the saddle point b. As for the term P (w ∈ Va), we have P (w ∈ Va) = ∫ Va p(w)dV = ∫ w∈Va p(a)(1 + µ σg ∆L(w))−κ = p(a) √ 2σg µHa B( 1 2 , κ − 1 2 ), where we use Taylor expansion of L(w) near local minimum a. Then we have τ = P (w∈Va)∫ Ω JdΩ = P (w∈Va)J because J is a constant. Combining all the results, we can get the result in the lemma. Theorem 15 (Theorem 7 in main paper) Suppose w ∈ Rd and there is only one most possible path path between basin a and the outside of basin a. The mean escaping time for power-law dynamic escaping from basin a to the outside of basin a is τ = 2π √ −det(Hb) (1− d 2κ ) √ det(Ha) 1 |Hbe| ( 1 + 1 ηκσe ∆L )κ− 1 2 , (25) where e indicates the most possible escape direction, Hbe is the only negative eigenvalue of Hb, σe is the eigenvalue of Σga corresponding to the escape direction and ∆L = L(b)− L(a). Proof: According to (Van Kampen, 1992), the mean escaping time τ is expressed as τ = P (w∈Va)∫ Ω JdΩ , where Va is the volume of basin a, J is the probability current that satisfies −∇ · J(w, t) = ∂p(w,t)∂t . 
Under the low temperature assumption, the probability current J concentrates along the direction corresponding the negative eigenvalue of Hbe, and the probability flux of other directions can be ignored. Then we have∫ Ω JdΩ = Je · ∫ Ω ( 1 + 1 ηκ (w − b)T (HbΣ−1g )⊥e(w − b) )−κ+ 12 dΩ, (26) where Je = p(a) · η(1+µσe∆L(b)) −κ+ 1 2 √ µσe|Hbe| 2 √ 2B( 12 ,κ) which is obtained by the calculation of Je for 1-dimensional case in the proof of Lemma 13, and (·)⊥e denotes the directions perpendicular to the escape direction e. Suppose HbΣ−1g are symmetric matrix. Then there exist orthogonal matrix Q and diagonal matrix Λ = diag(λ1, · · · , λd) that satisfy HbΣ−1g = QTΛQ. We also denote v = Q(w − b). We define a sequence as Tk = 1 + 1ηκ · ∑d j=k λjv 2 j for k = 1, · · · , d. As for the term∫ Ω ( 1 + 1ηκ (w − b) T (HbΣ −1 g ) ⊥e(w − b) )−κ+ 12 dΩ, we have∫ Ω ( 1 + 1 ηκ (w − b)T (HbΣ−1g )⊥e(w − b) )−κ+ 12 dΩ = ∫ (1 + 1 ηκ · vTΛv)−κ+ 12 dw = ∫ (1 + 1 ηκ · d∑ j 6=e λjv 2 j ) −κ+ 12 dv =((ηκ)−1λ1) − 12 ∫ T −κ+ 12 2 B( 1 2 , κ)dv = d−2∏ j=0 ((ηκ)−1λj) − 12B( 1 2 , κ− j 2 ) = d−2∏ j=0 ((ηκ)−1λj) − 12 · √ πdΓ(κ− d2 ) Γ(κ) = √ (ηκπ)d−1 · Γ(κ− d−22 ) Γ(κ+ 12 ) √ det((HbΣ −1 g )⊥e) . As for the term P (w ∈ Va), we have P (w ∈ Va) = ∫ Va p(w)dV = p(a) ∫ w∈Va ( 1 + (w − w∗)THaΣ−1g (w − w∗) ) dw (27) =p(a) · √ (ηκπ)d · Γ(κ− d2 ) Γ(κ) √ det((HaΣ −1 g )) (28) where we use Taylor expansion of L(w) near local minimum a. Combined the results for P (w ∈ Va) and J , we can get the result. 7.3.2 FURTHER EXPLANATION ABOUT ASSUMPTION 1-3 We adopt the commonly used assumptions to analyze mean escaping time for dynamic system (Xie et al., 2020; Smith & Le, 2017; Zhou & Du, 2014). Assumption 2 can be replaced by weaker assumption that the system is quasi-equilibrium which is adopted in (Xie et al., 2020). For the differences between quasi-equilibrium and equilibrium, readers can refer to (Xie et al., 2020) for detailed discussions. Assumption 3 is commonly used (Xie et al., 2020; Zhou & Du, 2014). Under Assumption 3, the probability densities will concentrate around minima and the most possible paths. Assumption 3 will make the second order Taylor approximation more reasonable. 7.3.3 EXTENSION TO MORE COMPLEX DYNAMIC ON THE ESCAPING PATH In Lemma 6, we assume that C(w) = σga + 2σHa Ha (L(w) − L(a)) on the whole escaping path from a to b for ease of comparison and presentation. This assumption is not necessary and we can assume a different dynamic near saddle point b. Specially, we can assume the point z is the midpoint on the most possible path beween a and b, where L(z) = (1 − z)L(a) + zL(b). The dynamic with C(w) = σga + 2σHa Ha (L(w) − L(a)) dominates the path a → z and the dynamic with C(w) = σgb + 2σHb Hb (L(b)−L(w)) dominates the path z → b. Then only two things will be changed in proof of Lemma 6. First, we need to change the stationary distribution near saddle points according to its own dynamic in Eq.20. Second, we need to change the integral about probability density on the whole path to sum of integrals on these two sub-paths. Similar proof techniques are adopted for analyzing escaping time of Langevin dynamic in proof of Theorem 4.1 in the work Xie et al. (2020). Since the proof is analogous, we omit the details here. 7.4 PAC-BAYES GENERALIZATION BOUND We briefly introduce the basic settings for PAC-Bayes generalization error. The expected risk is defined as Ex∼P(x)`(w, x). Suppose the parameter follows a distribution with density p(w), the expected risk in terms of p(w) is defined as Ew∼p(w),x∼P(x)`(w, x). 
The empirical risk in terms of p(w) is defined as Ew∼p(w)L(w) = Ew∼p(w) 1n ∑n i=1 `(w, xi). Suppose the prior distribution over the parameter space is p′(w) and p(w) is the distribution on the parameter space expressing the learned hypothesis function. For power-law dynamic, p(w) is its stationary distribution and we choose p′(w) to be Gaussian distribution with center w∗ and covariance matrix I . Then we can get the following theorem. Theorem 16 (Theorem 8 in main paper) For w ∈ Rd, we select the prior distribution p′(w) to be standard Gaussian distribution. For δ > 0, with probability at least 1− δ, the stationary distribution of power-law dynamic has the following generalization error bound, Ew∼p(w),x∼P(x)`(w, x) ≤ Ew∼p(w)L(w) + √ KL(p||p′) + log 1δ + log n+ 2 n− 1 , (29) whereKL(p||p′) ≤ 12 log det(H) det(Σg) + Tr(ηΣgH −1)−2d 4(1− 1κ ( d 2−1)) + d2 log 2 η andP(x) is the underlying distribution of data x. Proof: Eq.(29) directly follows the results in (McAllester, 1999). Here we calculate the Kullback–Leibler (KL) divergence between prior distribution and the stationary distribution of power-law dynamic. The prior distribution is selected to be standard Gaussion distribution with distribution density p′(w) = 1√ (2π)d det (I) exp{− 12 (w−w ∗)T I(w−w∗)}. The posterior distribution density is the stationary distribution for power-law dynamic, i.e., p(w) = 1Z ·(1+ 1 ηκ ·(w−w ∗)THΣ−1g (w−w∗))−κ. Suppose HΣ−1g are symmetric matrix. Then there exist orthogonal matrix Q and diagonal matrix Λ = diag(λ1, · · · , λd) that satisfy HΣ−1g = QTΛQ. We also denote v = Q(w − w∗). We have log ( p(w) p′(w) ) = −κ log(1 + 1 ηκ · (w − w∗)THΣ−1g (w − w∗))− logZ + 1 2 (w − w∗)T I(w − w∗) + d 2 log 2π The KL-divergence is defined as KL(p(w)||p′(w)) = ∫ w p(w) log ( p(w) p′(w) ) dw. Putting v = Q(w − w∗) in the integral, we have KL(p(w)||p′(w)) = d 2 log 2π − logZ + 1 2Z ∫ v vT v ( 1 + 1 ηκ · vTΛv )−κ dv − 1 Zη ∫ v vTΛv · (1 + 1 ηκ · vTΛv)−κdv, (30) where we use the approximation that log(1 + x) ≈ x. We define a sequence as Tk = 1 + 1ηκ ·∑d j=k λjv 2 j for k = 1, · · · , d. We first calculate the normalization constant Z. Z = ∫ (1 + 1 ηκ · vTΛv)−κdw = ∫ (1 + 1 ηκ · d∑ j=1 λjv 2 j ) −κdv =((ηκ)−1λ1) − 12 ∫ T −κ+ 12 2 B( 1 2 , κ− 1 2 )dv = d∏ j=1 ((ηκ)−1λj) − 12B( 1 2 , κ− j 2 ) = d∏ j=1 ((ηκ)−1λj) − 12 · √ πdΓ(κ− d2 ) Γ(κ) We define Zj = ((ηκ)−1λj)− 1 2B ( 1 2 , κ− j 2 ) . 
For the third term in Eq.(30), we have 2Z · III = ∫ v vT v(1 + 1 ηκ vTΛv)−κdv = ∫ v2,···vd ∫ v1 v21 ( 1 + 1 ηκ · vTΛv )−κ dv1 + Z1 ( d∑ j=2 v2j )( 1 + 1 ηκ · d∑ j=2 λjv 2 j )−κ+ 1 2 dv2··· ,vd = ∫ v2,···vd T−κ2 ∫ v1 v21 ( 1 + (ηκ)−1λ1v 2 1 T2 )−κ dv1 + Z1 ( d∑ j=2 v2j )( 1 + 1 ηκ · d∑ j=2 λjv 2 j )−κ+ 1 2 dv2··· ,vd = ∫ v2,··· ,vd T−κ2 ∫ ( T2 (ηκ)−1λ1 ) 3 2 y 1 2 (1 + y)−κ dy + Z1 ( d∑ j=2 v2j )( 1 + 1 ηκ · d∑ j=2 λjv 2 j )−κ+ 1 2 dv2··· ,vd = ∫ v2,··· ,vd ((ηκ)−1λ1) − 3 2 T −κ+ 3 2 2 B ( 3 2 , κ− 3 2 ) + Z1 ( d∑ j=2 v2j )( 1 + 1 ηκ · d∑ j=2 λjv 2 j )−κ+ 1 2 dv2··· ,vd =( λ1 ηκ )− 3 2B ( 3 2 , κ− 3 2 )∫ v2,··· ,vd T −κ+ 3 2 2 dv2··· ,vd + ∫ v2,··· ,vd Z1 ( d∑ j=2 v2j )( 1 + 1 ηκ · d∑ j=2 λjv 2 j )−κ+ 1 2 dv2··· ,vd For term ∫ v2,··· ,vd T − 1κ+ 3 2 2 dv2··· ,vd in above equation, we have ∫ v2,··· ,vd T −κ+ 32 2 dv2··· ,vd = ∫ v3,··· ,vd T−κ+23 ((ηκ) −1λ2) − 12B ( 1 2 , κ− 2 ) dv3,··· ,vd = ∫ v4,··· ,vd T −κ+ 52 4 ((ηκ) −1λ2) − 12 ((ηκ)−1λ3) − 12B ( 1 2 , κ− 5 2 ) B ( 1 2 , κ− 2 ) dv4,··· ,vd = ∫ vd T −κ+ 12 + 1 2×d d d−1∏ j=2 ((ηκ)−1λj) − 12 d−1∏ j=2 B ( 1 2 , κ− ( j 2 + 1) ) dvd = d∏ j=2 ((ηκ)−1λj) − 12 d∏ j=2 B ( 1 2 , κ− ( j 2 + 1) ) Let Aj = ((ηκ)−1λj)− 3 2B ( 3 2 , κ− ( j 2 + 1) ) . According to the above two equations, we can get the recursion 2Z ∫ vT vT−κ1 dv =A1 · ∫ T −κ+ 32 2 + Z1 ∫ v2,··· ,vd d∑ j=2 v2j T−κ+ 122 dv2··· ,vd =A1 · ∫ T −κ+ 3−12 2 dv2···vd + Z1 ·A2 ∫ T −κ+ 42 3 dv3··· ,vd + Z1Z2 ∫ d∑ j=3 v2j T−κ+ 123 dv3··· ,vd = d−1∑ j=1 Aj j−1∏ k=1 Zk ∫ T −κ+ j+1+12 j+1 dvj+1,··· ,vd + d−1∏ k=1 Zk ∫ v2dT −κ+ d−12 d dvd = d−1∑ j=1 ( λj ηκ )− 3 2B ( 3 2 , κ− ( j 2 + 1) ) j−1∏ k=1 ( λk ηκ )− 1 2B ( 1 2 , κ− k 2 ) d∏ s=j+1 (( λs ηκ )− 1 2 d∏ s=j+1 B ( 1 2 , κ− (s 2 + 1) ) + d−1∏ j=1 ( λj ηκ )− 1 2B( 1 2 , κ− j 2 − 1) · (λd ηκ )− 3 2B( 3 2 , κ− (d 2 + 1)) = √ πdΓ(κ− d2 − 1)Tr(H −1Σg) 2Γ(κ) √ (ηκ)−(d+2) det(H−1Σg) We have III = √ πdΓ(κ− d2 − 1)Tr(H −1Σg) 4Γ(κ) √ (ηκ)−(d+2) det(H−1Σg) · d∏ j=1 ((ηκ)−1λj) 1 2 · Γ(κ)√ πdΓ(κ− d2 ) = ηκTr(H−1Σg) 4(κ− d2 − 1) Similarly, for the fourth term in Eq.(30), we have IV = κd 2(κ− d2−1) . Combining all the results together, we can get KL(p||p′) = 12 log det(H) (ηκ)d det(Σg) + log Γ(κ) Γ(κ− d2 ) + Tr(ηΣgH −1)−2d 4(1− 1κ ( d 2−1)) + d2 log 2. Using the fact that log Γ(κ) Γ(κ− d2 ) ≤ d2 log κ, we have KL(p||p ′) ≤ 12 log det(H) det(Σg) + Tr(ηΣgH −1)−2d 4(1− 1κ ( d 2−1)) + d2 log 2 η . 7.5 IMPLEMENTATION DETAILS OF THE EXPERIMENTS 7.5.1 OBSERVATIONS ON THE COVARIANCE MATRIX In this section, we introduce the settings on experiments of the quadratic approximation of covariance of the stochastic gradient on plain convolutional neural network (CNN) and ResNet. For each model, we use gradient descent with small constant learning rate to train the network till it converges. The converged point can be regarded as a local minimum, denoted as w∗. As for the detailed settings of the CNN model, the structure for plain CNN model is input → Conv1→ maxpool → Conv2→ maxpool → fc1→ Relu→ fc2→ output. Both Conv1 and Conv2 use 5 × 5 kernels with 10 channels and no padding. Dimensions of full connected layer fc1 and fc2 are 1600 × 50 and 50 × 10 respectively. We randomly sample 1000 images from FashionMNIST (Xiao et al., 2017) dataset as training set. The initialization method is the Kaiming initialization (He et al., 2015) in PyTorch. The learning rate of gradient descent is set to be 0.1. After 3000 iterations, GD converges with almost 100% training accuracy and the training loss being 1e−3. 
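A minimal PyTorch sketch of the covariance measurement used in this subsection is given below (the exact point-selection formula, w∗_layerL ± i × Scale, is stated after the ResNet settings that follow); the model, data, loss, and layer choices are placeholders rather than the paper's code.

```python
import torch

# Sketch: estimate the trace of the minibatch-gradient covariance at points shifted
# away from the converged parameters w* along one layer, keeping the other layers fixed.
# The same procedure applies to the ResNet settings described next.

def grad_cov_trace(model, layer_param, loss_fn, data, targets, batch_size=32, n_batches=50):
    """Estimate Tr(Cov) of the minibatch gradient w.r.t. `layer_param` by sampling minibatches."""
    grads = []
    n = data.shape[0]
    for _ in range(n_batches):
        idx = torch.randint(0, n, (batch_size,))
        model.zero_grad()
        loss = loss_fn(model(data[idx]), targets[idx])
        loss.backward()
        grads.append(layer_param.grad.detach().flatten().clone())
    g = torch.stack(grads)                              # shape: (n_batches, dim)
    return g.var(dim=0, unbiased=True).sum().item()     # trace = sum of per-coordinate variances

def trace_along_direction(model, layer_param, loss_fn, data, targets, scale=1e-3, steps=10):
    """Shift one layer away from the converged point and record Tr(Cov) at each shift."""
    w_star = layer_param.data.clone()
    traces = []
    for i in range(-steps, steps + 1):
        # Shift all entries of this layer by the same offset (one simple way to move
        # along a direction; the paper's exact point-selection formula is given below).
        layer_param.data = w_star + i * scale
        traces.append((i * scale, grad_cov_trace(model, layer_param, loss_fn, data, targets)))
    layer_param.data = w_star                            # restore the converged parameters
    return traces
# A quadratic fit of `traces` against the shift reproduces the shape of the curves in Figure 1.
```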
As for ResNet, we use the ResNet-18 model (He et al., 2016b) and randomly sample 1000 images from Kaggle’s dogs-vs-cats dataset as training set. The initialization method is the Kaiming initialization (He et al., 2015) in PyTorch. The learning rate of gradient descent is set to be 0.001. After 10000 iterations, GD converges with 100% training accuracy and the training loss being 1e−3. We then calculate the covariance matrix of the stochastic gradient at some points belonging to the local region around w∗. The points are selected according to the formula: w∗layerL ± (i× Scale), where w∗layerL denotes the parameters at layer L, and i × Scale, i ∈ [N ] determines the distance away from w∗layerL. When we select points according to this formula by changing the parameters at layer L, we fixed the parameters at other layers. For both CNN model and ResNet18 model, we select 20 points by setting i = 1, · · · , 10. For example, for CNN model, we choose the 20 points by changing the parameters at the Conv1 layer with Scale = 0.001 and Conv2 layer with Scale = 0.0001, respectively. For ResNet18, we choose the 20 points by changing the parameters for a convolutional layer at the first residual block with Scale = 0.0001 and second residual block with Scale = 0.0001, respectively. The results are shown in Figure.1. The x-axis denotes the distance of the point away from the local minimum and the y-axis shows the value of the trace of covariance matrix at each point. The results show that the covariance of noise in SGD is indeed not constant and it can be well approximated by quadratic function of state (the blue line in the figures), which is consistent with our theoretical results in Section 3.1. 7.5.2 SUPPLEMENTARY EXPERIMENTS ON PARAMETER DISTRIBUTIONS OF DEEP NEURAL NETWORKS For Figure. 3(a), we train LeNet-5 on MNIST dataset using SGD with constant learning rate η = 0.03 for each batchsize till it converges. Parameters are conv2.weight in LeNet-5. For Figure 3(b), we train ResNet-18 on CIFAR10 using SGD with momentum. We do a RandomCrop on training set scaling to 32× 32 with padding = 4 and then a RandomHorizontalF lip. In training, momentum is set to be 0.9 and weight decay is set to be 5e− 4. Initial learning rate in SGD is set to be 0.1 and we using a learning rate decay of 0.1 on {150, 250}-th epoch respectively. We train it until converges after 250 epoch. Parameters are layer1.1.conv2.weight in ResNet-18. We also observe the parameter distribution on many pretrained models. Details for pre-trained models can be found on https://pytorch.org/docs/stable/torchvision/models.html. Figure.7 shows the distribution of parameters trained by SGD can be well fitted by powerlaw distribution. Parameters in this figure are all randomly selected to be features.10.weight, features.14.weight, features.5.expand3 × 3.weight, Mixed_6d.branch7 × 7_3.conv.weight, layer4.2.conv3.weight and features.denseblock2.denselayer1.conv2.weight for VGG-16, AlexNet, SqueezeNet 1.0, Inception v3, Wide ResNet-50-2 and DenseNet-121 respectively. A Q-Q plot is created by plotting quantiles of two probability distributions against one another, which can provide an assessment of "goodness of fit" by how much the solid line close to the dashed line. From Figure.8, it is clear that the solid lines in bottom pictures are closer to dashed lines on most cases, which indicates network parameters can be better fitted by power-law distribution. 
Moreover, solid lines in the upper plots severely deviate from dashed lines on the tail of distribution but those in the bottom plot do not, which means the distribution of parameters is indeed heavy-tailed. 7.5.3 FURTHER EXPLANATION ON EXPERIMENTS IN SECTION 5.2 As for the experiments for 2-D model, we also calculate coefficient of the second-order term for the quadratic curve shown in Figure.4(b), and its value is roughly 30, which matches the result in Figure.4(c) in the sense that the result for SGD is similar with the result for power-law dynamic with λ1 ≈ 32. 7.5.4 ESCAPING EFFICIENCY ON NEURAL NETWORK We follow the settings in (Zhu et al., 2019). For convenience of the readers, here we give the details of this setting again. We use corrupted FashionMNIST dataset which contains 1000 images with correct labels and another 200 images with random labels to be training data. A small LeNet-like network with 11,330 parameters is used. Firstly we run the full gradient decent to reach the parameters w∗ near the global minima. Then we continue training using both Langevin dynamic(GLD) and power-law dynamic(PLD). Following Zhu’s setting, the learning rates for GD, GLD and PLD are ηGD = 0.1, ηGLD = 0.07 and ηPLD = 0.07, respectively. For GLD, noise std σ = 10−4 as Zhu already tuned. For our PLD, wt+1 = wt − η∇L(wt) + η · α∇L(wt) √ 1 + β(wt − w∗)2 ξ, where α, β are hyperparameters, ξ ∼ N (0, I), and stands for Hadamard product. Here we select α = 2.4, β = 2 after grid search. Expected sharpness is measured as Eν∼
1. What is the main contribution of the paper, and how does it relate to previous works?
2. What are the concerns regarding the proof of Theorem 2, and how do they relate to the selection of w*?
3. How does the power-law dynamic perform in high-dimensional spaces, and how does it compare to other dynamics such as Langevin/alpha-stable in terms of benefits and computational cost?
4. Are there any minor comments or suggestions for improvement that can be addressed?
Review
This paper proposes the power-law dynamic of SGD, which considers state-dependent noise. The power-law distribution derived from this dynamic explains the heavy-tailed distribution of parameters trained by SGD. Besides, this dynamic also shows efficient escaping from local minima.
Concerns: The proof of Theorem 2 is not provided in the appendix. But I doubt whether C(w) is well-defined. It is not clear how w* is selected, considering there are multiple local minima. It does not make sense to me if w* is fixed when taking x-->\infty, as the quadratic approximation should be used in the neighborhood of w*. The escaping efficiency of the power-law dynamic is only analyzed in the low-dimensional case. I wonder how it performs in high-dimensional space. Does it provide more benefits than the Langevin/alpha-stable dynamic at the expense of calculating sigma_g and sigma_H?
Minor comments: I think [Li et al., 2017] also proposed state-dependent noise in Theorem 1.
ICLR
Title Dynamic of Stochastic Gradient Descent with State-dependent Noise Abstract Stochastic gradient descent (SGD) and its variants are mainstream methods to train deep neural networks. Since neural networks are non-convex, more and more works study the dynamic behavior of SGD and its impact to generalization, especially the escaping efficiency from local minima. However, these works make the over-simplified assumption that the distribution of gradient noise is stateindependent, although it is state-dependent. In this work, we propose a novel power-law dynamic with state-dependent diffusion to approximate the dynamic of SGD. Then, we prove that the stationary distribution of power-law dynamic is heavy-tailed, which matches the existing empirical observations. Next, we study the escaping efficiency from local minimum of power-law dynamic and prove that the mean escaping time is in polynomial order of the barrier height of the basin, much faster than exponential order of previous dynamics. It indicates that SGD can escape deep sharp minima efficiently and tends to stop at flat minima that have lower generalization error. Finally, we conduct experiments to compare SGD and power-law dynamic, and the results verify our theoretical findings. 1 INTRODUCTION Deep learning has achieved great success in various AI applications, such as computer vision, natural language processing, and speech recognition (He et al., 2016b; Vaswani et al., 2017; He et al., 2016a). Stochastic gradient descent (SGD) and its variants are the mainstream methods to train deep neural networks, since they can deal with the computational bottleneck of the training over large-scale datasets (Bottou & Bousquet, 2008). Although SGD can converge to the minimum in convex optimization (Rakhlin et al., 2012), neural networks are highly non-convex. To understand the behavior of SGD on non-convex optimization landscape, on one hand, researchers are investigating the loss surface of the neural networks with variant architectures (Choromanska et al., 2015; Li et al., 2018b; He et al., 2019b; Draxler et al., 2018; Li et al., 2018a); on the other hand, researchers illustrate that the noise in stochastic algorithm may make it escape from local minima (Keskar et al., 2016; He et al., 2019a; Zhu et al., 2019; Wu et al., 2019a; HaoChen et al., 2020). It is clear that whether stochastic algorithms can escape from poor local minima and finally stop at a minimum with low generalization error is crucial to its test performance. In this work, we focus on the dynamic of SGD and its impact to generalization, especially the escaping efficiency from local minima. To study the dynamic behavior of SGD, most of the works consider SGD as the discretization of a continuous-time dynamic system and investigate its dynamic properties. There are two typical types of models to approximate dynamic of SGD. (Li et al., 2017; Zhou et al., 2019; Liu et al., 2018; Chaudhari & Soatto, 2018; He et al., 2019a; Zhu et al., 2019; Hu et al., 2019; Xie et al., 2020) approximate the dynamic of SGD by Langevin dynamic with constant diffusion coefficient and proved its escaping efficiency from local minima.These works make over-simplified assumption that the covariance matrix of gradient noise is constant, although it is state-dependent in general. The simplified assumption makes the proposed dynamic unable to explain the empirical observation that the distribution of parameters trained by SGD is heavy-tailed (Mahoney & Martin, 2019). 
To model the heavy-tailed phenomenon, Simsekli et al. (2019); Şimşekli et al. (2019) point that the variance of stochastic gradient may be infinite, and they propose to approximate SGD by dynamic driven by α-stable process with the strong infinite variance condition. However, as shown in the work (Xie et al., 2020; Mandt et al., 2017), the gradient noise follows Gaussian distribution and the infinite variance condition does not satisfied. Therefore it is still lack of suitable theoretical explanation on the implicit regularization of dynamic of SGD. In this work, we conduct a formal study on the (state-dependent) noise structure of SGD and its dynamic behavior. First, we show that the covariance of the noise of SGD in the quadratic basin surrounding the local minima is a quadratic function of the state (i.e., the model parameter). Thus, we propose approximating the dynamic of SGD near the local minimum using a stochastic differential equation whose diffusion coefficient is a quadratic function of state. We call the new dynamic power-law dynamic. We prove that its stationary distribution is power-law κ distribution, where κ is the signal to noise ratio of the second order derivatives at local minimum. Compared with Gaussian distribution, power-law κ distribution is heavy-tailed with tail-index κ. It matches the empirical observation that the distribution of parameters becomes heavy-tailed after SGD training without assuming infinite variance of stochastic gradient in (Simsekli et al., 2019). Second, we analyze the escaping efficiency of power-law dynamic from local minima and its relation to generalization. By using the random perturbation theory for diffused dynamic systems, we analyze the mean escaping time for power-law dynamic. Our results show that: (1) Power-law dynamic can escape from sharp minima faster than flat minima. (2) The mean escaping time for power-law dynamic is only in the polynomial order of the barrier height, much faster than the exponential order for dynamic with constant diffusion coefficient. Furthermore, we provide a PAC-Bayes generalization bound and show power-law dynamic can generalize better than dynamic with constant diffusion coefficient. Therefore, our results indicate that the state-dependent noise helps SGD to escape from sharp minima quickly and implicitly learn well-generalized model. Finally, we corroborate our theory by experiments. We investigate the distributions of parameters trained by SGD on various types of deep neural networks and show that they are well fitted by power-law κ distribution. Then, we compare the escaping efficiency of dynamics with constant diffusion or state-dependent diffusion to that of SGD. Results show that the behavior of power-law dynamic is more consistent with SGD. Our contributions are summarized as follows: (1) We propose a novel power-law dynamic with state-dependent diffusion to approximate dynamic of SGD based on both theoretical derivation and empirical evidence. The power-law dynamic can explain the heavy-tailed phenomenon of parameters trained by SGD without assuming infinite variance of gradient noise. (2) We analyze the mean escaping time and PAC-Bayes generalization bound for power-law dynamic and results show that power-law dynamic can escape sharp local minima faster and generalize better compared with the dynamics with constant diffusion. Our experimental results can support the theoretical findings. 
2 BACKGROUND In the empirical risk minimization problem, the objective is $L(w) = \frac{1}{n}\sum_{i=1}^{n}\ell(x_i, w)$, where $x_i, i = 1, \cdots, n$ are $n$ i.i.d. training samples, $w \in \mathbb{R}^d$ is the model parameter, and $\ell$ is the loss function. Stochastic gradient descent (SGD) is a popular optimization algorithm to minimize $L(w)$. The update rule is $w_{t+1} = w_t - \eta \cdot \tilde{g}(w_t)$, where $\tilde{g}(w_t) = \frac{1}{b}\sum_{x \in S_b}\nabla_w \ell(x, w_t)$ is the minibatch gradient calculated on a randomly sampled minibatch $S_b$ of size $b$ and $\eta$ is the learning rate. The minibatch gradient $\tilde{g}(w_t)$ is an unbiased estimator of the full gradient $g(w_t) = \nabla L(w_t)$, and the term $g(w_t) - \tilde{g}(w_t)$ is called the gradient noise in SGD. Langevin Dynamic In (He et al., 2019a; Zhu et al., 2019), the gradient noise is assumed to be drawn from a Gaussian distribution according to the central limit theorem (CLT), i.e., $g(w) - \tilde{g}(w) \sim \mathcal{N}(0, C)$, where the covariance matrix $C$ is a constant matrix for all $w$. Then SGD can be regarded as the numerical discretization of the following Langevin dynamic, $dw_t = -g(w_t)\,dt + \sqrt{\eta}\,C^{1/2}dB_t$, (1) where $B_t$ is a standard Brownian motion in $\mathbb{R}^d$ and $\sqrt{\eta}\,C^{1/2}dB_t$ is called the diffusion term. α-stable Process Simsekli et al. (2019) assume the variance of the gradient noise is unbounded. By the generalized CLT, the distribution of the gradient noise is an α-stable distribution $S(\alpha, \sigma)$, where $\sigma$ is the α-th moment of the gradient noise for given $\alpha$ with $\alpha \in (0, 2]$. Under this assumption, SGD is approximated by a stochastic differential equation (SDE) driven by an α-stable process. 2.1 RELATED WORK There are many works that approximate SGD by a Langevin dynamic, and most of the theoretical results are obtained for the Langevin dynamic with constant diffusion coefficient. From the aspect of optimization, the convergence rate of SGD and its optimal hyper-parameters have been studied in (Li et al., 2017; He et al., 2018; Liu et al., 2018) via optimal control theory. From the aspect of generalization, Chaudhari & Soatto (2018); Zhang et al. (2018); Smith & Le (2017) show that SGD implicitly regularizes the negative entropy of the learned distribution. Recently, the escaping efficiency from local minima of the Langevin dynamic has been studied (Zhu et al., 2019; Hu et al., 2019; Xie et al., 2020). He et al. (2019a) analyze the PAC-Bayes generalization error of the Langevin dynamic to explain the generalization of SGD. Around a local minimum with quadratic loss, the Langevin dynamic with constant diffusion coefficient is an Ornstein-Uhlenbeck process whose stationary distribution is Gaussian, which does not match the empirical observation that the distribution of parameters trained by SGD is heavy-tailed (Mahoney & Martin, 2019; Hodgkinson & Mahoney, 2020; Gurbuzbalaban et al., 2020). Simsekli et al. (2019); Şimşekli et al. (2019) assume the variance of the stochastic gradient is infinite and regard SGD as the discretization of a stochastic differential equation (SDE) driven by an α-stable process. The escaping efficiency for this SDE is also shown in (Simsekli et al., 2019). However, these theoretical results are derived for dynamics with constant diffusion term, although the gradient noise in SGD is state-dependent. There are some related works that analyze state-dependent noise structure in SGD, such as label noise in (HaoChen et al., 2020) and multiplicative noise in (Wu et al., 2019b). These works propose new algorithms motivated by the noise structure, but they do not analyze the escaping behavior of the dynamic of SGD and its impact on generalization. 
Wu et al. (2018) analyze the escaping behavior of SGD taking into account the fluctuations of the second-order derivatives and propose the concept of linear stability. In our work, we propose the power-law dynamic to approximate SGD and analyze its stationary distribution and mean escaping time. 3 APPROXIMATING SGD BY POWER-LAW DYNAMIC In this section, we study the (state-dependent) noise structure of SGD (in Section 3.1) and propose the power-law dynamic to approximate the dynamic of SGD. We first study the 1-dimensional power-law dynamic in Section 3.2 and extend it to the high-dimensional case in Section 3.3. 3.1 NOISE STRUCTURE OF STOCHASTIC GRADIENT DESCENT For non-convex optimization, we investigate the noise structure of SGD around local minima so that we can analyze the escaping efficiency from them. We first describe the quadratic basin where the local minimum is located. Suppose $w^*$ is a local minimum of the training loss $L(w)$ and $g(w^*) = 0$. We call the $\epsilon$-ball $B(w^*, \epsilon)$ with center $w^*$ and radius $\epsilon$ a quadratic basin if the loss function for $w \in B(w^*, \epsilon)$ is equal to its second-order Taylor expansion, $L(w) = L(w^*) + \frac{1}{2}(w - w^*)^T H(w^*)(w - w^*)$. Here, $H(w^*)$ is the Hessian matrix of the loss at $w^*$, which is positive semi-definite. Then we start to analyze the gradient noise of SGD. The full gradient of the training loss is $g(w) = H(w^*)(w - w^*)$. The stochastic gradient is $\tilde{g}(w) = \tilde{g}(w^*) + \tilde{H}(w^*)(w - w^*)$ by Taylor expansion, where $\tilde{g}(\cdot)$ and $\tilde{H}(\cdot)$ are the stochastic versions of the gradient and Hessian calculated on the minibatch. The randomness of the gradient noise comes from two parts: $\tilde{g}(w^*)$ and $\tilde{H}(w^*)$, which reflect the fluctuations of the first-order and second-order derivatives of the model at $w^*$ over different minibatches, respectively. The following proposition gives the variance of the gradient noise. Proposition 1 For $w \in B(w^*, \epsilon) \subset \mathbb{R}$, the variance of the gradient noise is $\sigma(g(w) - \tilde{g}(w)) = \sigma(\tilde{g}(w^*)) + 2\rho(\tilde{g}(w^*), \tilde{H}(w^*))(w - w^*) + \sigma(\tilde{H}(w^*))(w - w^*)^2$, where $\sigma(\cdot)$ and $\rho(\cdot, \cdot)$ are the variance and covariance with respect to the minibatch. From Proposition 1, we can conclude that: (1) The variance of the noise is finite if $\tilde{g}(w^*)$ and $\tilde{H}(w^*)$ have finite variance, because $\rho(\tilde{g}(w^*), \tilde{H}(w^*)) \le \sqrt{\sigma(\tilde{g}(w^*)) \cdot \sigma(\tilde{H}(w^*))}$ by the Cauchy-Schwarz inequality. For fixed $w^*$, a sufficient condition for $\tilde{g}(w^*)$ and $\tilde{H}(w^*)$ to have finite variance is that the training data $x$ are sampled from a bounded domain. This condition is easily satisfied because the training data are usually normalized to a bounded domain before training. In this case, the infinite-variance assumption about the stochastic gradient in the α-stable process is not satisfied. (2) The variance of the noise is state-dependent, which contradicts the assumption in the Langevin dynamic. Notations: For ease of presentation, we use $C(w)$, $\sigma_g$, $\sigma_H$, $\rho_{g,H}$ to denote $\sigma(g(w) - \tilde{g}(w))$, $\sigma(\tilde{g}(w^*))$, $\sigma(\tilde{H}(w^*))$, $\rho(\tilde{g}(w^*), \tilde{H}(w^*))$ in the following, respectively. (In the following, we assume $\sigma_g$ is a positive number.) 3.2 POWER-LAW DYNAMIC According to the CLT, the gradient noise follows a Gaussian distribution if it has finite variance, i.e., $g(w) - \tilde{g}(w) \to_d \mathcal{N}(0, C(w))$ as $b \to \infty$, (2) where $\to_d$ means “converge in distribution”. Using a Gaussian distribution to model the gradient noise in SGD, the update rule of SGD can be written as: $w_{t+1} = w_t - \eta g(w_t) + \eta\xi_t$, $\xi_t \sim \mathcal{N}(0, C(w_t))$. (3) Eq. 3 can be treated as the discretization of the following SDE, which we call the power-law dynamic: $dw_t = -g(w_t)\,dt + \sqrt{\eta C(w_t)}\,dB_t$. (4) The power-law dynamic characterizes how the distribution of $w$ changes as time goes on. 
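As a rough illustration (not the paper's experimental setup), the following Python sketch simulates the discretized power-law dynamic of Eq. (3) on a 1-dimensional quadratic basin, using the state-dependent variance from Proposition 1; all constants are illustrative assumptions.

import numpy as np

# Sketch of the discretized power-law dynamic in Eq. (3) on a 1-D quadratic basin,
# with the state-dependent variance C(w) = sigma_g + 2*rho*(w - w_star) + sigma_h*(w - w_star)**2
# from Proposition 1. All constants below are illustrative assumptions.
rng = np.random.default_rng(0)
h, w_star, eta = 2.0, 0.0, 0.01            # curvature H(w*), minimum location, learning rate
sigma_g, rho, sigma_h = 0.5, 0.0, 4.0      # noise parameters; rho is set to 0 here for simplicity

def noise_variance(w):
    d = w - w_star
    return sigma_g + 2.0 * rho * d + sigma_h * d ** 2

w, samples = 1.0, []
for _ in range(100_000):
    grad = h * (w - w_star)                                        # g(w) = H(w*)(w - w*)
    w = w - eta * grad + eta * np.sqrt(noise_variance(w)) * rng.normal()
    samples.append(w)

# Tail-index of the stationary density derived below: kappa = H / (eta * sigma_H).
print("kappa =", h / (eta * sigma_h))

Setting sigma_h and rho to zero recovers the constant-diffusion update, i.e., the discretization of the Langevin dynamic in Eq. (1).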
The distribution density of the parameter $w$ at time $t$, $p(w, t)$, is determined by the Fokker-Planck equation (Zwanzig's type (Guo & Du, 2014)): $\frac{\partial}{\partial t}p(w, t) = \nabla\cdot\big(p(w, t)\,g(w)\big) + \frac{\eta}{2}\,\nabla\cdot\big(C(w)\nabla p(w, t)\big)$. (5) The stationary distribution of the power-law dynamic can be obtained by setting the left side of the Fokker-Planck equation to zero. The following theorem shows the analytic form of the stationary distribution of the power-law dynamic; it is heavy-tailed and the tail of the distribution density decays at polynomial order of $w - w^*$. This is the reason why we call the stochastic differential equation in Eq. 4 the power-law dynamic. Theorem 2 The stationary distribution density for the 1-dimensional power-law dynamic (Eq. 4) is $p(w) = \frac{1}{Z}\,C(w)^{-\frac{H}{\eta\sigma_H}}\exp\!\left(\frac{4H\rho_{g,H}\,\mathrm{ArcTan}\!\big(C'(w)/\sqrt{4\sigma_H\sigma_g - 4\rho_{g,H}^2}\big)}{\eta\sigma_H\sqrt{4\sigma_H\sigma_g - 4\rho_{g,H}^2}}\right)$, (6) where $C(w) = \sigma_g + 2\rho_{g,H}(w - w^*) + \sigma_H(w - w^*)^2$, $Z$ is the normalization constant and $\mathrm{ArcTan}(\cdot)$ is the arctangent function. We now discuss the properties of $p(w)$. The decreasing rate of $p(w)$ as $w$ moves away from the center $w^*$ is mainly determined by the term $C(w)^{-\frac{H}{\eta\sigma_H}}$ (because the function $\mathrm{ArcTan}(\cdot)$ is bounded), which is a polynomial function of $w - w^*$. Compared with the Gaussian distribution, whose probability density decreases at an exponential rate, the power-law distribution is less concentrated in the quadratic basin $B(w^*, \epsilon)$ and is heavy-tailed. We call $\frac{H}{\eta\sigma_H}$ the tail-index of $p(w)$ and denote it as $\kappa$ in the following. We can conclude that the state-dependent noise results in a heavy-tailed distribution of parameters, which matches the observations in (Mahoney & Martin, 2019). The Langevin dynamic with constant diffusion can be regarded as a special case of the power-law dynamic when $\rho_{g,H} = 0$ and $\sigma_H = 0$; in this case, $p(w)$ degenerates to a Gaussian distribution. Compared with the α-stable process, we do not assume infinite variance of the gradient noise and demonstrate another mechanism that results in a heavy-tailed distribution of parameters. We empirically observe the covariance matrix around the local minimum of the training loss on deep neural networks. The results are shown in Figure 1; readers can refer to Appendix 7.5.1 for more details. We have the following observations: (1) The traces of the covariance matrices for the deep neural networks can be well approximated by quadratic curves, which supports Proposition 1. (2) The minimum of the quadratic curve is nearly located at the local minimum $w^*$, which indicates that the coefficient of the first-order term $\rho_{g,H} \approx 0$. Based on the fact that $\rho_{g,H}$ is not the determining factor of the tail of the distribution in Eq. 6 and on the observations in Figure 1, we consider the simplified form $C(w) = \sigma_g + \sigma_H(w - w^*)^2$. Corollary 3 If $C(w) = \sigma_g + \sigma_H(w - w^*)^2$, the stationary distribution of the 1-dimensional power-law dynamic (Eq. 4) is $p(w) = \frac{1}{Z}\big(1 + \sigma_H\sigma_g^{-1}(w - w^*)^2\big)^{-\kappa}$, (7) where $Z$ is the normalization constant and $\kappa = \frac{H}{\eta\sigma_H}$ is the tail-index. The distribution density in Eq. 7 is known as the power-law κ distribution (Zhou & Du, 2014) (it is also named the q-Gaussian distribution in (Tsallis & Bukman, 1996)). As $\kappa \to \infty$, the distribution density tends to a Gaussian, i.e., $p(w) \propto \exp\!\big(-\frac{H(w - w^*)^2}{\eta\sigma_g}\big)$. The power-law κ distribution becomes more heavy-tailed as $\kappa$ becomes smaller; meanwhile, it assigns higher probability to values far away from the center $w^*$. Intuitively, a smaller $\kappa$ helps the dynamic escape from local minima faster. 
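A quick way to see the heavier tail of Eq. (7) is to compare the fraction of probability mass far from the center with that of the Gaussian limit; the grid-based sketch below does this, with all parameter values chosen purely for illustration (mass beyond the grid is ignored).

import numpy as np

# Compare the power-law kappa density of Eq. (7) with its kappa -> infinity Gaussian
# limit via the fraction of mass in the tail |w - w*| > 3 (here w* = 0).
# Parameter values are illustrative assumptions; this is a rough grid-based check.
sigma_g, sigma_h, eta, h_curv = 1.0, 1.0, 0.1, 1.0
w = np.linspace(-20.0, 20.0, 40001)

def tail_fraction(unnormalized_density):
    density = unnormalized_density / unnormalized_density.sum()
    return density[np.abs(w) > 3.0].sum()

gaussian = np.exp(-h_curv * w ** 2 / (eta * sigma_g))               # kappa -> infinity limit
print("Gaussian-limit tail fraction:", tail_fraction(gaussian))
for kappa in (1.0, 2.0, 10.0):
    power_law = (1.0 + sigma_h / sigma_g * w ** 2) ** (-kappa)      # Eq. (7), unnormalized
    print(f"kappa = {kappa}: tail fraction = {tail_fraction(power_law):.4f}")

Smaller kappa leaves visibly more mass far from the center, consistent with the discussion above.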
In the approximation of the dynamic of SGD, $\kappa$ equals the signal (i.e., $H(w^*)$) to noise (i.e., $\eta\sigma_H$) ratio of the second-order derivative at $w^*$, and $\kappa$ is linked with three factors: (1) the curvature $H(w^*)$; (2) the fluctuation of the curvature over the training data; (3) the hyper-parameters, including $\eta$ and the minibatch size $b$. Please note that $\sigma_H$ decreases linearly as the batch size $b$ increases. 3.3 MULTIVARIATE POWER-LAW DYNAMIC In this section, we extend the power-law dynamic to the $d$-dimensional case. We first illustrate the covariance matrix $C(w)$ of the gradient noise in SGD. We use subscripts to denote the elements of a vector or a matrix. We use $\Sigma_g$ to denote the covariance matrix of $\tilde{g}(w^*)$ and assume that $\Sigma_g$ is isotropic (i.e., $\Sigma_g = \sigma_g \cdot I$). We also assume that $\mathrm{Cov}(\tilde{H}_i(w^*), \tilde{H}_j(w^*))$ are equal for all $i, j$. It can be shown that $C(w) = \Sigma_g\big(1 + (w - w^*)^T\Sigma_H\Sigma_g^{-1}(w - w^*)\big)$. Similarly to the 1-dimensional case, we omit the first-order term in $w - w^*$ in $C(w)$. Readers can refer to Proposition 10 in Appendix 7.2 for the detailed derivation. We suppose that the signal-to-noise ratio of $\tilde{H}(w^*)$ can be characterized by a scalar $\kappa$, i.e., $\eta\Sigma_H = \frac{1}{\kappa}H(w^*)$. Then $C(w)$ can be written as $C(w) = \Sigma_g\big(1 + \frac{1}{\eta\kappa}(w - w^*)^T H(w^*)\Sigma_g^{-1}(w - w^*)\big)$. (8) Theorem 4 Suppose $w \in \mathbb{R}^d$ and $C(w)$ has the form in Eq. (8) for $w \in B(w^*, \epsilon)$. Then the stationary distribution density of the power-law dynamic is $p(w) = \frac{1}{Z}\big[1 + \frac{1}{\eta\kappa}(w - w^*)^T H(w^*)\Sigma_g^{-1}(w - w^*)\big]^{-\kappa}$ (9) for $w \in B(w^*, \epsilon)$, where $Z$ is the normalization constant and $\kappa$ satisfies $\eta\Sigma_H = \frac{1}{\kappa}H(w^*)$. Remark: The multivariate power-law κ distribution (Eq. 9) is a natural extension of the 1-dimensional case. Actually, the assumptions on $\Sigma_g$ and $\kappa$ can be replaced by just assuming that $\Sigma_g$, $H(w^*)$, $\Sigma_H$ are codiagonalizable. Readers can refer to Proposition 11 in Appendix 7.2 for the derivation. 4 ESCAPING EFFICIENCY OF POWER-LAW DYNAMIC In this section, we analyze the escaping efficiency of the power-law dynamic from local minima and its relation to generalization. Specifically, we analyze the mean escaping time for $w_t$ to escape from a basin. As shown in Figure 2, we suppose that there are two basins whose bottoms are denoted as $a$ and $c$ respectively, and the saddle point $b$ is the barrier between the two basins. The barrier height is denoted as $\Delta L = L(b) - L(a)$. Definition 5 Suppose $w_t$ starts at the local minimum $a$. We denote the time for $w_t$ to first reach the saddle point $b$ as $\inf\{t > 0 \mid w_0 = a, w_t = b\}$. The mean escaping time $\tau$ is defined as $\tau = \mathbb{E}_{w_t}[\inf\{t > 0 \mid w_0 = a, w_t = b\}]$. We first give the mean escaping time for the 1-dimensional case in Lemma 6 and then give the mean escaping time for the high-dimensional power-law dynamic in Theorem 7. To analyze the mean escaping time, we make the following assumptions. Assumption 1: The loss function around critical points can be written as $L(w) = L(w^*) + \frac{1}{2}(w - w^*)^T H(w^*)(w - w^*)$, where $w^*$ is a critical point. Assumption 2: The system is in equilibrium near minima, i.e., $\frac{\partial p(w,t)}{\partial t} = 0$. Assumption 3: (Low temperature assumption) The gradient noise is small, i.e., $\eta\sigma_g \ll \Delta L$. These three assumptions are commonly used in analyzing the escaping time of a dynamic (Xie et al., 2020; Zhou & Du, 2014). Because both $a$ and $b$ are critical points, we can apply Assumption 1 to obtain the loss surface around them. We put more discussion of the assumptions in Appendix 7.3.2. We suppose basin $a$ is quadratic and the variance of the noise has the form $C(w) = \sigma_{g_a} + \sigma_{H_a}(w - a)^2$, which can also be written as $C(w) = \sigma_{g_a} + \frac{2\sigma_{H_a}}{H_a}(L(w) - L(a))$. 
Furthermore, we suppose that $C(w) = \sigma_{g_a} + \frac{2\sigma_{H_a}}{H_a}(L(w) - L(a))$ on the whole escaping path from $a$ to $b$ (not just near the local minimum $a$). It means that the variance of the gradient noise becomes larger as the loss becomes larger. The following lemma gives the mean escaping time of the power-law dynamic for the 1-dimensional case. Lemma 6 Suppose that Assumptions 1-3 are satisfied and $C(w) = \sigma_{g_a} + \frac{2\sigma_{H_a}}{H_a}(L(w) - L(a))$ on the whole escaping path from $a$ to $b$. The mean escaping time of the 1-dimensional power-law dynamic is $\tau = \frac{2\pi}{(1 - \frac{1}{2\kappa})\sqrt{H_a|H_b|}}\Big(1 + \frac{2}{\kappa\eta\sigma_{g_a}}\Delta L\Big)^{\kappa - \frac{1}{2}}$, (10) where $\kappa = \frac{H_a}{\eta\sigma_{H_a}} > \frac{1}{2}$, and $H_a$ and $H_b$ are the second-order derivatives of the training loss at the local minimum $a$ and at the saddle point $b$, respectively. The proof of Lemma 6 is based on the results in (Zhou & Du, 2014); we provide a full proof in Appendix 7.3.1. For the dynamic near the saddle point, we simply assume that it is the same as that near the local minimum; this assumption is not necessary, and we put the extension to more complex dynamics in Appendix 7.3.3. We summarize the mean escaping time of the power-law dynamic and of the dynamics in previous works in Table 1. Based on these results, we have the following discussion. Comparison with other dynamics: (1) Both the power-law dynamic and the Langevin dynamic can escape sharp minima faster than flat minima, where the sharpness is measured by $H_a$ and a larger $H_a$ corresponds to a sharper minimum. The power-law dynamic improves the order of the barrier height (i.e., $\Delta L$) from exponential to polynomial compared with the Langevin dynamic, which implies that SGD can escape from deep basins more efficiently. (2) The mean escaping time for the α-stable process is independent of the barrier height, but it is in polynomial order of the width of the basin (i.e., width $= |b - a|$). Compared with the α-stable process, the result for the power-law dynamic is superior in the sense that it is also in polynomial order of the width (if $\Delta L \approx O(|b - a|^2)$), and the power-law dynamic does not rely on the infinite-variance assumption. Based on Lemma 6, we analyze the mean escaping time for the $d$-dimensional case. Under the low temperature condition, the probability density concentrates only along the most possible escaping paths in the high-dimensional landscape. For a rigorous definition of most possible escaping paths, readers can refer to Section 3 in (Xie et al., 2020). For simplicity, we consider the case where there is only one most possible escaping path between basin $a$ and basin $c$. Specifically, the Hessian at the saddle point $b$ has only one negative eigenvalue, and the most possible escaping direction is the direction corresponding to the negative eigenvalue of the Hessian at $b$. Theorem 7 Suppose that Assumptions 1-3 are satisfied. For $w \in \mathbb{R}^d$, we suppose $C(w) = \Sigma_{g_a} + \frac{2}{\eta\kappa}(L(w) - L(a))$ on the whole escaping path from $a$ to $b$ and there is only one most possible path between basin $a$ and basin $c$. The mean escaping time for the power-law dynamic escaping from basin $a$ to basin $c$ is $\tau = \frac{2\pi\sqrt{-\det(H_b)}}{(1 - \frac{d}{2\kappa})\sqrt{\det(H_a)}}\,\frac{1}{|H_{be}|}\Big(1 + \frac{1}{\eta\kappa\sigma_e}\Delta L\Big)^{\kappa - \frac{1}{2}}$, (11) where $e$ indicates the most possible escaping direction, $H_{be}$ is the only negative eigenvalue of $H_b$, $\sigma_e$ is the eigenvalue of $\Sigma_{g_a}$ that corresponds to the escaping direction, $\Delta L = L(b) - L(a)$, and $\det(\cdot)$ is the determinant of a matrix. Remark: In the $d$-dimensional case, the flatness is measured by $\det(H_a)$. A small numerical illustration of Eq. (10) is sketched below. 
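The polynomial dependence on the barrier height in Eq. (10) can be made concrete with a few numbers. In the sketch below all parameter values are illustrative assumptions; the exponential reference is the kappa -> infinity limit of Eq. (10), exp(2*dL/(eta*sigma_ga)), which corresponds to a constant-diffusion, Kramers-type dependence.

import math

# Evaluate the mean escaping time of Eq. (10) for several barrier heights and
# compare it with its kappa -> infinity (exponential) limit.
# All parameter values are illustrative assumptions.
def powerlaw_escape_time(h_a, h_b_abs, sigma_ga, sigma_ha, eta, d_l):
    kappa = h_a / (eta * sigma_ha)                      # tail-index; must exceed 1/2
    prefactor = 2.0 * math.pi / ((1.0 - 1.0 / (2.0 * kappa)) * math.sqrt(h_a * h_b_abs))
    return prefactor * (1.0 + 2.0 * d_l / (kappa * eta * sigma_ga)) ** (kappa - 0.5)

eta, h_a, h_b_abs, sigma_ga, sigma_ha = 0.1, 1.0, 1.0, 1.0, 1.0
for d_l in (1.0, 5.0, 10.0):
    tau = powerlaw_escape_time(h_a, h_b_abs, sigma_ga, sigma_ha, eta, d_l)
    exp_limit = math.exp(2.0 * d_l / (eta * sigma_ga))
    print(f"dL = {d_l}: power-law tau = {tau:.3e}   exponential limit = {exp_limit:.3e}")

With these numbers the power-law time grows like a degree-(kappa - 1/2) polynomial in the barrier height, while the exponential limit blows up far more quickly.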
If $H_a$ has zero eigenvalues, we can replace $H_a$ by $H_a^+$ in Theorem 7, where $H_a^+$ is obtained by projecting $H_a$ onto the subspace spanned by the eigenvectors corresponding to the positive eigenvalues of $H_a$. This is because, by Taylor expansion, the loss $L(w)$ only depends on the positive eigenvalues and the corresponding eigenvectors of $H_a$, i.e., $L(w) = L(a) + \frac{1}{2}(w - a)^T H_a (w - a) = L(a) + \frac{1}{2}(\mathcal{P}(w - a))^T \Lambda_{H_a^+}\,\mathcal{P}(w - a)$, where $\Lambda_{H_a^+}$ is a diagonal matrix composed of the non-zero eigenvalues of $H_a$ and the operator $\mathcal{P}(\cdot)$ projects a vector onto the subspace corresponding to the non-zero eigenvalues of $H_a$. Therefore, the dimension $d$ in Theorem 7 can be regarded as the dimension of the subspace composed of directions with large eigenvalues. It has been observed that most of the eigenvalues of $H$ are very small (Sagun et al., 2016). Therefore, $d$ will not be a large number, and the power-law dynamic in the multi-dimensional case will inherit the benefit of the 1-dimensional case compared with the Langevin dynamic and the α-stable process. The next theorem gives an upper bound on the generalization error of the stationary distribution of the power-law dynamic, which shows that a flatter minimum has smaller generalization error. Theorem 8 Suppose that $w \in \mathbb{R}^d$ and $\kappa > \frac{d}{2}$. For $\delta > 0$, with probability at least $1 - \delta$, the stationary distribution of the power-law dynamic has the following generalization error bound, $\mathbb{E}_{w\sim p(w),\,x\sim P(x)}\,\ell(w, x) \le \mathbb{E}_{w\sim p(w)}L(w) + \sqrt{\frac{KL(p\|p') + \log\frac{1}{\delta} + \log n + 2}{n - 1}}$, where $KL(p\|p') \le \frac{1}{2}\log\frac{\det(H)}{\det(\Sigma_g)} + \frac{\mathrm{Tr}(\eta\Sigma_g H^{-1}) - 2d}{4(1 - \frac{1}{\kappa}(\frac{d}{2} - 1))} + \frac{d}{2}\log\frac{2}{\eta}$, $p(w)$ is the stationary distribution of the $d$-dimensional power-law dynamic, $p'(w)$ is a prior distribution which is selected to be the standard Gaussian distribution, $P(x)$ is the underlying distribution of the data $x$, and $\det(\cdot)$ and $\mathrm{Tr}(\cdot)$ are the determinant and trace of a matrix, respectively. We make the following observations on the results in Theorem 8. For the 1-dimensional case, if $H > \frac{\eta}{2(1 + \frac{1}{2\kappa})}$, the KL divergence decreases as $H$ decreases. For $d > 1$ and fixed $\mathrm{Tr}(\Sigma_g H^{-1})$ and $\det(\Sigma_g)$, the generalization error (i.e., $\mathbb{E}_{w\sim p(w),x\sim P(x)}\ell(w, x) - \mathbb{E}_{w\sim p(w)}L(w)$) decreases as $\det(H)$ decreases, which indicates that a flatter minimum has smaller generalization error. Moreover, if $2d > \mathrm{Tr}(\eta\Sigma_g H^{-1})$, the generalization error decreases as $\kappa$ increases; when $\kappa \to \infty$, the generalization error tends to that of the Langevin dynamic. Combining the mean escaping time and the generalization error bound, we conclude that state-dependent noise makes SGD escape from sharp minima faster and implicitly biases it toward a flatter model that generalizes better. 5 EXPERIMENTS In this section, we conduct experiments to verify the theoretical results. We first study how well the power-law κ distribution fits the distribution of parameters trained by SGD. Then we compare the escaping behavior of the power-law dynamic, the Langevin dynamic and SGD. 5.1 FITTING PARAMETER DISTRIBUTION USING POWER-LAW DISTRIBUTION We investigate the distribution of parameters trained by SGD on deep neural networks and use the power-law κ distribution to fit the parameter distribution. We first use SGD to train various types of deep neural networks until convergence. For each network, we run SGD with different minibatch sizes over the range {64, 256, 1024}. For the settings of the other hyper-parameters, readers can refer to Appendix 7.5.2. We plot the distribution of the model parameters at a given layer using a histogram. 
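The fitting step described next can be sketched in a few lines. The paper estimates kappa with Mathematica's TsallisQGaussianDistribution[]; the SciPy maximum-likelihood fit below is only an assumed substitute, run on synthetic heavy-tailed stand-in data rather than real network weights.

import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

# Rough sketch: maximum-likelihood fit of a power-law kappa (q-Gaussian-type) density
#     p(w) = Z(kappa, s)^(-1) * (1 + s * (w - mu)**2)**(-kappa),   kappa > 1/2, s > 0,
# to a vector of values. The synthetic "weights" below only stand in for a real
# flattened parameter tensor; this is an assumed substitute for the Mathematica fit.
def neg_log_likelihood(theta, w):
    mu, log_s, log_k = theta
    s, kappa = np.exp(log_s), 0.5 + np.exp(log_k)          # enforce s > 0 and kappa > 1/2
    # Normalization: Z = sqrt(pi / s) * Gamma(kappa - 1/2) / Gamma(kappa)
    log_z = 0.5 * (np.log(np.pi) - log_s) + gammaln(kappa - 0.5) - gammaln(kappa)
    return np.sum(kappa * np.log1p(s * (w - mu) ** 2)) + w.size * log_z

rng = np.random.default_rng(0)
weights = rng.standard_t(df=5, size=20_000) * 0.05          # heavy-tailed stand-in data

result = minimize(neg_log_likelihood, x0=np.array([0.0, 0.0, 1.0]), args=(weights,))
kappa_hat = 0.5 + np.exp(result.x[2])
print("fitted kappa:", kappa_hat)

On real network weights, the stand-in array would simply be replaced by the flattened parameter tensor of the chosen layer.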
Next, we use the power-law κ distribution to fit the distribution of the parameters and estimate the value of κ via the built-in function "TsallisQGaussianDistribution[]" in the Mathematica software. We show results for LeNet-5 on the MNIST dataset and ResNet-18 on the CIFAR10 dataset (LeCun et al., 2015; He et al., 2016b) in this section, and put results for other network architectures in Appendix 7.5.2. In Figure 3, we report the generalization error (i.e., test error minus training error) and the values of κ that best fit the histograms (the training errors under the six settings are almost zero). We have the following observations: (1) The distribution of the parameters trained by SGD can be well fitted by the power-law κ distribution (blue curve). (2) As the minibatch size becomes larger, κ becomes larger. This is because the noise $\sigma_H$ decreases linearly as the minibatch size becomes larger and $\kappa = \frac{H}{\eta\sigma_H}$. (3) As κ becomes smaller, the generalization error becomes lower. It indicates that κ also serves as an indicator of generalization. These results are consistent with the theory in Section 4. 5.2 COMPARISON ON ESCAPING EFFICIENCY We use a 2-dimensional model to simulate the escaping efficiency from minima for the power-law dynamic, the Langevin dynamic and SGD. We design a non-convex 2-dimensional loss $L(w) = \frac{1}{n}\sum_{i=1}^{n}\ell(w - x_i)$, where $\ell(w) = \frac{1}{5}\sum_{j=1}^{2}|w_j - 1|^{2.5}\cdot|w_j + 1|^{3}$ and the training data $x_i \sim \mathcal{N}(0, 0.01 I_2)$. We regard the following optimization iterates as the numerical discretization of the power-law dynamic, $w_{t+1} = w_t - \eta g(w_t) + \eta\lambda_2\sqrt{1 + \lambda_1(w_t - w^*)^2}\odot\xi$, where $\xi \sim \mathcal{N}(0, I_2)$, $\lambda_1, \lambda_2$ are two hyper-parameters and $\odot$ stands for the Hadamard product. Note that if we set $\lambda_1 = 0$, it can be regarded as the discretization of the Langevin dynamic. We set the learning rate $\eta = 0.025$, and we take 500 iterations in each training run. In order to match the trace of the covariance matrix of the stochastic gradient at the minimum $w^*$ across the methods above, $\lambda_2$ is chosen to satisfy $\mathrm{Tr}(\mathrm{Cov}(\lambda_2\xi)) = \mathrm{Tr}(\mathrm{Cov}(\tilde{g}(w^*)))$. We compare the success rate of escaping for the power-law dynamic, the Langevin dynamic and SGD by repeating the experiment 100 times. To analyze the noise term $\lambda_1$, we choose different values of $\lambda_1$ and evaluate the corresponding success rate of escaping, as shown in Figure 4(c). The results show that: (1) there is a positive correlation between $\lambda_1$ and the success rate of escaping; (2) the power-law dynamic can mimic the escaping efficiency of SGD, while the Langevin dynamic cannot. We then scale the loss function by 0.9 to make the minima flatter and repeat all the algorithms under the same setting. The success rate for the scaled loss function is shown in Figure 4(d). We observe that all dynamics escape flatter minima more slowly. 6 CONCLUSION In this work, we study the dynamic of SGD by investigating the state-dependent variance of the stochastic gradient. We propose the power-law dynamic with state-dependent diffusion to approximate the dynamic of SGD. We analyze the escaping efficiency from local minima and the PAC-Bayes generalization error bound for the power-law dynamic. The results indicate that state-dependent noise helps SGD escape from poor local minima faster and generalize better. We present direct empirical evidence to support our theoretical findings. This work may motivate many interesting research topics, for example, non-Gaussian state-dependent noise, new types of state-dependent regularization tricks in deep learning algorithms, and more accurate characterizations of the loss surface of deep neural networks. We will investigate these topics in future work. 
7 APPENDIX 7.1 POWER-LAW DYNAMIC AND STATIONARY DISTRIBUTION Theorem 9 (Theorem 2 in main paper) The stationary distribution density for 1-dimensional powerlaw dynamic (Eq.4) is p(w) = 1 Z (C(w)) − H ησH exp H ( 4ρg,H ·ArcTan ( C′(w)/ √ 4σHσg − 4ρ2g,H )) ησH √ 4σHσg − 4ρ2g,H , whereC(w) = σg+2ρg,H(w−w∗)+σH(w−w∗)2, Z is the normalization constant andArcTan(·) is the arctangent function. Proof: We denote the function H(4ρg,H ·ArcTan(C′(w)/ √ 4σHσg−4ρg,H)) ησH √ 4σHσg−4ρ2g,H as h(w). According to the Fokker-Planck equation, p(w) satisfies 0 = ∇p(w)g(w) + η 2 · ∇ · (C(w)∇p(w)) = ∇ · [ (p(w) · ∇L(w)) + η 2 C(w)∇p(w) ] = ∇ · [η 2 C(w) − HησH +1eh(w)∇(C(w) H ησH · e−h(w) · p(w)) ] Readers can check the third equality by calculating∇(C(w) H ησH · e−h(w) · p(w)) with C(w) = σg + 2ρg,H(w−w∗)+σH(w−w∗)2. Because the left side equals zero, we have C(w) H ησH ·e−h(w) ·p(w) equals to constant. So p(w) ∝ C(w)− H ησH ·eh(w) ·p(w). So we can get the conclusion in the theorem. Theorem 10 (Corollary 3 in main paper) If C(w) = σg + σH(w−w∗)2, the stationary distribution density of power-law dynamic is p(w) = 1 Z (1 + σHσ −1 g (w − w∗)2)−κ, (12) where Z = ∫ w (1 + σHσ −1 g (w − w∗)2)−κdw is the normalization constant and κ = HησH is the tail-index. Proof: According to the Fokker-Planck equation, p(w) satisfies 0 = ∇p(w)g(w) + η 2 · ∇ · (C(w)∇p(w)) = ∇(p(w) · ∇L(w)) + η 2 ∇ · (σg + 2σH H (L(w)− L(w∗)))∇p(w) = ∇ · η 2 C(w)(1 + 2σH Hσg (L(w)− L(w∗))) H −ησH ∇(1 + 2σH Hσg (L(w)− L(w∗))) H ησH p(w) Because the left side equals zero, we have (1 + 2σHHσg (L(w)− L(w ∗))) H ησH p(w) equals to constant. So p(w) ∝ (1 + 2σHHσg (L(w)− L(w ∗))) H −ησH . So we can get the conclusion in the theorem. We plot the un-normalized distribution density for 1-dimensional power-law dynamics with different κ in Figure 5. For the four curves, we set β = 10. We set κ = 1, 0.5, 0.1, 0 and use green, red, purple and blue line to illustrate their corresponding density function, respectively. When κ = 0, it is Gaussian distribution. From the figure, we can see that the tail for power-law κ-distribution is heavier than Gaussian distribution. Actually, for any given time t, the distribution p(w, t) for wt that satisfies power-law dynamic has analytic form, i.e., p(w, t) ∝ (1 + Hηκσ(t) (w −w(t)) 2)−κ, where w(t) = w∗ + (w0 −w∗)e−Ht and σ(t) is a function of σg and t. Readers can refer Eq.18 - Eq.23 in (Tsallis & Bukman, 1995) for the detailed expression. 7.2 SGD AND MULTIVARIATE POWER-LAW DYNAMIC The following proposition shows the covariance of stochastic gradient in SGD in d-dimensional case. We use the subscripts to denote the elements in a vector or a matrix. Proposition 11 For w ∈ Rd, we use C(w) to denote the covariance matrix of stochastic gradient g̃(w) = g̃(w∗)+H̃(w−w∗) and Σ to denote the covariance matrix of g̃(w∗). IfCov(g̃i(w∗), H̃jk) = 0,∀i, j, k, we have Cij(w) = Σij + (w − w∗)TA(ij)(w − w∗), (13) where Σij = Cov(g̃i(w∗), g̃j(w∗)), A(ij) is a d × d matrix with elements A(ij)ab = Cov(H̃ia, H̃jb) with a ∈ [d], b ∈ [d]. Eq.13 can be obtained by directly calculating the covariance of g̃i(w) and g̃j(w) where g̃i(w) = g̃i(w ∗) + ∑d a=1 H̃ia(wa − w∗a), g̃j(w) = g̃j(w∗) + ∑d b=1 H̃jb(wb − w∗b ). In order to get a analytic tractable form of C(w), we make the following assumptions: (1) If Σij = 0, A(ij) is a zero matrix; (2) For Σij 6= 0, A (ij) Σij are equal for all i ∈ [d], j ∈ [d]. The first assumption is reasonable because both Σij andA(ij) reflect the dependence of the derivatives along the i-th direction and j-th direction. 
Let ΣH = A (ij) Σij ,C(w) can be written asC(w) = Σg(1+(w−w∗)TΣH(w−w∗)). The d-dimensional power-law dynamic is written as dwt = −H(w − w∗)dt+ √ ηC(w)dBt, (14) where C(w) = Σg(1 + (w − w∗)TΣH(w − w∗)) which is a symmetric positive definite matrix that C(w)1/2 exists. The following proposition shows the stationary distribution of the d-dimensional power-law dynamic. Proposition 12 Suppose Σg,ΣH , H are codiagonalizable, i.e., there exist orthogonal matrix Q and diagonal matrices Λ,Γ,Π to satisfy Σg = QTΛQ,ΣH = QTΓQ,H = QTΠQ. Then, the stationary distribution of power-law dynamic is p(w) = 1 Z (1 + (w − w∗)TΣH(w − w∗))−κ, (15) where Z is the normalization constant and κ = Tr(H)ηTr(ΣHΣg) . Proof: Under the codiagonalization assumption on Σg,ΣH , H , Eq.15 can be rewritten as dvt = −Πvtdt+ √ ηΛ(1 + vTt Γvt)dBt if we let vt = Q(wt − w∗). We use φ(v) = ηC(v)2 = η 2 Λ(1 + v TΓv), the stationary probability density p(v) satisfies the Smoluchowski equation: 0 = d∑ i=1 ∂ ∂vi (Πivi · p(v)) + d∑ i=1 ∂ ∂vi · ( φi(w) ∂ ∂vi p(v) ) (16) = d∑ i=1 ∂ ∂vi (Πi·vi · p(v)) + d∑ i=1 ∂ ∂vi · ( ηΛi 2 (1 + vTΓv) ∂ ∂vi p(v) ) . (17) According to the result for 1-dimensional case, we have the expression of p(v) is p(v) ∝ (1 + vTΓv)−κ. To determine the value of κ, we put p(v) in the Smoluchowski equation to obtain d∑ i=1 Πip(v)− 2κ d∑ i=1 Πivi · Γivi · (1 + vTΓv)−κ−1 = d∑ i=1 ∂ ∂vi ( ηΛiκ(1 + v TΓv)−κ · Γivi ) = d∑ i=1 ( ηΛiκ(1 + v TΓv)−κ · Γi ) − 2 d∑ i=1 ( ηΛiκ 2(1 + vTΓv)−κ−1 · (Γivi)2 ) . The we have ∑d i=1 Πi = ηκ ∑d i=1 ΛiΓi. So we have κ = Tr(H) ηTr(ΣHΣg) . According to Proposition 11, we can also consider another assumption on Σg,ΣH , H without assuming their codiagonalization. Instead, we assume (1) If Σij = 0, A(ij) is a zero matrix; (2) For Σij 6= 0,A(ij) are equal for all i ∈ [d], j ∈ [d] and we denoteA(ij) = ΣH . We suppose η ·ΣH = κH . (3) Σg = σg · Id which is isotropic. Under these assumptions, we can get the following theorem. Theorem 13 (Theorem 4 in main paper) If w is d-dimensional and C(w) has the form in Eq.(8). The stationary distribution density of multivariate power-law dynamic is p(w) = 1 Z [1 + 1 ηκ (w − w∗)THΣ−1g (w − w∗)]−κ (18) where Z = ∫∞ −∞[1 + 1 ηκ (w − w ∗)THΣ−1g (w − w∗)]−κdw is the normalization constant. The proof for Theorem 12 is similar to that for Proposition 11. Readers can check that p(w) satisfies the Smoluchowski equation. An example to illustrate why C(w) is diagonally dominant. In Theorem 13, C(w) is assumed to be diagonally dominant. Diagonally dominant indicates that the variance of each dimension of g̃(w) is significantly larger than the covariance of two different dimensions of g̃(w). Consider a two layer fully-connected linear neural network fw,v(x) = wvx where w ∈ R1×m, v ∈ Rm×d, x ∈ Rd and h(·) is the ReLU activation. We consider the regression loss `(w, v) = 12 (y − fw,v(x)) 2. The gradient of wi and vjk can be written as ∂`(w, v) ∂wi = (fw,v(x)− y) · vix (19) ∂`(w, v) ∂vjk = (fw,v(x)− y) · wjxk, (20) where vi denotes the i-th row of matrix v. Suppose that the initialization of w and v is: wi i.i.d∼ N(0, δ1) and vij i.i.d∼ N(0, δ2) . We also assume that Exi = Exj = 0 and xi, xj are independent with each other for i 6= j where xi is the i-th dimension. We have Ew,v ∂`(w, v) ∂wi ∂`(w, v) ∂wj = Ew,v(fw,v(x)− y)2 · vix · vjx (21) = Ew,vy2 · vix · vjx+ Ew,v m∑ i=1 (wivix) 2 · vix · vjx− 2Ew,v( m∑ i=1 ywivix) · vix · vjx (22) Because the independence of vi, vj and their expectations are zero, we can obtain Ew,v ∂`(w,v)∂wi ∂`(w,v) ∂wj = 0 for i 6= j. 
Similarly, we can get Ew,v ∂`(w,v)∂wi ∂`(w,v) ∂vjk = 0 and Ew,v ∂`(w,v)∂vj′k′ ∂`(w,v) ∂vjk = 0 for (j, k) 6= (j′, k′). The above analyses show that the gradients for different dimensions are independent at initialization. It has been observed that many weights are kept random during training because of the over-parameterization Balduzzi et al. (2017). So, diagonalization dominant property of C(w) is reasonable. 7.3 SUPPLEMENTARY MATERIALS FOR RESULTS IN SECTION 4 7.3.1 PROOF FOR MEAN ESCAPING TIME Lemma 14 (Lemma 6 in main paper) We suppose C(w) = σga + 2σHa Ha (L(w)− L(a)) on the whole escaping path from a to b. The mean escaping time of the 1-dimensional power-law dynamic is, τ = 2π (1− 1 2κ ) √ Ha|Hb| ( 1 + 2 κησga ∆L )κ− 1 2 , (23) where κ = HaησHa , Ha, Hb are the second-order derivatives of training loss at local minimum a and saddle point b. Proof: According to (Van Kampen, 1992), the mean escaping time τ is expressed as τ = P (w∈Va)∫ Ω JdΩ , where Va is the volume of basin a, J is the probability current that satisfies −∇J(w, t) = ∂ ∂w (g(w) · p(w, t)) + ∂ ∂w ( φ(w) ∂p(w, t) ∂w ) = ∂ ∂w φ(w) · (1 + µ σg ∆L(w) )−κ ∂ ((1 + µ σg ∆L(w) )κ p(w, t) ) ∂w , where φ(w) = η2C(w) and µ = 2σHa Ha , σg = σga and ∆L(w) = L(w) − L(a). Integrating both sides, we obtain J(w) = −φ(w) · ( 1 + µ σg ∆L(w) )−κ ∂((1+ µσg ∆L(w))κp(w,t)) ∂w . Because there is no field source on the escape path, J(w) is fixed constant on the escape path. Multiplying φ(w)−1 · ( 1 + µσg ∆L(w) )κ on both sizes, we have J · ∫ c a φ(w)−1 · ( 1 + µ σg ∆L(w) )κ dw = − ∫ c a ∂ (( 1 + µσg ∆L(w) )κ p(w, t) ) ∂w dw = −0 + p(a). Then we get J = p(a)∫ c a φ(w)−1· ( 1+ µσg ∆L(w) )κ dw . As for the term ∫ c a φ(w)−1 · ( 1 + µσg ∆L(w) ) 1 κ dw, we have ∫ c a φ(w)−1 · ( 1 + µ σg ∆L(w) )κ dw (24) = 2 ησg ∫ c a ( 1 + µ σg ∆L(w) )−1+κ dw = 2 ησg ∫ b c ( 1 + µ σg (∆L(b)− 1 2 |Hb|(w − b)2) )−1+κ dw = 2 ησg ∫ b c ( 1 + µ σg (∆L(b)− 1 2 |Hb|(w − b)2) )−1+κ dw = 2 ησg (1 + µ σg ∆L(b))−1+κ ∫ b c ( 1− µ σg · 1 2 |Hb|(w − b)2 1 + µ σg ∆L(b) )−1+κ dw = 2 ησg (1 + µ σg ∆L(b))−1+κ · ( 1 2 µ σg |Hb| 1 + µ σg ∆L(b) )−1/2 ∫ 1 0 y−1/2(1− y)−1+κdy = 2 ησg (1 + µ σg ∆L(b))− 1 2 +κ √ 2σg µ|Hb| B( 1 2 , κ), where the third formula is based on the second order Taylor expansion. Under the low temperature assumption, we can use the second-order Taylor expansion around the saddle point b. As for the term P (w ∈ Va), we have P (w ∈ Va) = ∫ Va p(w)dV = ∫ w∈Va p(a)(1 + µ σg ∆L(w))−κ = p(a) √ 2σg µHa B( 1 2 , κ − 1 2 ), where we use Taylor expansion of L(w) near local minimum a. Then we have τ = P (w∈Va)∫ Ω JdΩ = P (w∈Va)J because J is a constant. Combining all the results, we can get the result in the lemma. Theorem 15 (Theorem 7 in main paper) Suppose w ∈ Rd and there is only one most possible path path between basin a and the outside of basin a. The mean escaping time for power-law dynamic escaping from basin a to the outside of basin a is τ = 2π √ −det(Hb) (1− d 2κ ) √ det(Ha) 1 |Hbe| ( 1 + 1 ηκσe ∆L )κ− 1 2 , (25) where e indicates the most possible escape direction, Hbe is the only negative eigenvalue of Hb, σe is the eigenvalue of Σga corresponding to the escape direction and ∆L = L(b)− L(a). Proof: According to (Van Kampen, 1992), the mean escaping time τ is expressed as τ = P (w∈Va)∫ Ω JdΩ , where Va is the volume of basin a, J is the probability current that satisfies −∇ · J(w, t) = ∂p(w,t)∂t . 
Under the low temperature assumption, the probability current J concentrates along the direction corresponding the negative eigenvalue of Hbe, and the probability flux of other directions can be ignored. Then we have∫ Ω JdΩ = Je · ∫ Ω ( 1 + 1 ηκ (w − b)T (HbΣ−1g )⊥e(w − b) )−κ+ 12 dΩ, (26) where Je = p(a) · η(1+µσe∆L(b)) −κ+ 1 2 √ µσe|Hbe| 2 √ 2B( 12 ,κ) which is obtained by the calculation of Je for 1-dimensional case in the proof of Lemma 13, and (·)⊥e denotes the directions perpendicular to the escape direction e. Suppose HbΣ−1g are symmetric matrix. Then there exist orthogonal matrix Q and diagonal matrix Λ = diag(λ1, · · · , λd) that satisfy HbΣ−1g = QTΛQ. We also denote v = Q(w − b). We define a sequence as Tk = 1 + 1ηκ · ∑d j=k λjv 2 j for k = 1, · · · , d. As for the term∫ Ω ( 1 + 1ηκ (w − b) T (HbΣ −1 g ) ⊥e(w − b) )−κ+ 12 dΩ, we have∫ Ω ( 1 + 1 ηκ (w − b)T (HbΣ−1g )⊥e(w − b) )−κ+ 12 dΩ = ∫ (1 + 1 ηκ · vTΛv)−κ+ 12 dw = ∫ (1 + 1 ηκ · d∑ j 6=e λjv 2 j ) −κ+ 12 dv =((ηκ)−1λ1) − 12 ∫ T −κ+ 12 2 B( 1 2 , κ)dv = d−2∏ j=0 ((ηκ)−1λj) − 12B( 1 2 , κ− j 2 ) = d−2∏ j=0 ((ηκ)−1λj) − 12 · √ πdΓ(κ− d2 ) Γ(κ) = √ (ηκπ)d−1 · Γ(κ− d−22 ) Γ(κ+ 12 ) √ det((HbΣ −1 g )⊥e) . As for the term P (w ∈ Va), we have P (w ∈ Va) = ∫ Va p(w)dV = p(a) ∫ w∈Va ( 1 + (w − w∗)THaΣ−1g (w − w∗) ) dw (27) =p(a) · √ (ηκπ)d · Γ(κ− d2 ) Γ(κ) √ det((HaΣ −1 g )) (28) where we use Taylor expansion of L(w) near local minimum a. Combined the results for P (w ∈ Va) and J , we can get the result. 7.3.2 FURTHER EXPLANATION ABOUT ASSUMPTION 1-3 We adopt the commonly used assumptions to analyze mean escaping time for dynamic system (Xie et al., 2020; Smith & Le, 2017; Zhou & Du, 2014). Assumption 2 can be replaced by weaker assumption that the system is quasi-equilibrium which is adopted in (Xie et al., 2020). For the differences between quasi-equilibrium and equilibrium, readers can refer to (Xie et al., 2020) for detailed discussions. Assumption 3 is commonly used (Xie et al., 2020; Zhou & Du, 2014). Under Assumption 3, the probability densities will concentrate around minima and the most possible paths. Assumption 3 will make the second order Taylor approximation more reasonable. 7.3.3 EXTENSION TO MORE COMPLEX DYNAMIC ON THE ESCAPING PATH In Lemma 6, we assume that C(w) = σga + 2σHa Ha (L(w) − L(a)) on the whole escaping path from a to b for ease of comparison and presentation. This assumption is not necessary and we can assume a different dynamic near saddle point b. Specially, we can assume the point z is the midpoint on the most possible path beween a and b, where L(z) = (1 − z)L(a) + zL(b). The dynamic with C(w) = σga + 2σHa Ha (L(w) − L(a)) dominates the path a → z and the dynamic with C(w) = σgb + 2σHb Hb (L(b)−L(w)) dominates the path z → b. Then only two things will be changed in proof of Lemma 6. First, we need to change the stationary distribution near saddle points according to its own dynamic in Eq.20. Second, we need to change the integral about probability density on the whole path to sum of integrals on these two sub-paths. Similar proof techniques are adopted for analyzing escaping time of Langevin dynamic in proof of Theorem 4.1 in the work Xie et al. (2020). Since the proof is analogous, we omit the details here. 7.4 PAC-BAYES GENERALIZATION BOUND We briefly introduce the basic settings for PAC-Bayes generalization error. The expected risk is defined as Ex∼P(x)`(w, x). Suppose the parameter follows a distribution with density p(w), the expected risk in terms of p(w) is defined as Ew∼p(w),x∼P(x)`(w, x). 
The empirical risk in terms of p(w) is defined as Ew∼p(w)L(w) = Ew∼p(w) 1n ∑n i=1 `(w, xi). Suppose the prior distribution over the parameter space is p′(w) and p(w) is the distribution on the parameter space expressing the learned hypothesis function. For power-law dynamic, p(w) is its stationary distribution and we choose p′(w) to be Gaussian distribution with center w∗ and covariance matrix I . Then we can get the following theorem. Theorem 16 (Theorem 8 in main paper) For w ∈ Rd, we select the prior distribution p′(w) to be standard Gaussian distribution. For δ > 0, with probability at least 1− δ, the stationary distribution of power-law dynamic has the following generalization error bound, Ew∼p(w),x∼P(x)`(w, x) ≤ Ew∼p(w)L(w) + √ KL(p||p′) + log 1δ + log n+ 2 n− 1 , (29) whereKL(p||p′) ≤ 12 log det(H) det(Σg) + Tr(ηΣgH −1)−2d 4(1− 1κ ( d 2−1)) + d2 log 2 η andP(x) is the underlying distribution of data x. Proof: Eq.(29) directly follows the results in (McAllester, 1999). Here we calculate the Kullback–Leibler (KL) divergence between prior distribution and the stationary distribution of power-law dynamic. The prior distribution is selected to be standard Gaussion distribution with distribution density p′(w) = 1√ (2π)d det (I) exp{− 12 (w−w ∗)T I(w−w∗)}. The posterior distribution density is the stationary distribution for power-law dynamic, i.e., p(w) = 1Z ·(1+ 1 ηκ ·(w−w ∗)THΣ−1g (w−w∗))−κ. Suppose HΣ−1g are symmetric matrix. Then there exist orthogonal matrix Q and diagonal matrix Λ = diag(λ1, · · · , λd) that satisfy HΣ−1g = QTΛQ. We also denote v = Q(w − w∗). We have log ( p(w) p′(w) ) = −κ log(1 + 1 ηκ · (w − w∗)THΣ−1g (w − w∗))− logZ + 1 2 (w − w∗)T I(w − w∗) + d 2 log 2π The KL-divergence is defined as KL(p(w)||p′(w)) = ∫ w p(w) log ( p(w) p′(w) ) dw. Putting v = Q(w − w∗) in the integral, we have KL(p(w)||p′(w)) = d 2 log 2π − logZ + 1 2Z ∫ v vT v ( 1 + 1 ηκ · vTΛv )−κ dv − 1 Zη ∫ v vTΛv · (1 + 1 ηκ · vTΛv)−κdv, (30) where we use the approximation that log(1 + x) ≈ x. We define a sequence as Tk = 1 + 1ηκ ·∑d j=k λjv 2 j for k = 1, · · · , d. We first calculate the normalization constant Z. Z = ∫ (1 + 1 ηκ · vTΛv)−κdw = ∫ (1 + 1 ηκ · d∑ j=1 λjv 2 j ) −κdv =((ηκ)−1λ1) − 12 ∫ T −κ+ 12 2 B( 1 2 , κ− 1 2 )dv = d∏ j=1 ((ηκ)−1λj) − 12B( 1 2 , κ− j 2 ) = d∏ j=1 ((ηκ)−1λj) − 12 · √ πdΓ(κ− d2 ) Γ(κ) We define Zj = ((ηκ)−1λj)− 1 2B ( 1 2 , κ− j 2 ) . 
For the third term in Eq.(30), we have 2Z · III = ∫ v vT v(1 + 1 ηκ vTΛv)−κdv = ∫ v2,···vd ∫ v1 v21 ( 1 + 1 ηκ · vTΛv )−κ dv1 + Z1 ( d∑ j=2 v2j )( 1 + 1 ηκ · d∑ j=2 λjv 2 j )−κ+ 1 2 dv2··· ,vd = ∫ v2,···vd T−κ2 ∫ v1 v21 ( 1 + (ηκ)−1λ1v 2 1 T2 )−κ dv1 + Z1 ( d∑ j=2 v2j )( 1 + 1 ηκ · d∑ j=2 λjv 2 j )−κ+ 1 2 dv2··· ,vd = ∫ v2,··· ,vd T−κ2 ∫ ( T2 (ηκ)−1λ1 ) 3 2 y 1 2 (1 + y)−κ dy + Z1 ( d∑ j=2 v2j )( 1 + 1 ηκ · d∑ j=2 λjv 2 j )−κ+ 1 2 dv2··· ,vd = ∫ v2,··· ,vd ((ηκ)−1λ1) − 3 2 T −κ+ 3 2 2 B ( 3 2 , κ− 3 2 ) + Z1 ( d∑ j=2 v2j )( 1 + 1 ηκ · d∑ j=2 λjv 2 j )−κ+ 1 2 dv2··· ,vd =( λ1 ηκ )− 3 2B ( 3 2 , κ− 3 2 )∫ v2,··· ,vd T −κ+ 3 2 2 dv2··· ,vd + ∫ v2,··· ,vd Z1 ( d∑ j=2 v2j )( 1 + 1 ηκ · d∑ j=2 λjv 2 j )−κ+ 1 2 dv2··· ,vd For term ∫ v2,··· ,vd T − 1κ+ 3 2 2 dv2··· ,vd in above equation, we have ∫ v2,··· ,vd T −κ+ 32 2 dv2··· ,vd = ∫ v3,··· ,vd T−κ+23 ((ηκ) −1λ2) − 12B ( 1 2 , κ− 2 ) dv3,··· ,vd = ∫ v4,··· ,vd T −κ+ 52 4 ((ηκ) −1λ2) − 12 ((ηκ)−1λ3) − 12B ( 1 2 , κ− 5 2 ) B ( 1 2 , κ− 2 ) dv4,··· ,vd = ∫ vd T −κ+ 12 + 1 2×d d d−1∏ j=2 ((ηκ)−1λj) − 12 d−1∏ j=2 B ( 1 2 , κ− ( j 2 + 1) ) dvd = d∏ j=2 ((ηκ)−1λj) − 12 d∏ j=2 B ( 1 2 , κ− ( j 2 + 1) ) Let Aj = ((ηκ)−1λj)− 3 2B ( 3 2 , κ− ( j 2 + 1) ) . According to the above two equations, we can get the recursion 2Z ∫ vT vT−κ1 dv =A1 · ∫ T −κ+ 32 2 + Z1 ∫ v2,··· ,vd d∑ j=2 v2j T−κ+ 122 dv2··· ,vd =A1 · ∫ T −κ+ 3−12 2 dv2···vd + Z1 ·A2 ∫ T −κ+ 42 3 dv3··· ,vd + Z1Z2 ∫ d∑ j=3 v2j T−κ+ 123 dv3··· ,vd = d−1∑ j=1 Aj j−1∏ k=1 Zk ∫ T −κ+ j+1+12 j+1 dvj+1,··· ,vd + d−1∏ k=1 Zk ∫ v2dT −κ+ d−12 d dvd = d−1∑ j=1 ( λj ηκ )− 3 2B ( 3 2 , κ− ( j 2 + 1) ) j−1∏ k=1 ( λk ηκ )− 1 2B ( 1 2 , κ− k 2 ) d∏ s=j+1 (( λs ηκ )− 1 2 d∏ s=j+1 B ( 1 2 , κ− (s 2 + 1) ) + d−1∏ j=1 ( λj ηκ )− 1 2B( 1 2 , κ− j 2 − 1) · (λd ηκ )− 3 2B( 3 2 , κ− (d 2 + 1)) = √ πdΓ(κ− d2 − 1)Tr(H −1Σg) 2Γ(κ) √ (ηκ)−(d+2) det(H−1Σg) We have III = √ πdΓ(κ− d2 − 1)Tr(H −1Σg) 4Γ(κ) √ (ηκ)−(d+2) det(H−1Σg) · d∏ j=1 ((ηκ)−1λj) 1 2 · Γ(κ)√ πdΓ(κ− d2 ) = ηκTr(H−1Σg) 4(κ− d2 − 1) Similarly, for the fourth term in Eq.(30), we have IV = κd 2(κ− d2−1) . Combining all the results together, we can get KL(p||p′) = 12 log det(H) (ηκ)d det(Σg) + log Γ(κ) Γ(κ− d2 ) + Tr(ηΣgH −1)−2d 4(1− 1κ ( d 2−1)) + d2 log 2. Using the fact that log Γ(κ) Γ(κ− d2 ) ≤ d2 log κ, we have KL(p||p ′) ≤ 12 log det(H) det(Σg) + Tr(ηΣgH −1)−2d 4(1− 1κ ( d 2−1)) + d2 log 2 η . 7.5 IMPLEMENTATION DETAILS OF THE EXPERIMENTS 7.5.1 OBSERVATIONS ON THE COVARIANCE MATRIX In this section, we introduce the settings on experiments of the quadratic approximation of covariance of the stochastic gradient on plain convolutional neural network (CNN) and ResNet. For each model, we use gradient descent with small constant learning rate to train the network till it converges. The converged point can be regarded as a local minimum, denoted as w∗. As for the detailed settings of the CNN model, the structure for plain CNN model is input → Conv1→ maxpool → Conv2→ maxpool → fc1→ Relu→ fc2→ output. Both Conv1 and Conv2 use 5 × 5 kernels with 10 channels and no padding. Dimensions of full connected layer fc1 and fc2 are 1600 × 50 and 50 × 10 respectively. We randomly sample 1000 images from FashionMNIST (Xiao et al., 2017) dataset as training set. The initialization method is the Kaiming initialization (He et al., 2015) in PyTorch. The learning rate of gradient descent is set to be 0.1. After 3000 iterations, GD converges with almost 100% training accuracy and the training loss being 1e−3. 
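A minimal PyTorch sketch of the covariance-trace measurement used in this part of the appendix is given below; the names model, loss_fn and loader are assumed placeholders (not the authors' code), and every parameter is assumed to receive a gradient.

import torch

# Estimate Tr(Cov) of the minibatch gradient at the current parameters:
# stack flattened minibatch gradients and sum the per-coordinate variances.
def flat_grad(model, loss_fn, batch):
    model.zero_grad()
    inputs, targets = batch
    loss_fn(model(inputs), targets).backward()
    return torch.cat([p.grad.detach().reshape(-1) for p in model.parameters()])

def covariance_trace(model, loss_fn, loader, num_batches=50):
    grads = torch.stack([flat_grad(model, loss_fn, batch)
                         for batch, _ in zip(loader, range(num_batches))])
    return grads.var(dim=0, unbiased=True).sum().item()

# Assumed usage, following the perturbation scheme described below: repeatedly shift
# the weights of one layer by a small Scale away from the converged point w* and
# record covariance_trace(model, loss_fn, loader) at each shifted point.

This sketch only reproduces the measurement idea; the exact layers and Scale values used in the paper are listed in the surrounding text.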
As for ResNet, we use the ResNet-18 model (He et al., 2016b) and randomly sample 1000 images from Kaggle's dogs-vs-cats dataset as the training set. The initialization method is the Kaiming initialization (He et al., 2015) in PyTorch. The learning rate of gradient descent is set to 0.001. After 10000 iterations, GD converges with 100% training accuracy and a training loss of 1e−3. We then calculate the covariance matrix of the stochastic gradient at points in the local region around $w^*$. The points are selected according to the formula $w^*_{\mathrm{layer}L} \pm (i \times \mathrm{Scale})$, where $w^*_{\mathrm{layer}L}$ denotes the parameters at layer $L$, and $i \times \mathrm{Scale}$, $i \in [N]$, determines the distance away from $w^*_{\mathrm{layer}L}$. When we select points according to this formula by changing the parameters at layer $L$, we fix the parameters at the other layers. For both the CNN model and the ResNet-18 model, we select 20 points by setting $i = 1, \cdots, 10$. For example, for the CNN model, we choose the 20 points by changing the parameters at the Conv1 layer with Scale = 0.001 and at the Conv2 layer with Scale = 0.0001, respectively. For ResNet-18, we choose the 20 points by changing the parameters of a convolutional layer in the first residual block with Scale = 0.0001 and in the second residual block with Scale = 0.0001, respectively. The results are shown in Figure 1. The x-axis denotes the distance of the point from the local minimum and the y-axis shows the trace of the covariance matrix at each point. The results show that the covariance of the noise in SGD is indeed not constant and can be well approximated by a quadratic function of the state (the blue line in the figures), which is consistent with our theoretical results in Section 3.1. 7.5.2 SUPPLEMENTARY EXPERIMENTS ON PARAMETER DISTRIBUTIONS OF DEEP NEURAL NETWORKS For Figure 3(a), we train LeNet-5 on the MNIST dataset using SGD with constant learning rate η = 0.03 for each batch size until convergence. The parameters are conv2.weight in LeNet-5. For Figure 3(b), we train ResNet-18 on CIFAR10 using SGD with momentum. We apply a RandomCrop on the training set scaling to 32 × 32 with padding = 4 and then a RandomHorizontalFlip. In training, the momentum is set to 0.9 and the weight decay is set to 5e−4. The initial learning rate in SGD is set to 0.1 and we apply a learning rate decay of 0.1 at the {150, 250}-th epochs. We train until convergence after 250 epochs. The parameters are layer1.1.conv2.weight in ResNet-18. We also observe the parameter distribution of many pretrained models. Details for the pre-trained models can be found at https://pytorch.org/docs/stable/torchvision/models.html. Figure 7 shows that the distribution of parameters trained by SGD can be well fitted by the power-law distribution. The parameters in this figure are randomly selected to be features.10.weight, features.14.weight, features.5.expand3x3.weight, Mixed_6d.branch7x7_3.conv.weight, layer4.2.conv3.weight and features.denseblock2.denselayer1.conv2.weight for VGG-16, AlexNet, SqueezeNet 1.0, Inception v3, Wide ResNet-50-2 and DenseNet-121, respectively. A Q-Q plot is created by plotting quantiles of two probability distributions against one another, which provides an assessment of "goodness of fit" by how close the solid line is to the dashed line. From Figure 8, it is clear that the solid lines in the bottom plots are closer to the dashed lines in most cases, which indicates that the network parameters are better fitted by the power-law distribution. 
Moreover, the solid lines in the upper plots severely deviate from the dashed lines in the tail of the distribution but those in the bottom plots do not, which means the distribution of parameters is indeed heavy-tailed. 7.5.3 FURTHER EXPLANATION ON EXPERIMENTS IN SECTION 5.2 As for the experiments on the 2-D model, we also calculate the coefficient of the second-order term of the quadratic curve shown in Figure 4(b); its value is roughly 30, which matches the result in Figure 4(c) in the sense that the result for SGD is similar to that for the power-law dynamic with $\lambda_1 \approx 32$. 7.5.4 ESCAPING EFFICIENCY ON NEURAL NETWORK We follow the settings in (Zhu et al., 2019). For the convenience of the reader, we give the details of this setting again here. We use a corrupted FashionMNIST dataset, which contains 1000 images with correct labels and another 200 images with random labels, as training data. A small LeNet-like network with 11,330 parameters is used. First we run full gradient descent to reach parameters $w^*$ near the global minima. Then we continue training using both the Langevin dynamic (GLD) and the power-law dynamic (PLD). Following Zhu's setting, the learning rates for GD, GLD and PLD are $\eta_{GD} = 0.1$, $\eta_{GLD} = 0.07$ and $\eta_{PLD} = 0.07$, respectively. For GLD, the noise standard deviation is $\sigma = 10^{-4}$, as already tuned by Zhu. For our PLD, $w_{t+1} = w_t - \eta\nabla L(w_t) + \eta\cdot\alpha\nabla L(w_t)\sqrt{1 + \beta(w_t - w^*)^2}\odot\xi$, where $\alpha, \beta$ are hyper-parameters, $\xi \sim \mathcal{N}(0, I)$, and $\odot$ stands for the Hadamard product. Here we select $\alpha = 2.4$, $\beta = 2$ after a grid search. Expected sharpness is measured as Eν∼
1. What is the main contribution of the paper regarding the analysis of SGD dynamics?
2. What are the strengths and weaknesses of the proposed approach?
3. Do you have any concerns regarding the assumptions made in the paper?
4. How does the reviewer assess the clarity and validity of the arguments presented in the paper?
5. Are there any specific points in the paper that the reviewer finds questionable or unclear?
Review
Summary: This paper proposes an analysis for an approximate dynamic of SGD which captures the heavy-tailed noise distributions seen practically at local minima. The authors derive this new dynamic (which they call power-law dynamic) using basic principles and the assumption that the noise variance depends on the state. The dynamics becomes a modified Langevin equation. They also prove that the expected time to escape a barrier is polynomial in the parameters, as well as a generalization error bound.

Strong/Weak points:
- The paper is built up on simple principles.
- It first gives a one-dimensional analysis and then generalizes, helping the reader to understand.
- Except for a few points, in general the paper is well-written.
- The assumptions made are somewhat strong and may not hold in some cases, see below.

In general I have a tendency to accept this paper. Even though there are crucial assumptions that are made, it can be considered as a first step towards a more rigorous and general argument. Here are a few points that I have problems with in the paper:
- On page 3, paragraph 2, it is written that the solution of the Langevin equation is a Gaussian distribution. What does it mean? The solution of an SDE is a Markov process, and considering the distribution of the process at time t, it is not necessarily Gaussian; the Fokker-Planck equation governs the change of distribution, having the Gibbs distribution as its stationary distribution, which is not Gaussian in general.
- The whole argument is made through assuming that near the basin, everything is quadratic (not approximately, equal!). This is completely reflected in Proposition 1 and the further analysis.
- In Theorem 4, it is not stated that H = H(w*), and I don't see why one should be interested when w → ∞? This is because we are talking about an ε-ball around w* and tending w → ∞ has no meaning.... Maybe I am missing something here? Also, it seems that the distribution is defined only for positive w.
- In the argument for Section 4 (escaping), it is assumed that the basin is quadratic and stays quadratic, even when it reaches the saddle point. I find this assumption flawed, or I am missing something.
- At the bottom of page 3, it is said "in this case, .... is satisfied", while I think it should be "not satisfied".
- On page 4, the notation →_p is used for convergence in distribution, which is not usual and is reserved for convergence in probability.
Title Dynamic of Stochastic Gradient Descent with State-dependent Noise Abstract Stochastic gradient descent (SGD) and its variants are mainstream methods to train deep neural networks. Since neural networks are non-convex, more and more works study the dynamic behavior of SGD and its impact to generalization, especially the escaping efficiency from local minima. However, these works make the over-simplified assumption that the distribution of gradient noise is stateindependent, although it is state-dependent. In this work, we propose a novel power-law dynamic with state-dependent diffusion to approximate the dynamic of SGD. Then, we prove that the stationary distribution of power-law dynamic is heavy-tailed, which matches the existing empirical observations. Next, we study the escaping efficiency from local minimum of power-law dynamic and prove that the mean escaping time is in polynomial order of the barrier height of the basin, much faster than exponential order of previous dynamics. It indicates that SGD can escape deep sharp minima efficiently and tends to stop at flat minima that have lower generalization error. Finally, we conduct experiments to compare SGD and power-law dynamic, and the results verify our theoretical findings. 1 INTRODUCTION Deep learning has achieved great success in various AI applications, such as computer vision, natural language processing, and speech recognition (He et al., 2016b; Vaswani et al., 2017; He et al., 2016a). Stochastic gradient descent (SGD) and its variants are the mainstream methods to train deep neural networks, since they can deal with the computational bottleneck of the training over large-scale datasets (Bottou & Bousquet, 2008). Although SGD can converge to the minimum in convex optimization (Rakhlin et al., 2012), neural networks are highly non-convex. To understand the behavior of SGD on non-convex optimization landscape, on one hand, researchers are investigating the loss surface of the neural networks with variant architectures (Choromanska et al., 2015; Li et al., 2018b; He et al., 2019b; Draxler et al., 2018; Li et al., 2018a); on the other hand, researchers illustrate that the noise in stochastic algorithm may make it escape from local minima (Keskar et al., 2016; He et al., 2019a; Zhu et al., 2019; Wu et al., 2019a; HaoChen et al., 2020). It is clear that whether stochastic algorithms can escape from poor local minima and finally stop at a minimum with low generalization error is crucial to its test performance. In this work, we focus on the dynamic of SGD and its impact to generalization, especially the escaping efficiency from local minima. To study the dynamic behavior of SGD, most of the works consider SGD as the discretization of a continuous-time dynamic system and investigate its dynamic properties. There are two typical types of models to approximate dynamic of SGD. (Li et al., 2017; Zhou et al., 2019; Liu et al., 2018; Chaudhari & Soatto, 2018; He et al., 2019a; Zhu et al., 2019; Hu et al., 2019; Xie et al., 2020) approximate the dynamic of SGD by Langevin dynamic with constant diffusion coefficient and proved its escaping efficiency from local minima.These works make over-simplified assumption that the covariance matrix of gradient noise is constant, although it is state-dependent in general. The simplified assumption makes the proposed dynamic unable to explain the empirical observation that the distribution of parameters trained by SGD is heavy-tailed (Mahoney & Martin, 2019). 
To model the heavy-tailed phenomenon, Simsekli et al. (2019); Şimşekli et al. (2019) point that the variance of stochastic gradient may be infinite, and they propose to approximate SGD by dynamic driven by α-stable process with the strong infinite variance condition. However, as shown in the work (Xie et al., 2020; Mandt et al., 2017), the gradient noise follows Gaussian distribution and the infinite variance condition does not satisfied. Therefore it is still lack of suitable theoretical explanation on the implicit regularization of dynamic of SGD. In this work, we conduct a formal study on the (state-dependent) noise structure of SGD and its dynamic behavior. First, we show that the covariance of the noise of SGD in the quadratic basin surrounding the local minima is a quadratic function of the state (i.e., the model parameter). Thus, we propose approximating the dynamic of SGD near the local minimum using a stochastic differential equation whose diffusion coefficient is a quadratic function of state. We call the new dynamic power-law dynamic. We prove that its stationary distribution is power-law κ distribution, where κ is the signal to noise ratio of the second order derivatives at local minimum. Compared with Gaussian distribution, power-law κ distribution is heavy-tailed with tail-index κ. It matches the empirical observation that the distribution of parameters becomes heavy-tailed after SGD training without assuming infinite variance of stochastic gradient in (Simsekli et al., 2019). Second, we analyze the escaping efficiency of power-law dynamic from local minima and its relation to generalization. By using the random perturbation theory for diffused dynamic systems, we analyze the mean escaping time for power-law dynamic. Our results show that: (1) Power-law dynamic can escape from sharp minima faster than flat minima. (2) The mean escaping time for power-law dynamic is only in the polynomial order of the barrier height, much faster than the exponential order for dynamic with constant diffusion coefficient. Furthermore, we provide a PAC-Bayes generalization bound and show power-law dynamic can generalize better than dynamic with constant diffusion coefficient. Therefore, our results indicate that the state-dependent noise helps SGD to escape from sharp minima quickly and implicitly learn well-generalized model. Finally, we corroborate our theory by experiments. We investigate the distributions of parameters trained by SGD on various types of deep neural networks and show that they are well fitted by power-law κ distribution. Then, we compare the escaping efficiency of dynamics with constant diffusion or state-dependent diffusion to that of SGD. Results show that the behavior of power-law dynamic is more consistent with SGD. Our contributions are summarized as follows: (1) We propose a novel power-law dynamic with state-dependent diffusion to approximate dynamic of SGD based on both theoretical derivation and empirical evidence. The power-law dynamic can explain the heavy-tailed phenomenon of parameters trained by SGD without assuming infinite variance of gradient noise. (2) We analyze the mean escaping time and PAC-Bayes generalization bound for power-law dynamic and results show that power-law dynamic can escape sharp local minima faster and generalize better compared with the dynamics with constant diffusion. Our experimental results can support the theoretical findings. 
2 BACKGROUND

In the empirical risk minimization problem, the objective is $L(w) = \frac{1}{n}\sum_{i=1}^{n}\ell(x_i, w)$, where $x_i$, $i = 1, \cdots, n$ are $n$ i.i.d. training samples, $w \in \mathbb{R}^d$ is the model parameter, and $\ell$ is the loss function. Stochastic gradient descent (SGD) is a popular optimization algorithm for minimizing $L(w)$. The update rule is $w_{t+1} = w_t - \eta \cdot \tilde{g}(w_t)$, where $\tilde{g}(w_t) = \frac{1}{b}\sum_{x \in S_b}\nabla_w \ell(x, w_t)$ is the minibatch gradient calculated on a randomly sampled minibatch $S_b$ of size $b$ and $\eta$ is the learning rate. The minibatch gradient $\tilde{g}(w_t)$ is an unbiased estimator of the full gradient $g(w_t) = \nabla L(w_t)$, and the term $g(w_t) - \tilde{g}(w_t)$ is called the gradient noise of SGD.

Langevin Dynamic. In (He et al., 2019a; Zhu et al., 2019), the gradient noise is assumed to be drawn from a Gaussian distribution according to the central limit theorem (CLT), i.e., $g(w) - \tilde{g}(w) \sim \mathcal{N}(0, C)$, where the covariance matrix $C$ is a constant matrix for all $w$. SGD can then be regarded as the numerical discretization of the following Langevin dynamic,

$dw_t = -g(w_t)\,dt + \sqrt{\eta}\,C^{1/2} dB_t, \qquad (1)$

where $B_t$ is a standard Brownian motion in $\mathbb{R}^d$ and $\sqrt{\eta}\,C^{1/2} dB_t$ is called the diffusion term.

α-stable Process. Simsekli et al. (2019) assume the variance of the gradient noise is unbounded. By the generalized CLT, the distribution of the gradient noise is then an α-stable distribution $S(\alpha, \sigma)$, where $\sigma$ is the $\alpha$-th moment of the gradient noise for a given $\alpha \in (0, 2]$. Under this assumption, SGD is approximated by a stochastic differential equation (SDE) driven by an α-stable process.

2.1 RELATED WORK

There are many works that approximate SGD by a Langevin dynamic, and most of the theoretical results are obtained for Langevin dynamics with a constant diffusion coefficient. From the aspect of optimization, the convergence rate of SGD and its optimal hyper-parameters have been studied in (Li et al., 2017; He et al., 2018; Liu et al., 2018) via optimal control theory. From the aspect of generalization, Chaudhari & Soatto (2018); Zhang et al. (2018); Smith & Le (2017) show that SGD implicitly regularizes the negative entropy of the learned distribution. Recently, the escaping efficiency of the Langevin dynamic from local minima has been studied (Zhu et al., 2019; Hu et al., 2019; Xie et al., 2020). He et al. (2019a) analyze the PAC-Bayes generalization error of the Langevin dynamic to explain the generalization of SGD. The solution of the Langevin dynamic with constant diffusion coefficient is a Gaussian process, which does not match the empirical observation that the distribution of parameters trained by SGD is heavy-tailed (Mahoney & Martin, 2019; Hodgkinson & Mahoney, 2020; Gurbuzbalaban et al., 2020). Simsekli et al. (2019); Şimşekli et al. (2019) assume the variance of the stochastic gradient is infinite and regard SGD as the discretization of an SDE driven by an α-stable process. The escaping efficiency of this SDE is also analyzed in (Simsekli et al., 2019). However, these theoretical results are derived for dynamics with a constant diffusion term, although the gradient noise in SGD is state-dependent. There are some related works that analyze the state-dependent noise structure of SGD, such as label noise in (HaoChen et al., 2020) and multiplicative noise in (Wu et al., 2019b). These works propose new algorithms motivated by the noise structure, but they do not analyze the escaping behavior of the dynamic of SGD or its impact on generalization. Wu et al.
(2018) analyze the escaping behavior of SGD while considering the fluctuations of the second-order derivatives and propose the concept of linear stability. In our work, we propose the power-law dynamic to approximate SGD and analyze its stationary distribution and mean escaping time.

3 APPROXIMATING SGD BY POWER-LAW DYNAMIC

In this section, we study the (state-dependent) noise structure of SGD (in Section 3.1) and propose the power-law dynamic to approximate the dynamic of SGD. We first study the 1-dimensional power-law dynamic in Section 3.2 and extend it to the high-dimensional case in Section 3.3.

3.1 NOISE STRUCTURE OF STOCHASTIC GRADIENT DESCENT

For non-convex optimization, we investigate the noise structure of SGD around local minima so that we can analyze the escaping efficiency from them. We first describe the quadratic basin where the local minimum is located. Suppose $w^*$ is a local minimum of the training loss $L(w)$ with $g(w^*) = 0$. We call the $\epsilon$-ball $B(w^*, \epsilon)$ with center $w^*$ and radius $\epsilon$ a quadratic basin if the loss function for $w \in B(w^*, \epsilon)$ equals its second-order Taylor expansion, $L(w) = L(w^*) + \frac{1}{2}(w - w^*)^T H(w^*)(w - w^*)$. Here, $H(w^*)$ is the Hessian matrix of the loss at $w^*$, which is (semi) positive definite.

We then analyze the gradient noise of SGD. The full gradient of the training loss is $g(w) = H(w^*)(w - w^*)$. By Taylor expansion, the stochastic gradient is $\tilde{g}(w) = \tilde{g}(w^*) + \tilde{H}(w^*)(w - w^*)$, where $\tilde{g}(\cdot)$ and $\tilde{H}(\cdot)$ are the stochastic versions of the gradient and Hessian calculated on the minibatch. The randomness of the gradient noise comes from two parts, $\tilde{g}(w^*)$ and $\tilde{H}(w^*)$, which reflect the fluctuations of the first-order and second-order derivatives of the model at $w^*$ over different minibatches, respectively. The following proposition gives the variance of the gradient noise.

Proposition 1 For $w \in B(w^*, \epsilon) \subset \mathbb{R}$, the variance of the gradient noise is $\sigma(g(w) - \tilde{g}(w)) = \sigma(\tilde{g}(w^*)) + 2\rho(\tilde{g}(w^*), \tilde{H}(w^*))(w - w^*) + \sigma(\tilde{H}(w^*))(w - w^*)^2$, where $\sigma(\cdot)$ and $\rho(\cdot, \cdot)$ are the variance and covariance with respect to the minibatch.

From Proposition 1, we can conclude that: (1) The variance of the noise is finite if $\tilde{g}(w^*)$ and $\tilde{H}(w^*)$ have finite variance, because $\rho(\tilde{g}(w^*), \tilde{H}(w^*)) \le \sqrt{\sigma(\tilde{g}(w^*)) \cdot \sigma(\tilde{H}(w^*))}$ by the Cauchy–Schwarz inequality. For fixed $w^*$, a sufficient condition for $\tilde{g}(w^*)$ and $\tilde{H}(w^*)$ to have finite variance is that the training data $x$ are sampled from a bounded domain. This condition is easy to satisfy because the training data are usually normalized to a bounded domain before training. In this case, the infinite-variance assumption on the stochastic gradient made by the α-stable process is not satisfied. (2) The variance of the noise is state-dependent, which contradicts the assumption made in the Langevin dynamic.

Notations: For ease of presentation, we use $C(w)$, $\sigma_g$, $\sigma_H$, $\rho_{g,H}$ to denote $\sigma(g(w) - \tilde{g}(w))$, $\sigma(\tilde{g}(w^*))$, $\sigma(\tilde{H}(w^*))$, $\rho(\tilde{g}(w^*), \tilde{H}(w^*))$ in the following, respectively. (In the following context, we assume $\sigma_g$ is a positive number.)

3.2 POWER-LAW DYNAMIC

According to the CLT, the gradient noise follows a Gaussian distribution if it has finite variance, i.e.,

$g(w) - \tilde{g}(w) \to_d \mathcal{N}(0, C(w))$ as $b \to \infty, \qquad (2)$

where $\to_d$ means "converge in distribution". Using the Gaussian distribution to model the gradient noise in SGD, the update rule of SGD can be written as:

$w_{t+1} = w_t - \eta g(w_t) + \eta \xi_t, \quad \xi_t \sim \mathcal{N}(0, C(w_t)). \qquad (3)$

Eq. 3 can be treated as the discretization of the following SDE, which we call the power-law dynamic:

$dw_t = -g(w_t)\,dt + \sqrt{\eta C(w_t)}\, dB_t. \qquad (4)$

The power-law dynamic characterizes how the distribution of $w$ changes as time goes on.
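To make the discretization in Eq. 3 concrete, the following is a minimal 1-dimensional sketch that simulates the power-law dynamic with the state-dependent variance $C(w) = \sigma_g + \sigma_H (w - w^*)^2$ (taking $\rho_{g,H} = 0$) alongside the constant-variance Langevin approximation. All constants (H, ETA, SIGMA_G, SIGMA_H) are illustrative assumptions rather than values used in the paper; the heavier tail of the state-dependent dynamic typically shows up as a larger excess kurtosis of the iterates.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative constants (assumptions, not values from the paper).
H, ETA = 2.0, 0.1            # curvature at w* and learning rate
SIGMA_G, SIGMA_H = 0.5, 1.0  # variance of g~(w*) and of H~(w*)
W_STAR = 0.0                 # center of the quadratic basin

def grad(w):
    # Full gradient of the quadratic loss L(w) = L(w*) + 0.5 * H * (w - w*)^2.
    return H * (w - W_STAR)

def noise_var(w, state_dependent=True):
    # C(w) = sigma_g + sigma_H * (w - w*)^2 (Proposition 1 with rho_{g,H} = 0);
    # a constant C(w) = sigma_g recovers the Langevin approximation.
    return SIGMA_G + SIGMA_H * (w - W_STAR) ** 2 if state_dependent else SIGMA_G

def simulate(steps=50_000, state_dependent=True):
    # Discretization of Eq. 3: w_{t+1} = w_t - eta*g(w_t) + eta*xi_t,
    # with xi_t ~ N(0, C(w_t)).
    w, trace = 0.0, np.empty(steps)
    for t in range(steps):
        xi = rng.normal(0.0, np.sqrt(noise_var(w, state_dependent)))
        w = w - ETA * grad(w) + ETA * xi
        trace[t] = w
    return trace

def excess_kurtosis(x):
    x = x - x.mean()
    return np.mean(x ** 4) / np.mean(x ** 2) ** 2 - 3.0

print("excess kurtosis, power-law dynamic:", excess_kurtosis(simulate(state_dependent=True)))
print("excess kurtosis, Langevin dynamic: ", excess_kurtosis(simulate(state_dependent=False)))
```

A Gaussian stationary distribution has zero excess kurtosis, so the gap between the two printed values is a crude one-number summary of the heavy-tailedness induced by the state-dependent diffusion.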
The distribution density of the parameter $w$ at time $t$, denoted $p(w, t)$, is determined by the Fokker–Planck equation (Zwanzig's type (Guo & Du, 2014)):

$\frac{\partial}{\partial t} p(w, t) = \nabla\big(p(w, t)\, g(w)\big) + \frac{\eta}{2}\,\nabla\big(C(w)\,\nabla p(w, t)\big). \qquad (5)$

The stationary distribution of the power-law dynamic is obtained by setting the left-hand side of the Fokker–Planck equation to zero. The following theorem gives the analytic form of the stationary distribution of the power-law dynamic; it is heavy-tailed, and its density decays at a polynomial rate in $w - w^*$. This is why we call the stochastic differential equation in Eq. 4 the power-law dynamic.

Theorem 2 The stationary distribution density of the 1-dimensional power-law dynamic (Eq. 4) is

$p(w) = \frac{1}{Z}\,\big(C(w)\big)^{-\frac{H}{\eta\sigma_H}} \exp\!\left(\frac{4H\rho_{g,H}\,\mathrm{ArcTan}\!\big(C'(w)\big/\sqrt{4\sigma_H\sigma_g - 4\rho_{g,H}^2}\big)}{\eta\sigma_H\sqrt{4\sigma_H\sigma_g - 4\rho_{g,H}^2}}\right), \qquad (6)$

where $C(w) = \sigma_g + 2\rho_{g,H}(w - w^*) + \sigma_H(w - w^*)^2$, $Z$ is the normalization constant, and $\mathrm{ArcTan}(\cdot)$ is the arctangent function.

We now discuss the properties of $p(w)$. The rate at which $p(w)$ decreases as $w$ moves away from the center $w^*$ is mainly determined by the term $C(w)^{-\frac{H}{\eta\sigma_H}}$ (because the function $\mathrm{ArcTan}(\cdot)$ is bounded), which is a polynomial function of $w - w^*$. Compared with the Gaussian distribution, whose density decays at an exponential rate, the power-law distribution is less concentrated in the quadratic basin $B(w^*, \epsilon)$ and is heavy-tailed. We call $\frac{H}{\eta\sigma_H}$ the tail-index of $p(w)$ and denote it as $\kappa$ in the following. We can conclude that the state-dependent noise results in a heavy-tailed distribution of parameters, which matches the observations in (Mahoney & Martin, 2019). The Langevin dynamic with constant diffusion can be regarded as a special case of the power-law dynamic with $\rho_{g,H} = 0$ and $\sigma_H = 0$; in this case, $p(w)$ degenerates to a Gaussian distribution. Compared with the α-stable process, we do not assume infinite variance of the gradient noise and demonstrate another mechanism that results in a heavy-tailed distribution of parameters.

We empirically observe the covariance matrix around the local minimum of the training loss for deep neural networks. The results are shown in Figure 1; more details are given in Appendix 7.1. We have the following observations: (1) The traces of the covariance matrices for the deep neural networks can be well approximated by quadratic curves, which supports Proposition 1. (2) The minimum of the quadratic curve is located nearly at the local minimum $w^*$, which indicates that the coefficient of the first-order term $\rho_{g,H} \approx 0$. Based on the fact that $\rho_{g,H}$ is not the determining factor of the tail of the distribution in Eq. 6 and on the observations in Figure 1, we consider the simplified form $C(w) = \sigma_g + \sigma_H(w - w^*)^2$.

Corollary 3 If $C(w) = \sigma_g + \sigma_H(w - w^*)^2$, the stationary distribution of the 1-dimensional power-law dynamic (Eq. 4) is

$p(w) = \frac{1}{Z}\big(1 + \sigma_H \sigma_g^{-1}(w - w^*)^2\big)^{-\kappa}, \qquad (7)$

where $Z$ is the normalization constant and $\kappa = \frac{H}{\eta\sigma_H}$ is the tail-index.

The distribution density in Eq. 7 is known as the power-law κ distribution (Zhou & Du, 2014) (it is also called the q-Gaussian distribution in (Tsallis & Bukman, 1996)). As $\kappa \to \infty$, the density tends to a Gaussian, i.e., $p(w) \propto \exp\!\big(-\frac{H(w - w^*)^2}{\eta\sigma_g}\big)$. The power-law κ distribution becomes more heavy-tailed as $\kappa$ becomes smaller, and it assigns higher probability to values far away from the center $w^*$. Intuitively, a smaller $\kappa$ helps the dynamic escape from local minima faster.
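As a quick numerical illustration of the heavy tail in Corollary 3, the sketch below evaluates the power-law κ density of Eq. 7 and its Gaussian limit on a grid and compares their tail probabilities. The constants are illustrative assumptions (with these values κ = H/(ησH) = 20), and the tail of the power-law density typically remains several orders of magnitude heavier than the Gaussian one for thresholds away from the center.

```python
import numpy as np

# Illustrative constants (assumptions, not values from the paper).
H, ETA = 2.0, 0.1
SIGMA_G, SIGMA_H = 0.5, 1.0
KAPPA = H / (ETA * SIGMA_H)          # tail-index kappa = H / (eta * sigma_H)

w = np.linspace(-20.0, 20.0, 200_001)   # grid around w* = 0
dw = w[1] - w[0]

# Power-law kappa density of Eq. 7, normalized numerically on the grid.
power_law = (1.0 + (SIGMA_H / SIGMA_G) * w ** 2) ** (-KAPPA)
power_law /= power_law.sum() * dw

# Gaussian limit (kappa -> infinity): p(w) proportional to exp(-H*w^2/(eta*sigma_g)).
gaussian = np.exp(-H * w ** 2 / (ETA * SIGMA_G))
gaussian /= gaussian.sum() * dw

def tail_prob(p, threshold):
    # P(|w - w*| > threshold) under the discretized density p.
    return p[np.abs(w) > threshold].sum() * dw

for thr in (0.5, 1.0, 2.0):
    print(f"P(|w| > {thr}): power-law {tail_prob(power_law, thr):.3e}"
          f"  vs  Gaussian {tail_prob(gaussian, thr):.3e}")
```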
In the approximation of dynamic of SGD, κ equals the signal (i.e., H(w∗)) to noise (i.e., ησH ) ratio of second-order derivative at w∗ in SGD, and κ is linked with three factors: (1) the curvature H(w∗); (2) the fluctuation of the curvature over training data; (3) the hyper-parameters including η and minibatch size b. Please note that σH linearly decreases as the batch size b increases. 3.3 MULTIVARIATE POWER-LAW DYNAMIC In this section, we extend the power-law dynamic to d-dimensional case. We first illustrate the covariance matrix C(w) of gradient noise in SGD. We use the subscripts to denote the element in a vector or a matrix. We use Σg to denote the covariance matrix of g̃(w∗) and assume that Σg is isotropic (i.e., Σg = σg · I). We also assume that Cov(H̃i(w∗), H̃j(w∗)) are equal for all i, j. It can be shown that C(w) = Σg(1 + (w−w∗)TΣHΣ−1g (w−w∗)). Similarly as 1-dimensional case, we omit the first-order term (w − w∗) in C(w). Readers can refer Proposition 10 in Appendix 7.2 for the detailed derivation. We suppose that the signal to noise ratio of H̃(w∗) can be characterized by a scalar κ, i.e., ηΣH = 1 κ ·H(w ∗). Then C(w) can be written as C(w) = Σg(1 + 1 ηκ (w − w∗)TH(w∗)Σ−1g (w − w∗)). (8) Theorem 4 Ifw ∈ Rd andC(w) has the form in Eq.(8) forw ∈ B(w∗, ). The stationary distribution density of power-law dynamic is p(w) = 1 Z [1 + 1 ηκ (w − w∗)TH(w∗)Σ−1g (w − w∗)]−κ (9) for w ∈ B(w∗, ), where Z is the normalization constant and κ satisfies ηΣH = 1κ ·H(w ∗). Remark: The multivariate power-law κ distribution (Eq.9) is a natural extension of the 1-dimensional case. Actually, the assumptions on Σg and κ can be replaced by just assuming Σg, H(w∗),ΣH are codiagonalized. Readers can refer Proposition 11 in Appendix 7.2 for the derivation. 4 ESCAPING EFFICIENCY OF POWER-LAW DYNAMIC In this section, we analyze the escaping efficiency of power-law dynamic from local minima and its relation to generalization. Specifically, we analyze the mean escaping time for wt to escape from a basin. As shown in Figure.2, we suppose that there are two basins whose bottoms are denoted as a and c respectively and the saddle point b is the barrier between two basins. The barrier height is denoted as ∆L = L(b)−L(a). Definition 5 Suppose wt starts at the local minimum a, we denote the time for wt to first reach the saddle point b as inf{t > 0|w0 = a,wt = b}. The mean escaping time τ is defined as τ = Ewt [inf{t > 0|w0 = a,wt = b}]. We first give the mean escaping time for 1-dimensional case in Lemma 6 and then we give the mean escaping time for high-dimensional power-law dynamic in Theorem 7. To analyze the mean escaping time, we take the following assumptions. Assumption 1: The loss function around critical points can be written as L(w) = L(w∗) + 12 (w − w∗)TH(w∗)(w − w∗), where w∗ is a critical point. Assumption 2: The system is in equilibrium near minima, i.e., ∂p(w,t)∂t = 0. Assumption 3: (Low temperature assumption) The gradient noise is small, i.e., ησg ∆L. These three assumptions are commonly used in analyzing escaping time (Xie et al., 2020; Zhou & Du, 2014) for a dynamic. Because both a and b are critical points, we can apply Assumption 1 to get the loss surface around them. We put more discussions about the assumptions in Appendix 7.3.2. We suppose the basin a is quadratic and the variance of noise has the form that C(w) = σga +σHa(w− a)2, which can also be written as C(w) = σga + 2σHa Ha (L(w)− L(a)). 
Furthermore, we suppose that C(w) = σga + 2σHa Ha (L(w) − L(a)) on the whole escaping path from a to b (not just near the local minimum a). It means that the variance of gradient noise becomes larger as the loss becomes larger. The following lemma gives the mean escaping time of power-law dynamic for 1-dimensional case. Lemma 6 Suppose that Assumption 1-3 are satisfied and C(w) = σga + 2σHa Ha (L(w)− L(a)) on the whole escaping path from a to b. The mean escaping time of 1-dimensional power-law dynamic is, τ = 2π (1− 1 2κ ) √ Ha|Hb| ( 1 + 2 κησga ∆L )κ− 1 2 , (10) where κ = HaησHa > 1 2 , Ha and Hb are the second-order derivatives of training loss at local minimum a and at saddle point b, respectively. The proof of Lemma 6 is based on the results in (Zhou & Du, 2014). We provide a full proof in Appendix 7.3.1. For the dynamic near the saddle point, we just assume that its dynamic is the same as that near the local minimum for simplicity. This assumption is not necessary and we put the extension to more complex dynamic in Appendix 7.3.3. We summarize the mean escaping time of power-law dynamic and dynamics in previous works in Table 1. Based on the results, we have the following discussions. Comparison with other dynamics: (1) Both power-law dynamic and Langevin dynamic can escape sharp minima faster than flat minima, where the sharpness is measured by Ha and larger Ha corresponds to sharper minimum. Power-law dynamic improves the order of barrier height (i.e., ∆L) from exponential to polynomial compared with Langevin dynamic, which implies a faster escaping efficiency of SGD to escape from deep basin. (2) The mean escaping time for α-stable process is independent with the barrier height, but it is in polynomial order of the width of the basin (i.e., width=|b− a|). Compared with α-stable process, the result for power-law dynamic is superior in the sense that it is also in polynomial order of the width (if ∆L ≈ O(|b− a|2)) and power-law dynamic does not rely on the infinite variance assumption. Based on Lemma 6, we analyze the mean escaping time for d-dimensional case. Under the low temperature condition, the probability density concentrates only along the most possible escaping paths in the high-dimensional landscape. For rigorous definition of most possible escaping paths, readers can refer section 3 in (Xie et al., 2020). For simplicity, we consider the case that there is only one most possible escaping path between basin a and basin c. Specifically, the Hessian at saddle point b has only one negative eigenvalue and the most possible escaping direction is the direction corresponding to the negative eigenvalue of the Hessian at b. Theorem 7 Suppose that Assumption 1-3 are satisfied. For w ∈ Rd, we suppose C(w) = Σga + 2 ηκ (L(w) − L(a)) on the whole escaping path from a to b and there is only one most possible path path between basin a and basin c. The mean escaping time for power-law dynamic escaping from basin a to basin c is τ = 2π √ −det(Hb) (1− d 2κ ) √ det(Ha) 1 |Hbe| ( 1 + 1 ηκσe ∆L )κ− 1 2 , (11) where e indicates the most possible escaping direction, Hbe is the only negative eigenvalue of Hb, σe is the eigenvalue of Σga that corresponds to the escaping direction, ∆L = L(b)− L(a), and det(·) is the determinant of a matrix. Remark: In d-dimensional case, the flatness is measured by det(Ha). 
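Before discussing degenerate Hessians, it may help to see the scaling of Lemma 6 numerically. The sketch below evaluates the mean escaping time of Eq. 10 for several barrier heights and contrasts it with a Kramers-type exponential expression that merely stands in for the constant-diffusion (Langevin) case; the exact constant-diffusion formula of Table 1 is not reproduced here, and all constants are illustrative assumptions.

```python
import numpy as np

# Illustrative constants (assumptions, not values from the paper).
ETA = 0.1                    # learning rate
H_A, H_B = 2.0, 2.0          # second derivatives at minimum a and saddle b
SIGMA_GA, SIGMA_HA = 0.5, 1.0
KAPPA = H_A / (ETA * SIGMA_HA)   # kappa = H_a / (eta * sigma_{H_a})

def mean_escape_time_power_law(delta_L):
    # Eq. 10: tau = 2*pi / ((1 - 1/(2*kappa)) * sqrt(Ha*|Hb|))
    #               * (1 + 2*delta_L / (kappa*eta*sigma_ga))^(kappa - 1/2)
    prefactor = 2.0 * np.pi / ((1.0 - 1.0 / (2.0 * KAPPA)) * np.sqrt(H_A * abs(H_B)))
    return prefactor * (1.0 + 2.0 * delta_L / (KAPPA * ETA * SIGMA_GA)) ** (KAPPA - 0.5)

def mean_escape_time_constant_diffusion(delta_L):
    # Kramers-type stand-in for the constant-diffusion comparison: exponential
    # in the barrier height.  Only the exponential scaling matters here; the
    # prefactor is not the one used in the paper's Table 1.
    prefactor = 2.0 * np.pi / np.sqrt(H_A * abs(H_B))
    return prefactor * np.exp(2.0 * delta_L / (ETA * SIGMA_GA))

for dL in (0.5, 1.0, 2.0, 4.0):
    print(f"barrier {dL:>4}: power-law {mean_escape_time_power_law(dL):.3e}"
          f"   constant-diffusion {mean_escape_time_constant_diffusion(dL):.3e}")
```

As the barrier height grows, the polynomial expression grows far more slowly than the exponential one, which is the qualitative comparison made in the text.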
IfHa has zero eigenvalues, we can replace Ha by H+a in above theorem, where H + a is obtained by projecting Ha onto the subspace composed by the eigenvectors corresponding to the positive eigenvalues of Ha. This is because by Taylor expansion, the loss L(w) only depends on the positive eigenvalues and the corresponding eigenvectors of Ha, i.e., L(w) = L(a) + 12 (w − a) THa(w − a) = L(a) + 12 (P(w − a)) TΛ H+a P(w − a), where Λ H+a is a diagonal matrix composed by non-zero eigenvalues of Ha and the operator P(·) operates the vector to the subspace corresponding to non-zero eigenvalues of Ha. Therefore, the dimension d in Theorem 7 can be regarded as the dimension of subspace that is composed by directions with large eigenvalues. It has been observed that most of the eigenvalues in H is very small (Sagun et al., 2016). Therefore, d will not be a large number and power-law dynamic in multi-dimensional case will inherit the benefit of that in 1-dimensional case compared with Langevin dynamic and α-stable process. The next theorem give an upper bound of the generalization error of the stationary distribution of power-law dynamic, which shows that flatter minimum has smaller generalization error. Theorem 8 Suppose that w ∈ Rd and κ > d2 . For δ > 0, with probability at least 1 − δ, the stationary distribution of power-law dynamic has the following generalization error bound, Ew∼p(w),x∼P(x)`(w, x) ≤ Ew∼p(w)L(w) + √ KL(p||p′) + log 1δ + log n+ 2 n− 1 , where KL(p||p′) ≤ 12 log det(H) det(Σg) + Tr(ηΣgH −1)−2d 4(1− 1κ ( d 2−1)) + d2 log 2 η , p(w) is the stationary distribution of d-dimensional power-law dynamic, p′(w) is a prior distribution which is selected to be standard Gaussian distribution, and P(x) is the underlying distribution of data x, det(·) and Tr(·) are the determinant and trace of a matrix, respectively. We make the following discussions on results in Theorem 8. For 1-dimensional case, we have if H > η 2(1+ 1 2κ ) , KL divergence is decreasing as H decreases. For d > 1 and fixed Tr(ΣgH−1) and det(Σg), the generalization error (i.e., Ew∼p(w),x∼P(x)`(w, x)− Ew∼p(w)L(w)) is decreasing as det(H) decreases, which indicates that flatter minimum has smaller generalization error. Moreover, if 2d > Tr(ηΣgH−1), the generalization error is decreasing as κ increases. When κ → ∞, the generalization error tends to that for Langevin dynamic. Combining the mean escaping time and the generalization error bound, we can conclude that state-dependent noise makes SGD escape from sharp minima faster and implicitly tend to learn a flatter model which generalizes better. 5 EXPERIMENTS In this section, we conduct experiments to verify the theoretical results. We first study the fitness between parameter distribution trained by SGD and power-law κ distribution. Then we compare the escaping behavior for power-law dynamic, Langevin dynamic and SGD. 5.1 FITTING PARAMETER DISTRIBUTION USING POWER-LAW DISTRIBUTION We investigate the distribution of parameters trained by SGD on deep neural networks and use power-law κ distribution to fit the parameter distribution. We first use SGD to train various types of deep neural networks till it converge. For each network, we run SGD with different minibatch sizes over the range {64, 256, 1024}. For the settings of other hyper-parameters, readers can refer Appendix 7.5.2. We plot the distribution of model parameters at the same layer using histogram. 
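The fit described next is done with Mathematica's TsallisQGaussianDistribution[]; as a rough, non-authoritative counterpart, a histogram of flattened parameters could also be fitted in Python as sketched below. The synthetic Student-t samples merely stand in for a real weight tensor, and the initial guesses passed to curve_fit are assumptions; in practice one would pass the actual flattened weights of the trained layer instead.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)

def q_gaussian_density(w, kappa, beta, scale):
    # Unnormalized power-law kappa density (Eq. 7): scale * (1 + beta*w^2)^(-kappa).
    return scale * (1.0 + beta * w ** 2) ** (-kappa)

# Stand-in for a flattened weight tensor of a trained network; here we draw
# synthetic samples from a (heavy-tailed) Student-t purely for illustration.
samples = 0.02 * rng.standard_t(df=5, size=200_000)

# Histogram of the "parameters", as in Figure 3 of the paper.
counts, edges = np.histogram(samples, bins=200, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])

# Least-squares fit of the q-Gaussian density to the histogram.
(kappa_hat, beta_hat, scale_hat), _ = curve_fit(
    q_gaussian_density, centers, counts,
    p0=(3.0, 1.0 / samples.var(), counts.max()), maxfev=20_000)

print(f"fitted tail-index kappa ~ {kappa_hat:.2f}")
```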
Next, we use the power-law κ distribution to fit the distribution of the parameters and estimate the value of κ via the built-in function "TsallisQGaussianDistribution[]" in the Mathematica software. In this section we show results for LeNet-5 on the MNIST dataset and ResNet-18 on the CIFAR10 dataset (LeCun et al., 2015; He et al., 2016b), and put results for other network architectures in Appendix 7.5.2. In Figure 3, we report the generalization error (i.e., test error minus training error) and the values of κ that best fit the histograms (the training errors under the six settings are almost zero). We have the following observations: (1) The distribution of the parameters trained by SGD can be well fitted by the power-law κ distribution (blue curve). (2) As the minibatch size becomes larger, κ becomes larger. This is because the noise σH decreases linearly as the minibatch size increases and κ = H/(ησH). (3) As κ becomes smaller, the generalization error becomes lower, which indicates that κ also serves as an indicator of generalization. These results are consistent with the theory in Section 4.

5.2 COMPARISON ON ESCAPING EFFICIENCY

We use a 2-dimensional model to simulate the escaping efficiency from minima for the power-law dynamic, the Langevin dynamic, and SGD. We design a non-convex 2-dimensional function written as $L(w) = \frac{1}{n}\sum_{i=1}^{n}\ell(w - x_i)$, where $\ell(w) = \frac{1}{5}\sum_{j=1}^{2}|w_j - 1|^{2.5}\cdot|w_j + 1|^{3}$ and the training data $x_i \sim \mathcal{N}(0, 0.01 I_2)$. We regard the following optimization iterates as the numerical discretization of the power-law dynamic,

$w_{t+1} = w_t - \eta g(w_t) + \eta\lambda_2\sqrt{1 + \lambda_1(w_t - w^*)^2}\odot\xi,$

where $\xi \sim \mathcal{N}(0, I_2)$, $\lambda_1$ and $\lambda_2$ are two hyper-parameters, and $\odot$ stands for the Hadamard product. Note that if we set $\lambda_1 = 0$, the iterates can be regarded as a discretization of the Langevin dynamic. We set the learning rate $\eta = 0.025$ and take 500 iterations in each training run. In order to match the trace of the covariance matrix of the stochastic gradient at the minimum $w^*$ with the methods above, $\lambda_2$ is chosen to satisfy $\mathrm{Tr}(\mathrm{Cov}(\lambda_2\xi)) = \mathrm{Tr}(\mathrm{Cov}(g(w^*)))$. We compare the success rate of escaping for the power-law dynamic, the Langevin dynamic, and SGD by repeating the experiment 100 times. To analyze the noise term $\lambda_1$, we choose different values of $\lambda_1$ and evaluate the corresponding success rate of escaping, as shown in Figure 4(c). The results show that: (1) there is a positive correlation between $\lambda_1$ and the success rate of escaping; (2) the power-law dynamic can mimic the escaping efficiency of SGD, while the Langevin dynamic cannot. We then scale the loss function by 0.9 to make the minima flatter and repeat all the algorithms under the same setting. The success rate for the scaled loss function is shown in Figure 4(d). We observe that all dynamics escape from the flatter minima more slowly.

6 CONCLUSION

In this work, we study the dynamic of SGD by investigating the state-dependent variance of the stochastic gradient. We propose the power-law dynamic with state-dependent diffusion to approximate the dynamic of SGD. We analyze the escaping efficiency from local minima and the PAC-Bayes generalization error bound for the power-law dynamic. The results indicate that state-dependent noise helps SGD escape from poor local minima faster and generalize better. We present direct empirical evidence to support our theoretical findings. This work may motivate many interesting research topics, for example, non-Gaussian state-dependent noise, new types of state-dependent regularization in deep learning algorithms, and more accurate characterizations of the loss surface of deep neural networks. We will investigate these topics in future work.
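For readers who want to reproduce the flavor of the Section 5.2 toy study before moving to the appendix, the following is a self-contained sketch of the 2-D escaping experiment. Several details are assumptions on our part rather than the paper's exact protocol: the number of training points n, the batch size of 1 used for the SGD baseline and for matching λ2, the starting minimum (1, 1), and the escape criterion (a run counts as escaped if it ends nearer a different corner of {−1, 1}² than the starting one). The value λ1 = 32 follows the rough estimate mentioned in Appendix 7.5.3; with these assumed settings the absolute success rates may differ substantially from Figure 4, and the sketch is meant to show the mechanics of the comparison rather than to reproduce the numbers.

```python
import numpy as np

rng = np.random.default_rng(0)

# 2-D toy loss of Section 5.2: L(w) = (1/n) * sum_i l(w - x_i),
# with l(w) = (1/5) * sum_j |w_j - 1|^2.5 * |w_j + 1|^3 and x_i ~ N(0, 0.01*I_2).
N_DATA, ETA, STEPS, RUNS = 100, 0.025, 500, 100   # n and batch size are assumptions
DATA = rng.normal(0.0, 0.1, size=(N_DATA, 2))     # std 0.1 gives variance 0.01

def grad_l(w):
    # Coordinate-wise gradient of l(w); works on arrays of shape (..., 2).
    a, b = w - 1.0, w + 1.0
    return 0.2 * (2.5 * np.sign(a) * np.abs(a) ** 1.5 * np.abs(b) ** 3
                  + 3.0 * np.abs(a) ** 2.5 * np.sign(b) * np.abs(b) ** 2)

def full_grad(w):
    return grad_l(w - DATA).mean(axis=0)

W_STAR = np.array([1.0, 1.0])   # starting minimum (an assumption)

# lambda_2 matched so that Tr(Cov(lambda_2 * xi)) equals the trace of the
# covariance of the (batch-size-1) stochastic gradient at w*.
per_sample = grad_l(W_STAR - DATA)
LAMBDA_2 = np.sqrt(np.trace(np.cov(per_sample.T)) / 2.0)

def escape_rate(lambda_1, use_sgd=False, batch_size=1):
    escaped = 0
    for _ in range(RUNS):
        w = W_STAR.copy()
        for _ in range(STEPS):
            if use_sgd:
                batch = DATA[rng.integers(0, N_DATA, size=batch_size)]
                w = w - ETA * grad_l(w - batch).mean(axis=0)
            else:
                noise = LAMBDA_2 * np.sqrt(1.0 + lambda_1 * (w - W_STAR) ** 2)
                w = w - ETA * full_grad(w) + ETA * noise * rng.standard_normal(2)
        # Escape criterion (an assumption): the run ends nearer a different
        # corner of {-1, 1}^2 than the starting minimum.
        escaped += int(np.any(np.sign(w) != np.sign(W_STAR)))
    return escaped / RUNS

print("Langevin-like (lambda_1 = 0):  ", escape_rate(0.0))
print("power-law-like (lambda_1 = 32):", escape_rate(32.0))
print("SGD (batch size 1):            ", escape_rate(0.0, use_sgd=True))
```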
7 APPENDIX 7.1 POWER-LAW DYNAMIC AND STATIONARY DISTRIBUTION Theorem 9 (Theorem 2 in main paper) The stationary distribution density for 1-dimensional powerlaw dynamic (Eq.4) is p(w) = 1 Z (C(w)) − H ησH exp H ( 4ρg,H ·ArcTan ( C′(w)/ √ 4σHσg − 4ρ2g,H )) ησH √ 4σHσg − 4ρ2g,H , whereC(w) = σg+2ρg,H(w−w∗)+σH(w−w∗)2, Z is the normalization constant andArcTan(·) is the arctangent function. Proof: We denote the function H(4ρg,H ·ArcTan(C′(w)/ √ 4σHσg−4ρg,H)) ησH √ 4σHσg−4ρ2g,H as h(w). According to the Fokker-Planck equation, p(w) satisfies 0 = ∇p(w)g(w) + η 2 · ∇ · (C(w)∇p(w)) = ∇ · [ (p(w) · ∇L(w)) + η 2 C(w)∇p(w) ] = ∇ · [η 2 C(w) − HησH +1eh(w)∇(C(w) H ησH · e−h(w) · p(w)) ] Readers can check the third equality by calculating∇(C(w) H ησH · e−h(w) · p(w)) with C(w) = σg + 2ρg,H(w−w∗)+σH(w−w∗)2. Because the left side equals zero, we have C(w) H ησH ·e−h(w) ·p(w) equals to constant. So p(w) ∝ C(w)− H ησH ·eh(w) ·p(w). So we can get the conclusion in the theorem. Theorem 10 (Corollary 3 in main paper) If C(w) = σg + σH(w−w∗)2, the stationary distribution density of power-law dynamic is p(w) = 1 Z (1 + σHσ −1 g (w − w∗)2)−κ, (12) where Z = ∫ w (1 + σHσ −1 g (w − w∗)2)−κdw is the normalization constant and κ = HησH is the tail-index. Proof: According to the Fokker-Planck equation, p(w) satisfies 0 = ∇p(w)g(w) + η 2 · ∇ · (C(w)∇p(w)) = ∇(p(w) · ∇L(w)) + η 2 ∇ · (σg + 2σH H (L(w)− L(w∗)))∇p(w) = ∇ · η 2 C(w)(1 + 2σH Hσg (L(w)− L(w∗))) H −ησH ∇(1 + 2σH Hσg (L(w)− L(w∗))) H ησH p(w) Because the left side equals zero, we have (1 + 2σHHσg (L(w)− L(w ∗))) H ησH p(w) equals to constant. So p(w) ∝ (1 + 2σHHσg (L(w)− L(w ∗))) H −ησH . So we can get the conclusion in the theorem. We plot the un-normalized distribution density for 1-dimensional power-law dynamics with different κ in Figure 5. For the four curves, we set β = 10. We set κ = 1, 0.5, 0.1, 0 and use green, red, purple and blue line to illustrate their corresponding density function, respectively. When κ = 0, it is Gaussian distribution. From the figure, we can see that the tail for power-law κ-distribution is heavier than Gaussian distribution. Actually, for any given time t, the distribution p(w, t) for wt that satisfies power-law dynamic has analytic form, i.e., p(w, t) ∝ (1 + Hηκσ(t) (w −w(t)) 2)−κ, where w(t) = w∗ + (w0 −w∗)e−Ht and σ(t) is a function of σg and t. Readers can refer Eq.18 - Eq.23 in (Tsallis & Bukman, 1995) for the detailed expression. 7.2 SGD AND MULTIVARIATE POWER-LAW DYNAMIC The following proposition shows the covariance of stochastic gradient in SGD in d-dimensional case. We use the subscripts to denote the elements in a vector or a matrix. Proposition 11 For w ∈ Rd, we use C(w) to denote the covariance matrix of stochastic gradient g̃(w) = g̃(w∗)+H̃(w−w∗) and Σ to denote the covariance matrix of g̃(w∗). IfCov(g̃i(w∗), H̃jk) = 0,∀i, j, k, we have Cij(w) = Σij + (w − w∗)TA(ij)(w − w∗), (13) where Σij = Cov(g̃i(w∗), g̃j(w∗)), A(ij) is a d × d matrix with elements A(ij)ab = Cov(H̃ia, H̃jb) with a ∈ [d], b ∈ [d]. Eq.13 can be obtained by directly calculating the covariance of g̃i(w) and g̃j(w) where g̃i(w) = g̃i(w ∗) + ∑d a=1 H̃ia(wa − w∗a), g̃j(w) = g̃j(w∗) + ∑d b=1 H̃jb(wb − w∗b ). In order to get a analytic tractable form of C(w), we make the following assumptions: (1) If Σij = 0, A(ij) is a zero matrix; (2) For Σij 6= 0, A (ij) Σij are equal for all i ∈ [d], j ∈ [d]. The first assumption is reasonable because both Σij andA(ij) reflect the dependence of the derivatives along the i-th direction and j-th direction. 
Let ΣH = A (ij) Σij ,C(w) can be written asC(w) = Σg(1+(w−w∗)TΣH(w−w∗)). The d-dimensional power-law dynamic is written as dwt = −H(w − w∗)dt+ √ ηC(w)dBt, (14) where C(w) = Σg(1 + (w − w∗)TΣH(w − w∗)) which is a symmetric positive definite matrix that C(w)1/2 exists. The following proposition shows the stationary distribution of the d-dimensional power-law dynamic. Proposition 12 Suppose Σg,ΣH , H are codiagonalizable, i.e., there exist orthogonal matrix Q and diagonal matrices Λ,Γ,Π to satisfy Σg = QTΛQ,ΣH = QTΓQ,H = QTΠQ. Then, the stationary distribution of power-law dynamic is p(w) = 1 Z (1 + (w − w∗)TΣH(w − w∗))−κ, (15) where Z is the normalization constant and κ = Tr(H)ηTr(ΣHΣg) . Proof: Under the codiagonalization assumption on Σg,ΣH , H , Eq.15 can be rewritten as dvt = −Πvtdt+ √ ηΛ(1 + vTt Γvt)dBt if we let vt = Q(wt − w∗). We use φ(v) = ηC(v)2 = η 2 Λ(1 + v TΓv), the stationary probability density p(v) satisfies the Smoluchowski equation: 0 = d∑ i=1 ∂ ∂vi (Πivi · p(v)) + d∑ i=1 ∂ ∂vi · ( φi(w) ∂ ∂vi p(v) ) (16) = d∑ i=1 ∂ ∂vi (Πi·vi · p(v)) + d∑ i=1 ∂ ∂vi · ( ηΛi 2 (1 + vTΓv) ∂ ∂vi p(v) ) . (17) According to the result for 1-dimensional case, we have the expression of p(v) is p(v) ∝ (1 + vTΓv)−κ. To determine the value of κ, we put p(v) in the Smoluchowski equation to obtain d∑ i=1 Πip(v)− 2κ d∑ i=1 Πivi · Γivi · (1 + vTΓv)−κ−1 = d∑ i=1 ∂ ∂vi ( ηΛiκ(1 + v TΓv)−κ · Γivi ) = d∑ i=1 ( ηΛiκ(1 + v TΓv)−κ · Γi ) − 2 d∑ i=1 ( ηΛiκ 2(1 + vTΓv)−κ−1 · (Γivi)2 ) . The we have ∑d i=1 Πi = ηκ ∑d i=1 ΛiΓi. So we have κ = Tr(H) ηTr(ΣHΣg) . According to Proposition 11, we can also consider another assumption on Σg,ΣH , H without assuming their codiagonalization. Instead, we assume (1) If Σij = 0, A(ij) is a zero matrix; (2) For Σij 6= 0,A(ij) are equal for all i ∈ [d], j ∈ [d] and we denoteA(ij) = ΣH . We suppose η ·ΣH = κH . (3) Σg = σg · Id which is isotropic. Under these assumptions, we can get the following theorem. Theorem 13 (Theorem 4 in main paper) If w is d-dimensional and C(w) has the form in Eq.(8). The stationary distribution density of multivariate power-law dynamic is p(w) = 1 Z [1 + 1 ηκ (w − w∗)THΣ−1g (w − w∗)]−κ (18) where Z = ∫∞ −∞[1 + 1 ηκ (w − w ∗)THΣ−1g (w − w∗)]−κdw is the normalization constant. The proof for Theorem 12 is similar to that for Proposition 11. Readers can check that p(w) satisfies the Smoluchowski equation. An example to illustrate why C(w) is diagonally dominant. In Theorem 13, C(w) is assumed to be diagonally dominant. Diagonally dominant indicates that the variance of each dimension of g̃(w) is significantly larger than the covariance of two different dimensions of g̃(w). Consider a two layer fully-connected linear neural network fw,v(x) = wvx where w ∈ R1×m, v ∈ Rm×d, x ∈ Rd and h(·) is the ReLU activation. We consider the regression loss `(w, v) = 12 (y − fw,v(x)) 2. The gradient of wi and vjk can be written as ∂`(w, v) ∂wi = (fw,v(x)− y) · vix (19) ∂`(w, v) ∂vjk = (fw,v(x)− y) · wjxk, (20) where vi denotes the i-th row of matrix v. Suppose that the initialization of w and v is: wi i.i.d∼ N(0, δ1) and vij i.i.d∼ N(0, δ2) . We also assume that Exi = Exj = 0 and xi, xj are independent with each other for i 6= j where xi is the i-th dimension. We have Ew,v ∂`(w, v) ∂wi ∂`(w, v) ∂wj = Ew,v(fw,v(x)− y)2 · vix · vjx (21) = Ew,vy2 · vix · vjx+ Ew,v m∑ i=1 (wivix) 2 · vix · vjx− 2Ew,v( m∑ i=1 ywivix) · vix · vjx (22) Because the independence of vi, vj and their expectations are zero, we can obtain Ew,v ∂`(w,v)∂wi ∂`(w,v) ∂wj = 0 for i 6= j. 
Similarly, we can get Ew,v ∂`(w,v)∂wi ∂`(w,v) ∂vjk = 0 and Ew,v ∂`(w,v)∂vj′k′ ∂`(w,v) ∂vjk = 0 for (j, k) 6= (j′, k′). The above analyses show that the gradients for different dimensions are independent at initialization. It has been observed that many weights are kept random during training because of the over-parameterization Balduzzi et al. (2017). So, diagonalization dominant property of C(w) is reasonable. 7.3 SUPPLEMENTARY MATERIALS FOR RESULTS IN SECTION 4 7.3.1 PROOF FOR MEAN ESCAPING TIME Lemma 14 (Lemma 6 in main paper) We suppose C(w) = σga + 2σHa Ha (L(w)− L(a)) on the whole escaping path from a to b. The mean escaping time of the 1-dimensional power-law dynamic is, τ = 2π (1− 1 2κ ) √ Ha|Hb| ( 1 + 2 κησga ∆L )κ− 1 2 , (23) where κ = HaησHa , Ha, Hb are the second-order derivatives of training loss at local minimum a and saddle point b. Proof: According to (Van Kampen, 1992), the mean escaping time τ is expressed as τ = P (w∈Va)∫ Ω JdΩ , where Va is the volume of basin a, J is the probability current that satisfies −∇J(w, t) = ∂ ∂w (g(w) · p(w, t)) + ∂ ∂w ( φ(w) ∂p(w, t) ∂w ) = ∂ ∂w φ(w) · (1 + µ σg ∆L(w) )−κ ∂ ((1 + µ σg ∆L(w) )κ p(w, t) ) ∂w , where φ(w) = η2C(w) and µ = 2σHa Ha , σg = σga and ∆L(w) = L(w) − L(a). Integrating both sides, we obtain J(w) = −φ(w) · ( 1 + µ σg ∆L(w) )−κ ∂((1+ µσg ∆L(w))κp(w,t)) ∂w . Because there is no field source on the escape path, J(w) is fixed constant on the escape path. Multiplying φ(w)−1 · ( 1 + µσg ∆L(w) )κ on both sizes, we have J · ∫ c a φ(w)−1 · ( 1 + µ σg ∆L(w) )κ dw = − ∫ c a ∂ (( 1 + µσg ∆L(w) )κ p(w, t) ) ∂w dw = −0 + p(a). Then we get J = p(a)∫ c a φ(w)−1· ( 1+ µσg ∆L(w) )κ dw . As for the term ∫ c a φ(w)−1 · ( 1 + µσg ∆L(w) ) 1 κ dw, we have ∫ c a φ(w)−1 · ( 1 + µ σg ∆L(w) )κ dw (24) = 2 ησg ∫ c a ( 1 + µ σg ∆L(w) )−1+κ dw = 2 ησg ∫ b c ( 1 + µ σg (∆L(b)− 1 2 |Hb|(w − b)2) )−1+κ dw = 2 ησg ∫ b c ( 1 + µ σg (∆L(b)− 1 2 |Hb|(w − b)2) )−1+κ dw = 2 ησg (1 + µ σg ∆L(b))−1+κ ∫ b c ( 1− µ σg · 1 2 |Hb|(w − b)2 1 + µ σg ∆L(b) )−1+κ dw = 2 ησg (1 + µ σg ∆L(b))−1+κ · ( 1 2 µ σg |Hb| 1 + µ σg ∆L(b) )−1/2 ∫ 1 0 y−1/2(1− y)−1+κdy = 2 ησg (1 + µ σg ∆L(b))− 1 2 +κ √ 2σg µ|Hb| B( 1 2 , κ), where the third formula is based on the second order Taylor expansion. Under the low temperature assumption, we can use the second-order Taylor expansion around the saddle point b. As for the term P (w ∈ Va), we have P (w ∈ Va) = ∫ Va p(w)dV = ∫ w∈Va p(a)(1 + µ σg ∆L(w))−κ = p(a) √ 2σg µHa B( 1 2 , κ − 1 2 ), where we use Taylor expansion of L(w) near local minimum a. Then we have τ = P (w∈Va)∫ Ω JdΩ = P (w∈Va)J because J is a constant. Combining all the results, we can get the result in the lemma. Theorem 15 (Theorem 7 in main paper) Suppose w ∈ Rd and there is only one most possible path path between basin a and the outside of basin a. The mean escaping time for power-law dynamic escaping from basin a to the outside of basin a is τ = 2π √ −det(Hb) (1− d 2κ ) √ det(Ha) 1 |Hbe| ( 1 + 1 ηκσe ∆L )κ− 1 2 , (25) where e indicates the most possible escape direction, Hbe is the only negative eigenvalue of Hb, σe is the eigenvalue of Σga corresponding to the escape direction and ∆L = L(b)− L(a). Proof: According to (Van Kampen, 1992), the mean escaping time τ is expressed as τ = P (w∈Va)∫ Ω JdΩ , where Va is the volume of basin a, J is the probability current that satisfies −∇ · J(w, t) = ∂p(w,t)∂t . 
Under the low temperature assumption, the probability current J concentrates along the direction corresponding the negative eigenvalue of Hbe, and the probability flux of other directions can be ignored. Then we have∫ Ω JdΩ = Je · ∫ Ω ( 1 + 1 ηκ (w − b)T (HbΣ−1g )⊥e(w − b) )−κ+ 12 dΩ, (26) where Je = p(a) · η(1+µσe∆L(b)) −κ+ 1 2 √ µσe|Hbe| 2 √ 2B( 12 ,κ) which is obtained by the calculation of Je for 1-dimensional case in the proof of Lemma 13, and (·)⊥e denotes the directions perpendicular to the escape direction e. Suppose HbΣ−1g are symmetric matrix. Then there exist orthogonal matrix Q and diagonal matrix Λ = diag(λ1, · · · , λd) that satisfy HbΣ−1g = QTΛQ. We also denote v = Q(w − b). We define a sequence as Tk = 1 + 1ηκ · ∑d j=k λjv 2 j for k = 1, · · · , d. As for the term∫ Ω ( 1 + 1ηκ (w − b) T (HbΣ −1 g ) ⊥e(w − b) )−κ+ 12 dΩ, we have∫ Ω ( 1 + 1 ηκ (w − b)T (HbΣ−1g )⊥e(w − b) )−κ+ 12 dΩ = ∫ (1 + 1 ηκ · vTΛv)−κ+ 12 dw = ∫ (1 + 1 ηκ · d∑ j 6=e λjv 2 j ) −κ+ 12 dv =((ηκ)−1λ1) − 12 ∫ T −κ+ 12 2 B( 1 2 , κ)dv = d−2∏ j=0 ((ηκ)−1λj) − 12B( 1 2 , κ− j 2 ) = d−2∏ j=0 ((ηκ)−1λj) − 12 · √ πdΓ(κ− d2 ) Γ(κ) = √ (ηκπ)d−1 · Γ(κ− d−22 ) Γ(κ+ 12 ) √ det((HbΣ −1 g )⊥e) . As for the term P (w ∈ Va), we have P (w ∈ Va) = ∫ Va p(w)dV = p(a) ∫ w∈Va ( 1 + (w − w∗)THaΣ−1g (w − w∗) ) dw (27) =p(a) · √ (ηκπ)d · Γ(κ− d2 ) Γ(κ) √ det((HaΣ −1 g )) (28) where we use Taylor expansion of L(w) near local minimum a. Combined the results for P (w ∈ Va) and J , we can get the result. 7.3.2 FURTHER EXPLANATION ABOUT ASSUMPTION 1-3 We adopt the commonly used assumptions to analyze mean escaping time for dynamic system (Xie et al., 2020; Smith & Le, 2017; Zhou & Du, 2014). Assumption 2 can be replaced by weaker assumption that the system is quasi-equilibrium which is adopted in (Xie et al., 2020). For the differences between quasi-equilibrium and equilibrium, readers can refer to (Xie et al., 2020) for detailed discussions. Assumption 3 is commonly used (Xie et al., 2020; Zhou & Du, 2014). Under Assumption 3, the probability densities will concentrate around minima and the most possible paths. Assumption 3 will make the second order Taylor approximation more reasonable. 7.3.3 EXTENSION TO MORE COMPLEX DYNAMIC ON THE ESCAPING PATH In Lemma 6, we assume that C(w) = σga + 2σHa Ha (L(w) − L(a)) on the whole escaping path from a to b for ease of comparison and presentation. This assumption is not necessary and we can assume a different dynamic near saddle point b. Specially, we can assume the point z is the midpoint on the most possible path beween a and b, where L(z) = (1 − z)L(a) + zL(b). The dynamic with C(w) = σga + 2σHa Ha (L(w) − L(a)) dominates the path a → z and the dynamic with C(w) = σgb + 2σHb Hb (L(b)−L(w)) dominates the path z → b. Then only two things will be changed in proof of Lemma 6. First, we need to change the stationary distribution near saddle points according to its own dynamic in Eq.20. Second, we need to change the integral about probability density on the whole path to sum of integrals on these two sub-paths. Similar proof techniques are adopted for analyzing escaping time of Langevin dynamic in proof of Theorem 4.1 in the work Xie et al. (2020). Since the proof is analogous, we omit the details here. 7.4 PAC-BAYES GENERALIZATION BOUND We briefly introduce the basic settings for PAC-Bayes generalization error. The expected risk is defined as Ex∼P(x)`(w, x). Suppose the parameter follows a distribution with density p(w), the expected risk in terms of p(w) is defined as Ew∼p(w),x∼P(x)`(w, x). 
The empirical risk in terms of p(w) is defined as Ew∼p(w)L(w) = Ew∼p(w) 1n ∑n i=1 `(w, xi). Suppose the prior distribution over the parameter space is p′(w) and p(w) is the distribution on the parameter space expressing the learned hypothesis function. For power-law dynamic, p(w) is its stationary distribution and we choose p′(w) to be Gaussian distribution with center w∗ and covariance matrix I . Then we can get the following theorem. Theorem 16 (Theorem 8 in main paper) For w ∈ Rd, we select the prior distribution p′(w) to be standard Gaussian distribution. For δ > 0, with probability at least 1− δ, the stationary distribution of power-law dynamic has the following generalization error bound, Ew∼p(w),x∼P(x)`(w, x) ≤ Ew∼p(w)L(w) + √ KL(p||p′) + log 1δ + log n+ 2 n− 1 , (29) whereKL(p||p′) ≤ 12 log det(H) det(Σg) + Tr(ηΣgH −1)−2d 4(1− 1κ ( d 2−1)) + d2 log 2 η andP(x) is the underlying distribution of data x. Proof: Eq.(29) directly follows the results in (McAllester, 1999). Here we calculate the Kullback–Leibler (KL) divergence between prior distribution and the stationary distribution of power-law dynamic. The prior distribution is selected to be standard Gaussion distribution with distribution density p′(w) = 1√ (2π)d det (I) exp{− 12 (w−w ∗)T I(w−w∗)}. The posterior distribution density is the stationary distribution for power-law dynamic, i.e., p(w) = 1Z ·(1+ 1 ηκ ·(w−w ∗)THΣ−1g (w−w∗))−κ. Suppose HΣ−1g are symmetric matrix. Then there exist orthogonal matrix Q and diagonal matrix Λ = diag(λ1, · · · , λd) that satisfy HΣ−1g = QTΛQ. We also denote v = Q(w − w∗). We have log ( p(w) p′(w) ) = −κ log(1 + 1 ηκ · (w − w∗)THΣ−1g (w − w∗))− logZ + 1 2 (w − w∗)T I(w − w∗) + d 2 log 2π The KL-divergence is defined as KL(p(w)||p′(w)) = ∫ w p(w) log ( p(w) p′(w) ) dw. Putting v = Q(w − w∗) in the integral, we have KL(p(w)||p′(w)) = d 2 log 2π − logZ + 1 2Z ∫ v vT v ( 1 + 1 ηκ · vTΛv )−κ dv − 1 Zη ∫ v vTΛv · (1 + 1 ηκ · vTΛv)−κdv, (30) where we use the approximation that log(1 + x) ≈ x. We define a sequence as Tk = 1 + 1ηκ ·∑d j=k λjv 2 j for k = 1, · · · , d. We first calculate the normalization constant Z. Z = ∫ (1 + 1 ηκ · vTΛv)−κdw = ∫ (1 + 1 ηκ · d∑ j=1 λjv 2 j ) −κdv =((ηκ)−1λ1) − 12 ∫ T −κ+ 12 2 B( 1 2 , κ− 1 2 )dv = d∏ j=1 ((ηκ)−1λj) − 12B( 1 2 , κ− j 2 ) = d∏ j=1 ((ηκ)−1λj) − 12 · √ πdΓ(κ− d2 ) Γ(κ) We define Zj = ((ηκ)−1λj)− 1 2B ( 1 2 , κ− j 2 ) . 
For the third term in Eq.(30), we have 2Z · III = ∫ v vT v(1 + 1 ηκ vTΛv)−κdv = ∫ v2,···vd ∫ v1 v21 ( 1 + 1 ηκ · vTΛv )−κ dv1 + Z1 ( d∑ j=2 v2j )( 1 + 1 ηκ · d∑ j=2 λjv 2 j )−κ+ 1 2 dv2··· ,vd = ∫ v2,···vd T−κ2 ∫ v1 v21 ( 1 + (ηκ)−1λ1v 2 1 T2 )−κ dv1 + Z1 ( d∑ j=2 v2j )( 1 + 1 ηκ · d∑ j=2 λjv 2 j )−κ+ 1 2 dv2··· ,vd = ∫ v2,··· ,vd T−κ2 ∫ ( T2 (ηκ)−1λ1 ) 3 2 y 1 2 (1 + y)−κ dy + Z1 ( d∑ j=2 v2j )( 1 + 1 ηκ · d∑ j=2 λjv 2 j )−κ+ 1 2 dv2··· ,vd = ∫ v2,··· ,vd ((ηκ)−1λ1) − 3 2 T −κ+ 3 2 2 B ( 3 2 , κ− 3 2 ) + Z1 ( d∑ j=2 v2j )( 1 + 1 ηκ · d∑ j=2 λjv 2 j )−κ+ 1 2 dv2··· ,vd =( λ1 ηκ )− 3 2B ( 3 2 , κ− 3 2 )∫ v2,··· ,vd T −κ+ 3 2 2 dv2··· ,vd + ∫ v2,··· ,vd Z1 ( d∑ j=2 v2j )( 1 + 1 ηκ · d∑ j=2 λjv 2 j )−κ+ 1 2 dv2··· ,vd For term ∫ v2,··· ,vd T − 1κ+ 3 2 2 dv2··· ,vd in above equation, we have ∫ v2,··· ,vd T −κ+ 32 2 dv2··· ,vd = ∫ v3,··· ,vd T−κ+23 ((ηκ) −1λ2) − 12B ( 1 2 , κ− 2 ) dv3,··· ,vd = ∫ v4,··· ,vd T −κ+ 52 4 ((ηκ) −1λ2) − 12 ((ηκ)−1λ3) − 12B ( 1 2 , κ− 5 2 ) B ( 1 2 , κ− 2 ) dv4,··· ,vd = ∫ vd T −κ+ 12 + 1 2×d d d−1∏ j=2 ((ηκ)−1λj) − 12 d−1∏ j=2 B ( 1 2 , κ− ( j 2 + 1) ) dvd = d∏ j=2 ((ηκ)−1λj) − 12 d∏ j=2 B ( 1 2 , κ− ( j 2 + 1) ) Let Aj = ((ηκ)−1λj)− 3 2B ( 3 2 , κ− ( j 2 + 1) ) . According to the above two equations, we can get the recursion 2Z ∫ vT vT−κ1 dv =A1 · ∫ T −κ+ 32 2 + Z1 ∫ v2,··· ,vd d∑ j=2 v2j T−κ+ 122 dv2··· ,vd =A1 · ∫ T −κ+ 3−12 2 dv2···vd + Z1 ·A2 ∫ T −κ+ 42 3 dv3··· ,vd + Z1Z2 ∫ d∑ j=3 v2j T−κ+ 123 dv3··· ,vd = d−1∑ j=1 Aj j−1∏ k=1 Zk ∫ T −κ+ j+1+12 j+1 dvj+1,··· ,vd + d−1∏ k=1 Zk ∫ v2dT −κ+ d−12 d dvd = d−1∑ j=1 ( λj ηκ )− 3 2B ( 3 2 , κ− ( j 2 + 1) ) j−1∏ k=1 ( λk ηκ )− 1 2B ( 1 2 , κ− k 2 ) d∏ s=j+1 (( λs ηκ )− 1 2 d∏ s=j+1 B ( 1 2 , κ− (s 2 + 1) ) + d−1∏ j=1 ( λj ηκ )− 1 2B( 1 2 , κ− j 2 − 1) · (λd ηκ )− 3 2B( 3 2 , κ− (d 2 + 1)) = √ πdΓ(κ− d2 − 1)Tr(H −1Σg) 2Γ(κ) √ (ηκ)−(d+2) det(H−1Σg) We have III = √ πdΓ(κ− d2 − 1)Tr(H −1Σg) 4Γ(κ) √ (ηκ)−(d+2) det(H−1Σg) · d∏ j=1 ((ηκ)−1λj) 1 2 · Γ(κ)√ πdΓ(κ− d2 ) = ηκTr(H−1Σg) 4(κ− d2 − 1) Similarly, for the fourth term in Eq.(30), we have IV = κd 2(κ− d2−1) . Combining all the results together, we can get KL(p||p′) = 12 log det(H) (ηκ)d det(Σg) + log Γ(κ) Γ(κ− d2 ) + Tr(ηΣgH −1)−2d 4(1− 1κ ( d 2−1)) + d2 log 2. Using the fact that log Γ(κ) Γ(κ− d2 ) ≤ d2 log κ, we have KL(p||p ′) ≤ 12 log det(H) det(Σg) + Tr(ηΣgH −1)−2d 4(1− 1κ ( d 2−1)) + d2 log 2 η . 7.5 IMPLEMENTATION DETAILS OF THE EXPERIMENTS 7.5.1 OBSERVATIONS ON THE COVARIANCE MATRIX In this section, we introduce the settings on experiments of the quadratic approximation of covariance of the stochastic gradient on plain convolutional neural network (CNN) and ResNet. For each model, we use gradient descent with small constant learning rate to train the network till it converges. The converged point can be regarded as a local minimum, denoted as w∗. As for the detailed settings of the CNN model, the structure for plain CNN model is input → Conv1→ maxpool → Conv2→ maxpool → fc1→ Relu→ fc2→ output. Both Conv1 and Conv2 use 5 × 5 kernels with 10 channels and no padding. Dimensions of full connected layer fc1 and fc2 are 1600 × 50 and 50 × 10 respectively. We randomly sample 1000 images from FashionMNIST (Xiao et al., 2017) dataset as training set. The initialization method is the Kaiming initialization (He et al., 2015) in PyTorch. The learning rate of gradient descent is set to be 0.1. After 3000 iterations, GD converges with almost 100% training accuracy and the training loss being 1e−3. 
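A compact sketch of this covariance-trace measurement is given below, with a small random-data model standing in for the CNN just described. It estimates Tr(Cov) of the minibatch gradient at parameters perturbed away from a reference point and fits a quadratic in the perturbation size, as in Figure 1. The perturbation grid, scales, batch sizes, and the skipped full-batch pre-training to w* are all simplifying assumptions, not the paper's exact procedure (the concrete point-selection formula is described later in this subsection).

```python
import torch
import torch.nn as nn
import numpy as np

torch.manual_seed(0)

# Tiny stand-in for the converged model of Section 7.5.1; the real experiment
# uses a plain CNN on FashionMNIST trained to convergence with full-batch GD.
X = torch.randn(1000, 20)
Y = torch.randint(0, 10, (1000,))
model = nn.Sequential(nn.Linear(20, 50), nn.ReLU(), nn.Linear(50, 10))
loss_fn = nn.CrossEntropyLoss()

def flat_grad(batch_x, batch_y):
    model.zero_grad()
    loss_fn(model(batch_x), batch_y).backward()
    return torch.cat([p.grad.reshape(-1) for p in model.parameters()])

def trace_of_grad_cov(batch_size=32, n_batches=200):
    # Trace of the covariance of the minibatch gradient, estimated by resampling.
    grads = []
    for _ in range(n_batches):
        idx = torch.randint(0, X.shape[0], (batch_size,))
        grads.append(flat_grad(X[idx], Y[idx]))
    grads = torch.stack(grads)
    return grads.var(dim=0).sum().item()

# Perturb one layer's weights away from the current point (playing the role of
# w*) and record Tr(Cov) as a function of the signed perturbation size.
weight = model[0].weight
w_star = weight.data.clone()
direction = torch.randn_like(w_star)
direction /= direction.norm()

dists, traces = [], []
for i in range(-10, 11):
    delta = 0.05 * i
    weight.data = w_star + delta * direction
    dists.append(delta)
    traces.append(trace_of_grad_cov())
weight.data = w_star

# Quadratic fit, as in Figure 1 of the paper.
coeffs = np.polyfit(np.array(dists), np.array(traces), deg=2)
print("quadratic fit a*d^2 + b*d + c:", coeffs)
```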
As for ResNet, we use the ResNet-18 model (He et al., 2016b) and randomly sample 1000 images from Kaggle’s dogs-vs-cats dataset as training set. The initialization method is the Kaiming initialization (He et al., 2015) in PyTorch. The learning rate of gradient descent is set to be 0.001. After 10000 iterations, GD converges with 100% training accuracy and the training loss being 1e−3. We then calculate the covariance matrix of the stochastic gradient at some points belonging to the local region around w∗. The points are selected according to the formula: w∗layerL ± (i× Scale), where w∗layerL denotes the parameters at layer L, and i × Scale, i ∈ [N ] determines the distance away from w∗layerL. When we select points according to this formula by changing the parameters at layer L, we fixed the parameters at other layers. For both CNN model and ResNet18 model, we select 20 points by setting i = 1, · · · , 10. For example, for CNN model, we choose the 20 points by changing the parameters at the Conv1 layer with Scale = 0.001 and Conv2 layer with Scale = 0.0001, respectively. For ResNet18, we choose the 20 points by changing the parameters for a convolutional layer at the first residual block with Scale = 0.0001 and second residual block with Scale = 0.0001, respectively. The results are shown in Figure.1. The x-axis denotes the distance of the point away from the local minimum and the y-axis shows the value of the trace of covariance matrix at each point. The results show that the covariance of noise in SGD is indeed not constant and it can be well approximated by quadratic function of state (the blue line in the figures), which is consistent with our theoretical results in Section 3.1. 7.5.2 SUPPLEMENTARY EXPERIMENTS ON PARAMETER DISTRIBUTIONS OF DEEP NEURAL NETWORKS For Figure. 3(a), we train LeNet-5 on MNIST dataset using SGD with constant learning rate η = 0.03 for each batchsize till it converges. Parameters are conv2.weight in LeNet-5. For Figure 3(b), we train ResNet-18 on CIFAR10 using SGD with momentum. We do a RandomCrop on training set scaling to 32× 32 with padding = 4 and then a RandomHorizontalF lip. In training, momentum is set to be 0.9 and weight decay is set to be 5e− 4. Initial learning rate in SGD is set to be 0.1 and we using a learning rate decay of 0.1 on {150, 250}-th epoch respectively. We train it until converges after 250 epoch. Parameters are layer1.1.conv2.weight in ResNet-18. We also observe the parameter distribution on many pretrained models. Details for pre-trained models can be found on https://pytorch.org/docs/stable/torchvision/models.html. Figure.7 shows the distribution of parameters trained by SGD can be well fitted by powerlaw distribution. Parameters in this figure are all randomly selected to be features.10.weight, features.14.weight, features.5.expand3 × 3.weight, Mixed_6d.branch7 × 7_3.conv.weight, layer4.2.conv3.weight and features.denseblock2.denselayer1.conv2.weight for VGG-16, AlexNet, SqueezeNet 1.0, Inception v3, Wide ResNet-50-2 and DenseNet-121 respectively. A Q-Q plot is created by plotting quantiles of two probability distributions against one another, which can provide an assessment of "goodness of fit" by how much the solid line close to the dashed line. From Figure.8, it is clear that the solid lines in bottom pictures are closer to dashed lines on most cases, which indicates network parameters can be better fitted by power-law distribution. 
Moreover, solid lines in the upper plots severely deviate from dashed lines on the tail of distribution but those in the bottom plot do not, which means the distribution of parameters is indeed heavy-tailed. 7.5.3 FURTHER EXPLANATION ON EXPERIMENTS IN SECTION 5.2 As for the experiments for 2-D model, we also calculate coefficient of the second-order term for the quadratic curve shown in Figure.4(b), and its value is roughly 30, which matches the result in Figure.4(c) in the sense that the result for SGD is similar with the result for power-law dynamic with λ1 ≈ 32. 7.5.4 ESCAPING EFFICIENCY ON NEURAL NETWORK We follow the settings in (Zhu et al., 2019). For convenience of the readers, here we give the details of this setting again. We use corrupted FashionMNIST dataset which contains 1000 images with correct labels and another 200 images with random labels to be training data. A small LeNet-like network with 11,330 parameters is used. Firstly we run the full gradient decent to reach the parameters w∗ near the global minima. Then we continue training using both Langevin dynamic(GLD) and power-law dynamic(PLD). Following Zhu’s setting, the learning rates for GD, GLD and PLD are ηGD = 0.1, ηGLD = 0.07 and ηPLD = 0.07, respectively. For GLD, noise std σ = 10−4 as Zhu already tuned. For our PLD, wt+1 = wt − η∇L(wt) + η · α∇L(wt) √ 1 + β(wt − w∗)2 ξ, where α, β are hyperparameters, ξ ∼ N (0, I), and stands for Hadamard product. Here we select α = 2.4, β = 2 after grid search. Expected sharpness is measured as Eν∼
1. What is the focus of the paper regarding SGD noise and its impact on escaping times and generalization?
2. What are the strengths of the paper, particularly in terms of mathematical analysis and novelty?
3. What are the weaknesses of the paper regarding assumptions, precision, and clarity?
4. How does the reviewer assess the relationship between kappa and generalization, and how does Figure 3 relate to this aspect?
5. What are the concerns regarding the value of kappa in Figure 3, and how is it computed?
6. Is there any concern about the dependency on w - w* in Equation 6, and what happens when g(w) depends on (w - w*)?
7. What is the intuition behind \sigma_H, and how does it relate to local minima?
8. Are there any concerns regarding the definition of \Sigma_H in the multivariate case?
9. Why is the assumption that the signal-to-noise ratio of \tilde{H} can be characterized by a scalar considered reasonable?
Review
Summary: The paper studies the effect of SGD noise near a local minimum of the loss by using a novel Taylor expansion to estimate the distribution of gradient noise in the neighborhood of that minimum. The authors use this to derive closed-form equations describing the distribution of the iterates, which they use to characterize properties such as escaping times and generalization.

Pros: To my knowledge, the mathematical analysis appears to be quite novel and insightful. It is particularly interesting that the authors show that the second-order effect of the SGD noise in the Hessian induces a power-law distribution over the iterates. Some empirical support is provided for the theory.

Cons: In general, a clearer statement (and justification) of the assumptions is required. For example, it appears to be implicit throughout the paper that we only consider the neighborhood of a local minimum, so the analysis is essentially for a quadratic in this neighborhood. This should be stated more explicitly. I also have some concerns about mathematical precision in the theorem statements. It is sometimes unclear which computations are rigorous equalities and which are not - for example, in Lemma 6 about escaping times, exact equality is used. However, the proof relies on Taylor expansion and uses approximate equalities in the steps. This is potentially misleading. In general, the results seem interesting, and it is understandable that certain assumptions/heuristics must be used because this area of research is technically challenging. However, I would like to see the clarity of the presentation improved before recommending acceptance.

I have more specific questions regarding the details in the paper below:
- Could the authors elaborate on the relationship between kappa and generalization? From the paper my understanding was that smaller kappa meant flatter curvature and better generalization, but this doesn't seem to be supported by Figure 3.
- How is the value of kappa obtained in Figure 3? Is it computed via computing the Hessian and its covariance, or chosen to best fit the histograms in the figure?
- Eqn 6: It seems like this closed-form computation is specifically for the case when the function is quadratic (e.g., we take a 2nd-order Taylor approximation around a local min). Can the authors confirm? If this is the case, what happens to the dependency on w - w^* and why is there no such explicit term in Eqn 6? It would appear that g(w) should depend on (w - w^*).
- In the overparameterized regime, it would appear that \sigma_g could go to 0 if each training example is overfit by the model. It appears that plugging in \sigma_g = 0 would introduce some degeneracy in equations 6 and 7, however. Can the authors comment on this?
- Intuition on the term \sigma_H: what do we expect this to look like in practice, and do the authors have a sense of whether this term only matters around local minima?
- The definition of \Sigma_H in the multivariate case: in the first paragraph of Section 3, the definition on the LHS has no mention of i, j but the RHS does.
- Assuming the signal-to-noise ratio of \tilde{H} can be characterized by a scalar - why is this assumption reasonable?

EDIT: Changed my score from 5 to 6 after the author response/revision.
Title Dynamic of Stochastic Gradient Descent with State-dependent Noise Abstract Stochastic gradient descent (SGD) and its variants are mainstream methods to train deep neural networks. Since neural networks are non-convex, more and more works study the dynamic behavior of SGD and its impact to generalization, especially the escaping efficiency from local minima. However, these works make the over-simplified assumption that the distribution of gradient noise is stateindependent, although it is state-dependent. In this work, we propose a novel power-law dynamic with state-dependent diffusion to approximate the dynamic of SGD. Then, we prove that the stationary distribution of power-law dynamic is heavy-tailed, which matches the existing empirical observations. Next, we study the escaping efficiency from local minimum of power-law dynamic and prove that the mean escaping time is in polynomial order of the barrier height of the basin, much faster than exponential order of previous dynamics. It indicates that SGD can escape deep sharp minima efficiently and tends to stop at flat minima that have lower generalization error. Finally, we conduct experiments to compare SGD and power-law dynamic, and the results verify our theoretical findings. 1 INTRODUCTION Deep learning has achieved great success in various AI applications, such as computer vision, natural language processing, and speech recognition (He et al., 2016b; Vaswani et al., 2017; He et al., 2016a). Stochastic gradient descent (SGD) and its variants are the mainstream methods to train deep neural networks, since they can deal with the computational bottleneck of the training over large-scale datasets (Bottou & Bousquet, 2008). Although SGD can converge to the minimum in convex optimization (Rakhlin et al., 2012), neural networks are highly non-convex. To understand the behavior of SGD on non-convex optimization landscape, on one hand, researchers are investigating the loss surface of the neural networks with variant architectures (Choromanska et al., 2015; Li et al., 2018b; He et al., 2019b; Draxler et al., 2018; Li et al., 2018a); on the other hand, researchers illustrate that the noise in stochastic algorithm may make it escape from local minima (Keskar et al., 2016; He et al., 2019a; Zhu et al., 2019; Wu et al., 2019a; HaoChen et al., 2020). It is clear that whether stochastic algorithms can escape from poor local minima and finally stop at a minimum with low generalization error is crucial to its test performance. In this work, we focus on the dynamic of SGD and its impact to generalization, especially the escaping efficiency from local minima. To study the dynamic behavior of SGD, most of the works consider SGD as the discretization of a continuous-time dynamic system and investigate its dynamic properties. There are two typical types of models to approximate dynamic of SGD. (Li et al., 2017; Zhou et al., 2019; Liu et al., 2018; Chaudhari & Soatto, 2018; He et al., 2019a; Zhu et al., 2019; Hu et al., 2019; Xie et al., 2020) approximate the dynamic of SGD by Langevin dynamic with constant diffusion coefficient and proved its escaping efficiency from local minima.These works make over-simplified assumption that the covariance matrix of gradient noise is constant, although it is state-dependent in general. The simplified assumption makes the proposed dynamic unable to explain the empirical observation that the distribution of parameters trained by SGD is heavy-tailed (Mahoney & Martin, 2019). 
To model the heavy-tailed phenomenon, Simsekli et al. (2019); Şimşekli et al. (2019) point that the variance of stochastic gradient may be infinite, and they propose to approximate SGD by dynamic driven by α-stable process with the strong infinite variance condition. However, as shown in the work (Xie et al., 2020; Mandt et al., 2017), the gradient noise follows Gaussian distribution and the infinite variance condition does not satisfied. Therefore it is still lack of suitable theoretical explanation on the implicit regularization of dynamic of SGD. In this work, we conduct a formal study on the (state-dependent) noise structure of SGD and its dynamic behavior. First, we show that the covariance of the noise of SGD in the quadratic basin surrounding the local minima is a quadratic function of the state (i.e., the model parameter). Thus, we propose approximating the dynamic of SGD near the local minimum using a stochastic differential equation whose diffusion coefficient is a quadratic function of state. We call the new dynamic power-law dynamic. We prove that its stationary distribution is power-law κ distribution, where κ is the signal to noise ratio of the second order derivatives at local minimum. Compared with Gaussian distribution, power-law κ distribution is heavy-tailed with tail-index κ. It matches the empirical observation that the distribution of parameters becomes heavy-tailed after SGD training without assuming infinite variance of stochastic gradient in (Simsekli et al., 2019). Second, we analyze the escaping efficiency of power-law dynamic from local minima and its relation to generalization. By using the random perturbation theory for diffused dynamic systems, we analyze the mean escaping time for power-law dynamic. Our results show that: (1) Power-law dynamic can escape from sharp minima faster than flat minima. (2) The mean escaping time for power-law dynamic is only in the polynomial order of the barrier height, much faster than the exponential order for dynamic with constant diffusion coefficient. Furthermore, we provide a PAC-Bayes generalization bound and show power-law dynamic can generalize better than dynamic with constant diffusion coefficient. Therefore, our results indicate that the state-dependent noise helps SGD to escape from sharp minima quickly and implicitly learn well-generalized model. Finally, we corroborate our theory by experiments. We investigate the distributions of parameters trained by SGD on various types of deep neural networks and show that they are well fitted by power-law κ distribution. Then, we compare the escaping efficiency of dynamics with constant diffusion or state-dependent diffusion to that of SGD. Results show that the behavior of power-law dynamic is more consistent with SGD. Our contributions are summarized as follows: (1) We propose a novel power-law dynamic with state-dependent diffusion to approximate dynamic of SGD based on both theoretical derivation and empirical evidence. The power-law dynamic can explain the heavy-tailed phenomenon of parameters trained by SGD without assuming infinite variance of gradient noise. (2) We analyze the mean escaping time and PAC-Bayes generalization bound for power-law dynamic and results show that power-law dynamic can escape sharp local minima faster and generalize better compared with the dynamics with constant diffusion. Our experimental results can support the theoretical findings. 
2 BACKGROUND In the empirical risk minimization problem, the objective is L(w) = (1/n)·∑_{i=1}^{n} ℓ(x_i, w), where x_i, i = 1, · · · , n are n i.i.d. training samples, w ∈ R^d is the model parameter, and ℓ is the loss function. Stochastic gradient descent (SGD) is a popular optimization algorithm to minimize L(w). The update rule is w_{t+1} = w_t − η · g̃(w_t), where g̃(w_t) = (1/b)·∑_{x∈S_b} ∇_w ℓ(x, w_t) is the minibatch gradient calculated on a randomly sampled minibatch S_b of size b, and η is the learning rate. The minibatch gradient g̃(w_t) is an unbiased estimator of the full gradient g(w_t) = ∇L(w_t), and the term (g(w_t) − g̃(w_t)) is called the gradient noise in SGD. Langevin Dynamic In (He et al., 2019a; Zhu et al., 2019), the gradient noise is assumed to be drawn from a Gaussian distribution according to the central limit theorem (CLT), i.e., g(w) − g̃(w) ∼ N(0, C), where the covariance matrix C is a constant matrix for all w. Then SGD can be regarded as the numerical discretization of the following Langevin dynamic, dw_t = −g(w_t)dt + √η·C^{1/2} dB_t, (1) where B_t is a standard Brownian motion in R^d and √η·C^{1/2} dB_t is called the diffusion term. α-stable Process Simsekli et al. (2019) assume that the variance of the gradient noise is unbounded. By the generalized CLT, the distribution of the gradient noise is an α-stable distribution S(α, σ), where σ is the α-th moment of the gradient noise for a given α with α ∈ (0, 2]. Under this assumption, SGD is approximated by a stochastic differential equation (SDE) driven by an α-stable process. 2.1 RELATED WORK There are many works that approximate SGD by a Langevin dynamic, and most of the theoretical results are obtained for Langevin dynamics with a constant diffusion coefficient. From the aspect of optimization, the convergence rate of SGD and its optimal hyper-parameters have been studied in (Li et al., 2017; He et al., 2018; Liu et al., 2018) via optimal control theory. From the aspect of generalization, Chaudhari & Soatto (2018); Zhang et al. (2018); Smith & Le (2017) show that SGD implicitly regularizes the negative entropy of the learned distribution. Recently, the escaping efficiency of the Langevin dynamic from local minima has been studied (Zhu et al., 2019; Hu et al., 2019; Xie et al., 2020). He et al. (2019a) analyze the PAC-Bayes generalization error of the Langevin dynamic to explain the generalization of SGD. The solution of the Langevin dynamic with constant diffusion coefficient is a Gaussian process, which does not match the empirical observation that the distribution of parameters trained by SGD is heavy-tailed (Mahoney & Martin, 2019; Hodgkinson & Mahoney, 2020; Gurbuzbalaban et al., 2020). Simsekli et al. (2019); Şimşekli et al. (2019) assume that the variance of the stochastic gradient is infinite and regard SGD as the discretization of a stochastic differential equation (SDE) driven by an α-stable process. The escaping efficiency for this SDE is also shown in (Simsekli et al., 2019). However, these theoretical results are derived for dynamics with a constant diffusion term, although the gradient noise in SGD is state-dependent. There are some related works that analyze the state-dependent noise structure in SGD, such as the label noise in (HaoChen et al., 2020) and the multiplicative noise in (Wu et al., 2019b). These works propose new algorithms motivated by the noise structure, but they do not analyze the escaping behavior of the dynamic of SGD and its impact on generalization. Wu et al. 
(2018) analyze the escaping behavior of SGD by considering the fluctuations of the second-order derivatives and propose the concept of linear stability. In our work, we propose the power-law dynamic to approximate SGD and analyze its stationary distribution and mean escaping time. 3 APPROXIMATING SGD BY POWER-LAW DYNAMIC In this section, we study the (state-dependent) noise structure of SGD (in Section 3.1) and propose the power-law dynamic to approximate the dynamic of SGD. We first study the 1-dimensional power-law dynamic in Section 3.2 and extend it to the high-dimensional case in Section 3.3. 3.1 NOISE STRUCTURE OF STOCHASTIC GRADIENT DESCENT For non-convex optimization, we investigate the noise structure of SGD around local minima so that we can analyze the escaping efficiency from them. We first describe the quadratic basin where the local minimum is located. Suppose w∗ is a local minimum of the training loss L(w) and g(w∗) = 0. We call the ε-ball B(w∗, ε) with center w∗ and radius ε a quadratic basin if the loss function for w ∈ B(w∗, ε) is equal to its second-order Taylor expansion, L(w) = L(w∗) + (1/2)(w − w∗)^T H(w∗)(w − w∗). Here, H(w∗) is the Hessian matrix of the loss at w∗, which is (semi) positive definite. Then we analyze the gradient noise of SGD. The full gradient of the training loss is g(w) = H(w∗)(w − w∗). The stochastic gradient is g̃(w) = g̃(w∗) + H̃(w∗)(w − w∗) by Taylor expansion, where g̃(·) and H̃(·) are the stochastic versions of the gradient and Hessian calculated on the minibatch. The randomness of the gradient noise comes from two parts: g̃(w∗) and H̃(w∗), which reflect the fluctuations of the first-order and second-order derivatives of the model at w∗ over different minibatches, respectively. The following proposition gives the variance of the gradient noise. Proposition 1 For w ∈ B(w∗, ε) ⊂ R, the variance of the gradient noise is σ(g(w) − g̃(w)) = σ(g̃(w∗)) + 2ρ(g̃(w∗), H̃(w∗))(w − w∗) + σ(H̃(w∗))(w − w∗)², where σ(·) and ρ(·, ·) are the variance and covariance in terms of the minibatch. From Proposition 1, we can conclude that: (1) The variance of the noise is finite if g̃(w∗) and H̃(w∗) have finite variance, because ρ(g̃(w∗), H̃(w∗)) ≤ √(σ(g̃(w∗)) · σ(H̃(w∗))) according to the Cauchy–Schwarz inequality. For fixed w∗, a sufficient condition for g̃(w∗) and H̃(w∗) to have finite variance is that the training data x are sampled from a bounded domain. This condition is easy to satisfy because the training data are usually normalized to a bounded domain before training. In this case, the infinite-variance assumption on the stochastic gradient in the α-stable process is not satisfied. (2) The variance of the noise is state-dependent, which contradicts the assumption in the Langevin dynamic. Notations: For ease of presentation, we use C(w), σ_g, σ_H, ρ_{g,H} to denote σ(g(w) − g̃(w)), σ(g̃(w∗)), σ(H̃(w∗)), ρ(g̃(w∗), H̃(w∗)) in the following, respectively. We assume throughout that σ_g is a positive number. 3.2 POWER-LAW DYNAMIC According to the CLT, the gradient noise follows a Gaussian distribution if it has finite variance, i.e., g(w) − g̃(w) →_d N(0, C(w)) as b → ∞, (2) where →_d means "converges in distribution". Using a Gaussian distribution to model the gradient noise in SGD, the update rule of SGD can be written as: w_{t+1} = w_t − ηg(w_t) + ηξ_t, ξ_t ∼ N(0, C(w_t)). (3) Eq. 3 can be treated as the discretization of the following SDE, which we call the power-law dynamic: dw_t = −g(w_t)dt + √(ηC(w_t)) dB_t. (4) The power-law dynamic characterizes how the distribution of w changes as time goes on. 
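For concreteness, the discretized update in Eq. 3 with the state-dependent variance of Proposition 1 can be simulated directly. The following is a minimal 1-dimensional sketch; the values of H, σ_g, σ_H and ρ_{g,H} are illustrative choices, not quantities taken from the experiments.

```python
import numpy as np

def simulate_power_law_sgd(steps=20000, eta=0.01, H=1.0,
                           sigma_g=0.5, sigma_H=0.5, rho_gH=0.0,
                           w_star=0.0, w0=1.0, seed=0):
    """Simulate w_{t+1} = w_t - eta*g(w_t) + eta*xi_t with xi_t ~ N(0, C(w_t)),
    where g(w) = H*(w - w*) and C(w) = sigma_g + 2*rho_gH*(w - w*) + sigma_H*(w - w*)**2."""
    rng = np.random.default_rng(seed)
    w = w0
    trajectory = np.empty(steps)
    for t in range(steps):
        dev = w - w_star
        g = H * dev                                                     # full gradient in the quadratic basin
        C = max(sigma_g + 2 * rho_gH * dev + sigma_H * dev**2, 1e-12)   # state-dependent noise variance
        w = w - eta * g + eta * rng.normal(0.0, np.sqrt(C))
        trajectory[t] = w
    return trajectory

traj = simulate_power_law_sgd()
kurt = ((traj - traj.mean())**4).mean() / traj.var()**2   # kurtosis; equals 3 for a Gaussian
print("empirical kurtosis:", kurt)
```

Setting σ_H = 0 and ρ_{g,H} = 0 recovers the constant-diffusion (Langevin-type) update, so the same snippet can be used to contrast the two noise models.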
The distribution density of the parameter w at time t (i.e., p(w, t)) is determined by the Fokker-Planck equation (Zwanzig's type (Guo & Du, 2014)): ∂p(w, t)/∂t = ∇(p(w, t)g(w)) + (η/2) · ∇(C(w) · ∇p(w, t)). (5) The stationary distribution of the power-law dynamic is obtained by setting the left side of the Fokker-Planck equation to zero. The following theorem gives the analytic form of the stationary distribution of the power-law dynamic; it is heavy-tailed, and the tail of the distribution density decays at a polynomial rate in w − w∗. This is the reason why we call the stochastic differential equation in Eq. 4 the power-law dynamic. Theorem 2 The stationary distribution density of the 1-dimensional power-law dynamic (Eq. 4) is p(w) = (1/Z) · C(w)^{−H/(ησ_H)} · exp( 4Hρ_{g,H} · ArcTan( C′(w) / √(4σ_Hσ_g − 4ρ_{g,H}²) ) / ( ησ_H · √(4σ_Hσ_g − 4ρ_{g,H}²) ) ), (6) where C(w) = σ_g + 2ρ_{g,H}(w − w∗) + σ_H(w − w∗)², Z is the normalization constant and ArcTan(·) is the arctangent function. We now discuss the properties of p(w). The rate at which p(w) decreases as w moves away from the center w∗ is mainly determined by the term C(w)^{−H/(ησ_H)} (because the function ArcTan(·) is bounded), which is a polynomial function of w − w∗. Compared with the Gaussian distribution, whose density decreases at an exponential rate, the power-law distribution is less concentrated in the quadratic basin B(w∗, ε) and is heavy-tailed. We call H/(ησ_H) the tail-index of p(w) and denote it as κ in the following. We can conclude that the state-dependent noise results in a heavy-tailed distribution of parameters, which matches the observations in (Mahoney & Martin, 2019). The Langevin dynamic with constant diffusion can be regarded as a special case of the power-law dynamic with ρ_{g,H} = 0 and σ_H = 0; in this case, p(w) degenerates to a Gaussian distribution. Compared with the α-stable process, we do not assume infinite variance of the gradient noise and demonstrate another mechanism that results in a heavy-tailed distribution of parameters. We empirically observe the covariance matrix around a local minimum of the training loss on deep neural networks. The results are shown in Figure 1; readers can refer to Appendix 7.1 for more details. We have the following observations: (1) The traces of the covariance matrices for the deep neural networks can be well approximated by quadratic curves, which supports Proposition 1. (2) The minimum of the quadratic curve is located nearly at the local minimum w∗, which indicates that the coefficient of the first-order term ρ_{g,H} ≈ 0. Based on the fact that ρ_{g,H} is not the determining factor of the tail of the distribution in Eq. 6 and the observations in Figure 1, we consider the simplified form C(w) = σ_g + σ_H(w − w∗)². Corollary 3 If C(w) = σ_g + σ_H(w − w∗)², the stationary distribution of the 1-dimensional power-law dynamic (Eq. 4) is p(w) = (1/Z)·(1 + σ_Hσ_g^{−1}(w − w∗)²)^{−κ}, (7) where Z is the normalization constant and κ = H/(ησ_H) is the tail-index. The distribution density in Eq. 7 is known as the power-law κ distribution (Zhou & Du, 2014) (it is also called the q-Gaussian distribution in (Tsallis & Bukman, 1996)). As κ → ∞, the distribution density tends to a Gaussian, i.e., p(w) ∝ exp(−H(w − w∗)²/(ησ_g)). The power-law κ distribution becomes more heavy-tailed as κ becomes smaller; it then assigns higher probability to values far away from the center w∗. Intuitively, a smaller κ helps the dynamic escape from local minima faster. 
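The gap between the polynomial and the exponential tail can be checked numerically. The snippet below evaluates the unnormalized density of Eq. 7 against its Gaussian limit with the same curvature at the center; the values of H, η, σ_g and κ are arbitrary illustrative choices.

```python
import numpy as np

def power_law_unnorm(w, w_star, sigma_g, sigma_H, kappa):
    # Eq. 7: p(w) proportional to (1 + (sigma_H/sigma_g) * (w - w*)^2)^(-kappa)
    return (1.0 + (sigma_H / sigma_g) * (w - w_star)**2) ** (-kappa)

def gaussian_unnorm(w, w_star, H, eta, sigma_g):
    # kappa -> infinity limit: p(w) proportional to exp(-H (w - w*)^2 / (eta * sigma_g))
    return np.exp(-H * (w - w_star)**2 / (eta * sigma_g))

H, eta, sigma_g, kappa = 1.0, 0.1, 1.0, 2.0
sigma_H = H / (eta * kappa)            # from kappa = H / (eta * sigma_H)
for w in [1.0, 3.0, 10.0]:
    print(f"w = {w:4.1f}   power-law: {power_law_unnorm(w, 0.0, sigma_g, sigma_H, kappa):.3e}"
          f"   Gaussian: {gaussian_unnorm(w, 0.0, H, eta, sigma_g):.3e}")
```

Far from w∗ the power-law density is many orders of magnitude larger than the Gaussian one, which is exactly the heavy-tail behavior quantified by the tail-index κ.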
In the approximation of the dynamic of SGD, κ equals the signal (i.e., H(w∗)) to noise (i.e., ησ_H) ratio of the second-order derivative at w∗, and κ is linked to three factors: (1) the curvature H(w∗); (2) the fluctuation of the curvature over the training data; (3) the hyper-parameters, including η and the minibatch size b. Note that σ_H decreases linearly as the batch size b increases. 3.3 MULTIVARIATE POWER-LAW DYNAMIC In this section, we extend the power-law dynamic to the d-dimensional case. We first describe the covariance matrix C(w) of the gradient noise in SGD. We use subscripts to denote the elements of a vector or a matrix. We use Σ_g to denote the covariance matrix of g̃(w∗) and assume that Σ_g is isotropic (i.e., Σ_g = σ_g · I). We also assume that Cov(H̃_i(w∗), H̃_j(w∗)) are equal for all i, j. It can be shown that C(w) = Σ_g(1 + (w − w∗)^T Σ_H Σ_g^{−1}(w − w∗)). Similarly to the 1-dimensional case, we omit the first-order term (w − w∗) in C(w). Readers can refer to Proposition 10 in Appendix 7.2 for the detailed derivation. We suppose that the signal-to-noise ratio of H̃(w∗) can be characterized by a scalar κ, i.e., ηΣ_H = (1/κ) · H(w∗). Then C(w) can be written as C(w) = Σ_g(1 + (1/(ηκ))(w − w∗)^T H(w∗)Σ_g^{−1}(w − w∗)). (8) Theorem 4 Suppose w ∈ R^d and C(w) has the form in Eq. 8 for w ∈ B(w∗, ε). Then the stationary distribution density of the power-law dynamic is p(w) = (1/Z)·[1 + (1/(ηκ))(w − w∗)^T H(w∗)Σ_g^{−1}(w − w∗)]^{−κ} (9) for w ∈ B(w∗, ε), where Z is the normalization constant and κ satisfies ηΣ_H = (1/κ) · H(w∗). Remark: The multivariate power-law κ distribution (Eq. 9) is a natural extension of the 1-dimensional case. Actually, the assumptions on Σ_g and κ can be replaced by just assuming that Σ_g, H(w∗), Σ_H are codiagonalizable. Readers can refer to Proposition 11 in Appendix 7.2 for the derivation. 4 ESCAPING EFFICIENCY OF POWER-LAW DYNAMIC In this section, we analyze the escaping efficiency of the power-law dynamic from local minima and its relation to generalization. Specifically, we analyze the mean escaping time for w_t to escape from a basin. As shown in Figure 2, we suppose that there are two basins whose bottoms are denoted as a and c respectively, and the saddle point b is the barrier between the two basins. The barrier height is denoted as ∆L = L(b) − L(a). Definition 5 Suppose w_t starts at the local minimum a. We denote the time for w_t to first reach the saddle point b as inf{t > 0 | w_0 = a, w_t = b}. The mean escaping time τ is defined as τ = E_{w_t}[inf{t > 0 | w_0 = a, w_t = b}]. We first give the mean escaping time for the 1-dimensional case in Lemma 6 and then give the mean escaping time for the high-dimensional power-law dynamic in Theorem 7. To analyze the mean escaping time, we make the following assumptions. Assumption 1: The loss function around a critical point w∗ can be written as L(w) = L(w∗) + (1/2)(w − w∗)^T H(w∗)(w − w∗). Assumption 2: The system is in equilibrium near minima, i.e., ∂p(w, t)/∂t = 0. Assumption 3: (Low temperature assumption) The gradient noise is small, i.e., ησ_g ≪ ∆L. These three assumptions are commonly used in analyzing the escaping time of a dynamic (Xie et al., 2020; Zhou & Du, 2014). Because both a and b are critical points, we can apply Assumption 1 to obtain the loss surface around them. More discussion of the assumptions is given in Appendix 7.3.2. We suppose the basin a is quadratic and the variance of the noise has the form C(w) = σ_{g_a} + σ_{H_a}(w − a)², which can also be written as C(w) = σ_{g_a} + (2σ_{H_a}/H_a)(L(w) − L(a)). 
Furthermore, we suppose that C(w) = σ_{g_a} + (2σ_{H_a}/H_a)(L(w) − L(a)) on the whole escaping path from a to b (not just near the local minimum a). This means that the variance of the gradient noise becomes larger as the loss becomes larger. The following lemma gives the mean escaping time of the power-law dynamic for the 1-dimensional case. Lemma 6 Suppose that Assumptions 1-3 are satisfied and C(w) = σ_{g_a} + (2σ_{H_a}/H_a)(L(w) − L(a)) on the whole escaping path from a to b. The mean escaping time of the 1-dimensional power-law dynamic is τ = 2π / ((1 − 1/(2κ))·√(H_a|H_b|)) · (1 + (2/(κησ_{g_a}))·∆L)^{κ − 1/2}, (10) where κ = H_a/(ησ_{H_a}) > 1/2, and H_a and H_b are the second-order derivatives of the training loss at the local minimum a and at the saddle point b, respectively. The proof of Lemma 6 is based on the results in (Zhou & Du, 2014); we provide a full proof in Appendix 7.3.1. For the dynamic near the saddle point, we simply assume that it is the same as that near the local minimum. This assumption is not necessary, and we discuss the extension to a more complex dynamic in Appendix 7.3.3. We summarize the mean escaping times of the power-law dynamic and of the dynamics from previous works in Table 1. Based on these results, we have the following discussions. Comparison with other dynamics: (1) Both the power-law dynamic and the Langevin dynamic can escape sharp minima faster than flat minima, where the sharpness is measured by H_a and a larger H_a corresponds to a sharper minimum. The power-law dynamic improves the order of the barrier height (i.e., ∆L) from exponential to polynomial compared with the Langevin dynamic, which implies that SGD can escape from deep basins much more efficiently. (2) The mean escaping time for the α-stable process is independent of the barrier height, but it is polynomial in the width of the basin (i.e., width = |b − a|). Compared with the α-stable process, the result for the power-law dynamic is superior in the sense that it is also polynomial in the width (if ∆L ≈ O(|b − a|²)), and the power-law dynamic does not rely on the infinite-variance assumption. Based on Lemma 6, we analyze the mean escaping time for the d-dimensional case. Under the low temperature condition, the probability density concentrates only along the most possible escaping paths in the high-dimensional landscape. For a rigorous definition of the most possible escaping paths, readers can refer to Section 3 in (Xie et al., 2020). For simplicity, we consider the case where there is only one most possible escaping path between basin a and basin c. Specifically, the Hessian at the saddle point b has only one negative eigenvalue, and the most possible escaping direction is the direction corresponding to this negative eigenvalue. Theorem 7 Suppose that Assumptions 1-3 are satisfied. For w ∈ R^d, we suppose C(w) = Σ_{g_a} + (2/(ηκ))(L(w) − L(a)) on the whole escaping path from a to b and that there is only one most possible path between basin a and basin c. The mean escaping time for the power-law dynamic escaping from basin a to basin c is τ = 2π·√(−det(H_b)) / ((1 − d/(2κ))·√(det(H_a))·|H_{be}|) · (1 + (1/(ηκσ_e))·∆L)^{κ − 1/2}, (11) where e indicates the most possible escaping direction, H_{be} is the only negative eigenvalue of H_b, σ_e is the eigenvalue of Σ_{g_a} that corresponds to the escaping direction, ∆L = L(b) − L(a), and det(·) is the determinant of a matrix. Remark: In the d-dimensional case, the flatness is measured by det(H_a). If H_a has zero eigenvalues, we can replace H_a by H_a^+ in the above theorem, where H_a^+ is obtained by projecting H_a onto the subspace spanned by the eigenvectors corresponding to the positive eigenvalues of H_a. This is because, by Taylor expansion, the loss L(w) only depends on the positive eigenvalues and the corresponding eigenvectors of H_a, i.e., L(w) = L(a) + (1/2)(w − a)^T H_a(w − a) = L(a) + (1/2)(P(w − a))^T Λ_{H_a^+} P(w − a), where Λ_{H_a^+} is a diagonal matrix composed of the non-zero eigenvalues of H_a and the operator P(·) projects a vector onto the subspace corresponding to the non-zero eigenvalues of H_a. Therefore, the dimension d in Theorem 7 can be regarded as the dimension of the subspace spanned by the directions with large eigenvalues. It has been observed that most of the eigenvalues of H are very small (Sagun et al., 2016). Therefore, d will not be a large number, and the power-law dynamic in the multi-dimensional case inherits the benefits of the 1-dimensional case compared with the Langevin dynamic and the α-stable process. 
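To get a feel for the polynomial-versus-exponential gap, the sketch below evaluates the mean escaping time of Lemma 6 alongside a Kramers-type expression for a constant-diffusion Langevin dynamic. The Langevin formula used here (exponential in the barrier height) and all numerical values are illustrative assumptions for comparison, not entries copied from Table 1.

```python
import numpy as np

def tau_power_law(delta_L, Ha, Hb, eta, sigma_ga, kappa):
    """Mean escaping time of the 1-D power-law dynamic (Lemma 6, Eq. 10)."""
    prefactor = 2 * np.pi / ((1 - 1 / (2 * kappa)) * np.sqrt(Ha * abs(Hb)))
    return prefactor * (1 + 2 * delta_L / (kappa * eta * sigma_ga)) ** (kappa - 0.5)

def tau_langevin(delta_L, Ha, Hb, eta, sigma_ga):
    """Kramers-type escaping time for constant diffusion (assumed form, exponential in delta_L)."""
    return 2 * np.pi / np.sqrt(Ha * abs(Hb)) * np.exp(2 * delta_L / (eta * sigma_ga))

Ha, Hb, eta, sigma_ga, kappa = 5.0, -5.0, 0.1, 1.0, 2.0
for delta_L in [0.5, 1.0, 2.0, 4.0]:
    print(f"barrier {delta_L:3.1f}:  power-law tau = {tau_power_law(delta_L, Ha, Hb, eta, sigma_ga, kappa):10.3e}"
          f"   Langevin tau = {tau_langevin(delta_L, Ha, Hb, eta, sigma_ga):10.3e}")
```

The power-law escaping time grows only polynomially with ∆L, while the constant-diffusion time grows exponentially, which is the qualitative difference emphasized in the comparison above.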
The next theorem gives an upper bound on the generalization error of the stationary distribution of the power-law dynamic, which shows that a flatter minimum has a smaller generalization error. Theorem 8 Suppose that w ∈ R^d and κ > d/2. For δ > 0, with probability at least 1 − δ, the stationary distribution of the power-law dynamic has the following generalization error bound, E_{w∼p(w), x∼P(x)} ℓ(w, x) ≤ E_{w∼p(w)} L(w) + √( (KL(p||p′) + log(1/δ) + log n + 2) / (n − 1) ), where KL(p||p′) ≤ (1/2)·log(det(H)/det(Σ_g)) + (Tr(ηΣ_gH^{−1}) − 2d) / (4(1 − (1/κ)(d/2 − 1))) + (d/2)·log(2/η), p(w) is the stationary distribution of the d-dimensional power-law dynamic, p′(w) is a prior distribution which is selected to be the standard Gaussian distribution, P(x) is the underlying distribution of the data x, and det(·) and Tr(·) are the determinant and trace of a matrix, respectively. We make the following remarks on Theorem 8. For the 1-dimensional case, if H > η/(2(1 + 1/(2κ))), the KL divergence decreases as H decreases. For d > 1 and fixed Tr(Σ_gH^{−1}) and det(Σ_g), the generalization error (i.e., E_{w∼p(w), x∼P(x)} ℓ(w, x) − E_{w∼p(w)} L(w)) decreases as det(H) decreases, which indicates that a flatter minimum has a smaller generalization error. Moreover, if 2d > Tr(ηΣ_gH^{−1}), the generalization error decreases as κ increases. When κ → ∞, the generalization error tends to that of the Langevin dynamic. Combining the mean escaping time and the generalization error bound, we can conclude that the state-dependent noise makes SGD escape from sharp minima faster and implicitly tend toward flatter models that generalize better. 5 EXPERIMENTS In this section, we conduct experiments to verify the theoretical results. We first study how well the power-law κ distribution fits the distribution of parameters trained by SGD. Then we compare the escaping behaviors of the power-law dynamic, the Langevin dynamic and SGD. 5.1 FITTING PARAMETER DISTRIBUTION USING POWER-LAW DISTRIBUTION We investigate the distribution of parameters trained by SGD on deep neural networks and use the power-law κ distribution to fit the parameter distribution. We first use SGD to train various types of deep neural networks until convergence. For each network, we run SGD with different minibatch sizes over the range {64, 256, 1024}. For the settings of the other hyper-parameters, readers can refer to Appendix 7.5.2. We plot the distribution of the model parameters at a given layer using a histogram. 
Next, we use the power-law κ distribution to fit the distribution of the parameters and estimate the value of κ via the built-in function "TsallisQGaussianDistribution[]" in the Mathematica software. We show results for LeNet-5 on the MNIST dataset and ResNet-18 on the CIFAR-10 dataset (LeCun et al., 2015; He et al., 2016b) in this section, and put results for other network architectures in Appendix 7.5.2. In Figure 3, we report the generalization error (i.e., test error minus training error) and the values of κ that best fit the histograms (the training errors under the six settings are almost zero). We have the following observations: (1) The distribution of the parameters trained by SGD can be well fitted by the power-law κ distribution (blue curve). (2) As the minibatch size becomes larger, κ becomes larger. This is because the noise σ_H decreases linearly as the minibatch size grows and κ = H/(ησ_H). (3) As κ becomes smaller, the generalization error becomes lower. This indicates that κ also plays a role as an indicator of generalization. These results are consistent with the theory in Section 4. 5.2 COMPARISON ON ESCAPING EFFICIENCY We use a 2-dimensional model to simulate the escaping efficiency from minima for the power-law dynamic, the Langevin dynamic and SGD. We design a non-convex 2-dimensional function written as L(w) = (1/n)·∑_{i=1}^{n} ℓ(w − x_i), where ℓ(w) = (1/5)·∑_{j=1}^{2} |w_j − 1|^{2.5} · |w_j + 1|^{3} and the training data x_i ∼ N(0, 0.01·I_2). We regard the following optimization iterates as the numerical discretization of the power-law dynamic, w_{t+1} = w_t − ηg(w_t) + ηλ_2·√(1 + λ_1(w_t − w∗)²) ⊙ ξ, where ξ ∼ N(0, I_2), λ_1, λ_2 are two hyper-parameters, ⊙ stands for the Hadamard product, and the square and square root are taken element-wise. Note that if we set λ_1 = 0, the iterates can be regarded as a discretization of the Langevin dynamic. We set the learning rate η = 0.025 and take 500 iterations in each run. In order to match the trace of the covariance matrix of the stochastic gradient at the minimum w∗ with the methods above, λ_2 is chosen to satisfy Tr(Cov(λ_2ξ)) = Tr(Cov(g̃(w∗))). We compare the success rate of escaping for the power-law dynamic, the Langevin dynamic and SGD by repeating the experiment 100 times. To analyze the noise term λ_1, we choose different λ_1 and evaluate the corresponding success rate of escaping, as shown in Figure 4(c). The results show that: (1) there is a positive correlation between λ_1 and the success rate of escaping; (2) the power-law dynamic can mimic the escaping efficiency of SGD, while the Langevin dynamic cannot. We then scale the loss function by 0.9 to make the minima flatter and repeat all the algorithms under the same setting. The success rate for the scaled loss function is shown in Figure 4(d). We observe that all dynamics escape flatter minima more slowly. 6 CONCLUSION In this work, we study the dynamic of SGD by investigating the state-dependent variance of the stochastic gradient. We propose the power-law dynamic with state-dependent diffusion to approximate the dynamic of SGD. We analyze the escaping efficiency from local minima and a PAC-Bayes generalization error bound for the power-law dynamic. The results indicate that state-dependent noise helps SGD escape from poor local minima faster and generalize better. We present direct empirical evidence to support our theoretical findings. This work may motivate many interesting research topics, for example, non-Gaussian state-dependent noise, new types of state-dependent regularization tricks in deep learning algorithms, and more accurate characterizations of the loss surface of deep neural networks. We will investigate these topics in future work. 
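As a complement to the description of Section 5.2, the 2-dimensional escaping experiment can be reproduced along the lines sketched below. The loss, the update rule and η follow the text above; the value of λ_2, the position of the minimum and the escaping criterion (leaving a fixed radius around the minimum) are illustrative assumptions made here, not the exact choices of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n, eta, steps, runs = 100, 0.025, 500, 100
X = rng.normal(0.0, 0.1, size=(n, 2))             # training data x_i ~ N(0, 0.01 * I_2)

def grad_L(w):
    # L(w) = (1/n) sum_i l(w - x_i),  l(w) = (1/5) sum_j |w_j - 1|^2.5 * |w_j + 1|^3
    v = w[None, :] - X
    dl = (2.5 * np.sign(v - 1) * np.abs(v - 1)**1.5 * np.abs(v + 1)**3
          + np.abs(v - 1)**2.5 * 3 * np.sign(v + 1) * np.abs(v + 1)**2) / 5.0
    return dl.mean(axis=0)

def escape_rate(lambda1, lambda2, w_star, radius=1.0):
    escaped = 0
    for _ in range(runs):
        w = w_star.copy()
        for _ in range(steps):
            xi = rng.normal(size=2)
            noise = eta * lambda2 * np.sqrt(1.0 + lambda1 * (w - w_star)**2) * xi  # element-wise
            w = w - eta * grad_L(w) + noise        # lambda1 = 0 recovers the Langevin-type update
            if np.linalg.norm(w - w_star) > radius:
                escaped += 1
                break
    return escaped / runs

w_star = np.array([1.0, 1.0])                      # one minimum of the loss lies near (1, 1)
print("lambda1 = 0 (Langevin-like):", escape_rate(0.0, 0.05, w_star))
print("lambda1 = 32 (power-law):   ", escape_rate(32.0, 0.05, w_star))
```

In the experiment above, λ_2 is chosen so that Tr(Cov(λ_2ξ)) matches the trace of the stochastic-gradient covariance at w∗; the fixed value 0.05 in this sketch only serves to make the two noise models directly comparable.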
7 APPENDIX 7.1 POWER-LAW DYNAMIC AND STATIONARY DISTRIBUTION Theorem 9 (Theorem 2 in main paper) The stationary distribution density for 1-dimensional powerlaw dynamic (Eq.4) is p(w) = 1 Z (C(w)) − H ησH exp H ( 4ρg,H ·ArcTan ( C′(w)/ √ 4σHσg − 4ρ2g,H )) ησH √ 4σHσg − 4ρ2g,H , whereC(w) = σg+2ρg,H(w−w∗)+σH(w−w∗)2, Z is the normalization constant andArcTan(·) is the arctangent function. Proof: We denote the function H(4ρg,H ·ArcTan(C′(w)/ √ 4σHσg−4ρg,H)) ησH √ 4σHσg−4ρ2g,H as h(w). According to the Fokker-Planck equation, p(w) satisfies 0 = ∇p(w)g(w) + η 2 · ∇ · (C(w)∇p(w)) = ∇ · [ (p(w) · ∇L(w)) + η 2 C(w)∇p(w) ] = ∇ · [η 2 C(w) − HησH +1eh(w)∇(C(w) H ησH · e−h(w) · p(w)) ] Readers can check the third equality by calculating∇(C(w) H ησH · e−h(w) · p(w)) with C(w) = σg + 2ρg,H(w−w∗)+σH(w−w∗)2. Because the left side equals zero, we have C(w) H ησH ·e−h(w) ·p(w) equals to constant. So p(w) ∝ C(w)− H ησH ·eh(w) ·p(w). So we can get the conclusion in the theorem. Theorem 10 (Corollary 3 in main paper) If C(w) = σg + σH(w−w∗)2, the stationary distribution density of power-law dynamic is p(w) = 1 Z (1 + σHσ −1 g (w − w∗)2)−κ, (12) where Z = ∫ w (1 + σHσ −1 g (w − w∗)2)−κdw is the normalization constant and κ = HησH is the tail-index. Proof: According to the Fokker-Planck equation, p(w) satisfies 0 = ∇p(w)g(w) + η 2 · ∇ · (C(w)∇p(w)) = ∇(p(w) · ∇L(w)) + η 2 ∇ · (σg + 2σH H (L(w)− L(w∗)))∇p(w) = ∇ · η 2 C(w)(1 + 2σH Hσg (L(w)− L(w∗))) H −ησH ∇(1 + 2σH Hσg (L(w)− L(w∗))) H ησH p(w) Because the left side equals zero, we have (1 + 2σHHσg (L(w)− L(w ∗))) H ησH p(w) equals to constant. So p(w) ∝ (1 + 2σHHσg (L(w)− L(w ∗))) H −ησH . So we can get the conclusion in the theorem. We plot the un-normalized distribution density for 1-dimensional power-law dynamics with different κ in Figure 5. For the four curves, we set β = 10. We set κ = 1, 0.5, 0.1, 0 and use green, red, purple and blue line to illustrate their corresponding density function, respectively. When κ = 0, it is Gaussian distribution. From the figure, we can see that the tail for power-law κ-distribution is heavier than Gaussian distribution. Actually, for any given time t, the distribution p(w, t) for wt that satisfies power-law dynamic has analytic form, i.e., p(w, t) ∝ (1 + Hηκσ(t) (w −w(t)) 2)−κ, where w(t) = w∗ + (w0 −w∗)e−Ht and σ(t) is a function of σg and t. Readers can refer Eq.18 - Eq.23 in (Tsallis & Bukman, 1995) for the detailed expression. 7.2 SGD AND MULTIVARIATE POWER-LAW DYNAMIC The following proposition shows the covariance of stochastic gradient in SGD in d-dimensional case. We use the subscripts to denote the elements in a vector or a matrix. Proposition 11 For w ∈ Rd, we use C(w) to denote the covariance matrix of stochastic gradient g̃(w) = g̃(w∗)+H̃(w−w∗) and Σ to denote the covariance matrix of g̃(w∗). IfCov(g̃i(w∗), H̃jk) = 0,∀i, j, k, we have Cij(w) = Σij + (w − w∗)TA(ij)(w − w∗), (13) where Σij = Cov(g̃i(w∗), g̃j(w∗)), A(ij) is a d × d matrix with elements A(ij)ab = Cov(H̃ia, H̃jb) with a ∈ [d], b ∈ [d]. Eq.13 can be obtained by directly calculating the covariance of g̃i(w) and g̃j(w) where g̃i(w) = g̃i(w ∗) + ∑d a=1 H̃ia(wa − w∗a), g̃j(w) = g̃j(w∗) + ∑d b=1 H̃jb(wb − w∗b ). In order to get a analytic tractable form of C(w), we make the following assumptions: (1) If Σij = 0, A(ij) is a zero matrix; (2) For Σij 6= 0, A (ij) Σij are equal for all i ∈ [d], j ∈ [d]. The first assumption is reasonable because both Σij andA(ij) reflect the dependence of the derivatives along the i-th direction and j-th direction. 
Let ΣH = A (ij) Σij ,C(w) can be written asC(w) = Σg(1+(w−w∗)TΣH(w−w∗)). The d-dimensional power-law dynamic is written as dwt = −H(w − w∗)dt+ √ ηC(w)dBt, (14) where C(w) = Σg(1 + (w − w∗)TΣH(w − w∗)) which is a symmetric positive definite matrix that C(w)1/2 exists. The following proposition shows the stationary distribution of the d-dimensional power-law dynamic. Proposition 12 Suppose Σg,ΣH , H are codiagonalizable, i.e., there exist orthogonal matrix Q and diagonal matrices Λ,Γ,Π to satisfy Σg = QTΛQ,ΣH = QTΓQ,H = QTΠQ. Then, the stationary distribution of power-law dynamic is p(w) = 1 Z (1 + (w − w∗)TΣH(w − w∗))−κ, (15) where Z is the normalization constant and κ = Tr(H)ηTr(ΣHΣg) . Proof: Under the codiagonalization assumption on Σg,ΣH , H , Eq.15 can be rewritten as dvt = −Πvtdt+ √ ηΛ(1 + vTt Γvt)dBt if we let vt = Q(wt − w∗). We use φ(v) = ηC(v)2 = η 2 Λ(1 + v TΓv), the stationary probability density p(v) satisfies the Smoluchowski equation: 0 = d∑ i=1 ∂ ∂vi (Πivi · p(v)) + d∑ i=1 ∂ ∂vi · ( φi(w) ∂ ∂vi p(v) ) (16) = d∑ i=1 ∂ ∂vi (Πi·vi · p(v)) + d∑ i=1 ∂ ∂vi · ( ηΛi 2 (1 + vTΓv) ∂ ∂vi p(v) ) . (17) According to the result for 1-dimensional case, we have the expression of p(v) is p(v) ∝ (1 + vTΓv)−κ. To determine the value of κ, we put p(v) in the Smoluchowski equation to obtain d∑ i=1 Πip(v)− 2κ d∑ i=1 Πivi · Γivi · (1 + vTΓv)−κ−1 = d∑ i=1 ∂ ∂vi ( ηΛiκ(1 + v TΓv)−κ · Γivi ) = d∑ i=1 ( ηΛiκ(1 + v TΓv)−κ · Γi ) − 2 d∑ i=1 ( ηΛiκ 2(1 + vTΓv)−κ−1 · (Γivi)2 ) . The we have ∑d i=1 Πi = ηκ ∑d i=1 ΛiΓi. So we have κ = Tr(H) ηTr(ΣHΣg) . According to Proposition 11, we can also consider another assumption on Σg,ΣH , H without assuming their codiagonalization. Instead, we assume (1) If Σij = 0, A(ij) is a zero matrix; (2) For Σij 6= 0,A(ij) are equal for all i ∈ [d], j ∈ [d] and we denoteA(ij) = ΣH . We suppose η ·ΣH = κH . (3) Σg = σg · Id which is isotropic. Under these assumptions, we can get the following theorem. Theorem 13 (Theorem 4 in main paper) If w is d-dimensional and C(w) has the form in Eq.(8). The stationary distribution density of multivariate power-law dynamic is p(w) = 1 Z [1 + 1 ηκ (w − w∗)THΣ−1g (w − w∗)]−κ (18) where Z = ∫∞ −∞[1 + 1 ηκ (w − w ∗)THΣ−1g (w − w∗)]−κdw is the normalization constant. The proof for Theorem 12 is similar to that for Proposition 11. Readers can check that p(w) satisfies the Smoluchowski equation. An example to illustrate why C(w) is diagonally dominant. In Theorem 13, C(w) is assumed to be diagonally dominant. Diagonally dominant indicates that the variance of each dimension of g̃(w) is significantly larger than the covariance of two different dimensions of g̃(w). Consider a two layer fully-connected linear neural network fw,v(x) = wvx where w ∈ R1×m, v ∈ Rm×d, x ∈ Rd and h(·) is the ReLU activation. We consider the regression loss `(w, v) = 12 (y − fw,v(x)) 2. The gradient of wi and vjk can be written as ∂`(w, v) ∂wi = (fw,v(x)− y) · vix (19) ∂`(w, v) ∂vjk = (fw,v(x)− y) · wjxk, (20) where vi denotes the i-th row of matrix v. Suppose that the initialization of w and v is: wi i.i.d∼ N(0, δ1) and vij i.i.d∼ N(0, δ2) . We also assume that Exi = Exj = 0 and xi, xj are independent with each other for i 6= j where xi is the i-th dimension. We have Ew,v ∂`(w, v) ∂wi ∂`(w, v) ∂wj = Ew,v(fw,v(x)− y)2 · vix · vjx (21) = Ew,vy2 · vix · vjx+ Ew,v m∑ i=1 (wivix) 2 · vix · vjx− 2Ew,v( m∑ i=1 ywivix) · vix · vjx (22) Because the independence of vi, vj and their expectations are zero, we can obtain Ew,v ∂`(w,v)∂wi ∂`(w,v) ∂wj = 0 for i 6= j. 
Similarly, we can get Ew,v ∂`(w,v)∂wi ∂`(w,v) ∂vjk = 0 and Ew,v ∂`(w,v)∂vj′k′ ∂`(w,v) ∂vjk = 0 for (j, k) 6= (j′, k′). The above analyses show that the gradients for different dimensions are independent at initialization. It has been observed that many weights are kept random during training because of the over-parameterization Balduzzi et al. (2017). So, diagonalization dominant property of C(w) is reasonable. 7.3 SUPPLEMENTARY MATERIALS FOR RESULTS IN SECTION 4 7.3.1 PROOF FOR MEAN ESCAPING TIME Lemma 14 (Lemma 6 in main paper) We suppose C(w) = σga + 2σHa Ha (L(w)− L(a)) on the whole escaping path from a to b. The mean escaping time of the 1-dimensional power-law dynamic is, τ = 2π (1− 1 2κ ) √ Ha|Hb| ( 1 + 2 κησga ∆L )κ− 1 2 , (23) where κ = HaησHa , Ha, Hb are the second-order derivatives of training loss at local minimum a and saddle point b. Proof: According to (Van Kampen, 1992), the mean escaping time τ is expressed as τ = P (w∈Va)∫ Ω JdΩ , where Va is the volume of basin a, J is the probability current that satisfies −∇J(w, t) = ∂ ∂w (g(w) · p(w, t)) + ∂ ∂w ( φ(w) ∂p(w, t) ∂w ) = ∂ ∂w φ(w) · (1 + µ σg ∆L(w) )−κ ∂ ((1 + µ σg ∆L(w) )κ p(w, t) ) ∂w , where φ(w) = η2C(w) and µ = 2σHa Ha , σg = σga and ∆L(w) = L(w) − L(a). Integrating both sides, we obtain J(w) = −φ(w) · ( 1 + µ σg ∆L(w) )−κ ∂((1+ µσg ∆L(w))κp(w,t)) ∂w . Because there is no field source on the escape path, J(w) is fixed constant on the escape path. Multiplying φ(w)−1 · ( 1 + µσg ∆L(w) )κ on both sizes, we have J · ∫ c a φ(w)−1 · ( 1 + µ σg ∆L(w) )κ dw = − ∫ c a ∂ (( 1 + µσg ∆L(w) )κ p(w, t) ) ∂w dw = −0 + p(a). Then we get J = p(a)∫ c a φ(w)−1· ( 1+ µσg ∆L(w) )κ dw . As for the term ∫ c a φ(w)−1 · ( 1 + µσg ∆L(w) ) 1 κ dw, we have ∫ c a φ(w)−1 · ( 1 + µ σg ∆L(w) )κ dw (24) = 2 ησg ∫ c a ( 1 + µ σg ∆L(w) )−1+κ dw = 2 ησg ∫ b c ( 1 + µ σg (∆L(b)− 1 2 |Hb|(w − b)2) )−1+κ dw = 2 ησg ∫ b c ( 1 + µ σg (∆L(b)− 1 2 |Hb|(w − b)2) )−1+κ dw = 2 ησg (1 + µ σg ∆L(b))−1+κ ∫ b c ( 1− µ σg · 1 2 |Hb|(w − b)2 1 + µ σg ∆L(b) )−1+κ dw = 2 ησg (1 + µ σg ∆L(b))−1+κ · ( 1 2 µ σg |Hb| 1 + µ σg ∆L(b) )−1/2 ∫ 1 0 y−1/2(1− y)−1+κdy = 2 ησg (1 + µ σg ∆L(b))− 1 2 +κ √ 2σg µ|Hb| B( 1 2 , κ), where the third formula is based on the second order Taylor expansion. Under the low temperature assumption, we can use the second-order Taylor expansion around the saddle point b. As for the term P (w ∈ Va), we have P (w ∈ Va) = ∫ Va p(w)dV = ∫ w∈Va p(a)(1 + µ σg ∆L(w))−κ = p(a) √ 2σg µHa B( 1 2 , κ − 1 2 ), where we use Taylor expansion of L(w) near local minimum a. Then we have τ = P (w∈Va)∫ Ω JdΩ = P (w∈Va)J because J is a constant. Combining all the results, we can get the result in the lemma. Theorem 15 (Theorem 7 in main paper) Suppose w ∈ Rd and there is only one most possible path path between basin a and the outside of basin a. The mean escaping time for power-law dynamic escaping from basin a to the outside of basin a is τ = 2π √ −det(Hb) (1− d 2κ ) √ det(Ha) 1 |Hbe| ( 1 + 1 ηκσe ∆L )κ− 1 2 , (25) where e indicates the most possible escape direction, Hbe is the only negative eigenvalue of Hb, σe is the eigenvalue of Σga corresponding to the escape direction and ∆L = L(b)− L(a). Proof: According to (Van Kampen, 1992), the mean escaping time τ is expressed as τ = P (w∈Va)∫ Ω JdΩ , where Va is the volume of basin a, J is the probability current that satisfies −∇ · J(w, t) = ∂p(w,t)∂t . 
Under the low temperature assumption, the probability current J concentrates along the direction corresponding the negative eigenvalue of Hbe, and the probability flux of other directions can be ignored. Then we have∫ Ω JdΩ = Je · ∫ Ω ( 1 + 1 ηκ (w − b)T (HbΣ−1g )⊥e(w − b) )−κ+ 12 dΩ, (26) where Je = p(a) · η(1+µσe∆L(b)) −κ+ 1 2 √ µσe|Hbe| 2 √ 2B( 12 ,κ) which is obtained by the calculation of Je for 1-dimensional case in the proof of Lemma 13, and (·)⊥e denotes the directions perpendicular to the escape direction e. Suppose HbΣ−1g are symmetric matrix. Then there exist orthogonal matrix Q and diagonal matrix Λ = diag(λ1, · · · , λd) that satisfy HbΣ−1g = QTΛQ. We also denote v = Q(w − b). We define a sequence as Tk = 1 + 1ηκ · ∑d j=k λjv 2 j for k = 1, · · · , d. As for the term∫ Ω ( 1 + 1ηκ (w − b) T (HbΣ −1 g ) ⊥e(w − b) )−κ+ 12 dΩ, we have∫ Ω ( 1 + 1 ηκ (w − b)T (HbΣ−1g )⊥e(w − b) )−κ+ 12 dΩ = ∫ (1 + 1 ηκ · vTΛv)−κ+ 12 dw = ∫ (1 + 1 ηκ · d∑ j 6=e λjv 2 j ) −κ+ 12 dv =((ηκ)−1λ1) − 12 ∫ T −κ+ 12 2 B( 1 2 , κ)dv = d−2∏ j=0 ((ηκ)−1λj) − 12B( 1 2 , κ− j 2 ) = d−2∏ j=0 ((ηκ)−1λj) − 12 · √ πdΓ(κ− d2 ) Γ(κ) = √ (ηκπ)d−1 · Γ(κ− d−22 ) Γ(κ+ 12 ) √ det((HbΣ −1 g )⊥e) . As for the term P (w ∈ Va), we have P (w ∈ Va) = ∫ Va p(w)dV = p(a) ∫ w∈Va ( 1 + (w − w∗)THaΣ−1g (w − w∗) ) dw (27) =p(a) · √ (ηκπ)d · Γ(κ− d2 ) Γ(κ) √ det((HaΣ −1 g )) (28) where we use Taylor expansion of L(w) near local minimum a. Combined the results for P (w ∈ Va) and J , we can get the result. 7.3.2 FURTHER EXPLANATION ABOUT ASSUMPTION 1-3 We adopt the commonly used assumptions to analyze mean escaping time for dynamic system (Xie et al., 2020; Smith & Le, 2017; Zhou & Du, 2014). Assumption 2 can be replaced by weaker assumption that the system is quasi-equilibrium which is adopted in (Xie et al., 2020). For the differences between quasi-equilibrium and equilibrium, readers can refer to (Xie et al., 2020) for detailed discussions. Assumption 3 is commonly used (Xie et al., 2020; Zhou & Du, 2014). Under Assumption 3, the probability densities will concentrate around minima and the most possible paths. Assumption 3 will make the second order Taylor approximation more reasonable. 7.3.3 EXTENSION TO MORE COMPLEX DYNAMIC ON THE ESCAPING PATH In Lemma 6, we assume that C(w) = σga + 2σHa Ha (L(w) − L(a)) on the whole escaping path from a to b for ease of comparison and presentation. This assumption is not necessary and we can assume a different dynamic near saddle point b. Specially, we can assume the point z is the midpoint on the most possible path beween a and b, where L(z) = (1 − z)L(a) + zL(b). The dynamic with C(w) = σga + 2σHa Ha (L(w) − L(a)) dominates the path a → z and the dynamic with C(w) = σgb + 2σHb Hb (L(b)−L(w)) dominates the path z → b. Then only two things will be changed in proof of Lemma 6. First, we need to change the stationary distribution near saddle points according to its own dynamic in Eq.20. Second, we need to change the integral about probability density on the whole path to sum of integrals on these two sub-paths. Similar proof techniques are adopted for analyzing escaping time of Langevin dynamic in proof of Theorem 4.1 in the work Xie et al. (2020). Since the proof is analogous, we omit the details here. 7.4 PAC-BAYES GENERALIZATION BOUND We briefly introduce the basic settings for PAC-Bayes generalization error. The expected risk is defined as Ex∼P(x)`(w, x). Suppose the parameter follows a distribution with density p(w), the expected risk in terms of p(w) is defined as Ew∼p(w),x∼P(x)`(w, x). 
The empirical risk in terms of p(w) is defined as Ew∼p(w)L(w) = Ew∼p(w) 1n ∑n i=1 `(w, xi). Suppose the prior distribution over the parameter space is p′(w) and p(w) is the distribution on the parameter space expressing the learned hypothesis function. For power-law dynamic, p(w) is its stationary distribution and we choose p′(w) to be Gaussian distribution with center w∗ and covariance matrix I . Then we can get the following theorem. Theorem 16 (Theorem 8 in main paper) For w ∈ Rd, we select the prior distribution p′(w) to be standard Gaussian distribution. For δ > 0, with probability at least 1− δ, the stationary distribution of power-law dynamic has the following generalization error bound, Ew∼p(w),x∼P(x)`(w, x) ≤ Ew∼p(w)L(w) + √ KL(p||p′) + log 1δ + log n+ 2 n− 1 , (29) whereKL(p||p′) ≤ 12 log det(H) det(Σg) + Tr(ηΣgH −1)−2d 4(1− 1κ ( d 2−1)) + d2 log 2 η andP(x) is the underlying distribution of data x. Proof: Eq.(29) directly follows the results in (McAllester, 1999). Here we calculate the Kullback–Leibler (KL) divergence between prior distribution and the stationary distribution of power-law dynamic. The prior distribution is selected to be standard Gaussion distribution with distribution density p′(w) = 1√ (2π)d det (I) exp{− 12 (w−w ∗)T I(w−w∗)}. The posterior distribution density is the stationary distribution for power-law dynamic, i.e., p(w) = 1Z ·(1+ 1 ηκ ·(w−w ∗)THΣ−1g (w−w∗))−κ. Suppose HΣ−1g are symmetric matrix. Then there exist orthogonal matrix Q and diagonal matrix Λ = diag(λ1, · · · , λd) that satisfy HΣ−1g = QTΛQ. We also denote v = Q(w − w∗). We have log ( p(w) p′(w) ) = −κ log(1 + 1 ηκ · (w − w∗)THΣ−1g (w − w∗))− logZ + 1 2 (w − w∗)T I(w − w∗) + d 2 log 2π The KL-divergence is defined as KL(p(w)||p′(w)) = ∫ w p(w) log ( p(w) p′(w) ) dw. Putting v = Q(w − w∗) in the integral, we have KL(p(w)||p′(w)) = d 2 log 2π − logZ + 1 2Z ∫ v vT v ( 1 + 1 ηκ · vTΛv )−κ dv − 1 Zη ∫ v vTΛv · (1 + 1 ηκ · vTΛv)−κdv, (30) where we use the approximation that log(1 + x) ≈ x. We define a sequence as Tk = 1 + 1ηκ ·∑d j=k λjv 2 j for k = 1, · · · , d. We first calculate the normalization constant Z. Z = ∫ (1 + 1 ηκ · vTΛv)−κdw = ∫ (1 + 1 ηκ · d∑ j=1 λjv 2 j ) −κdv =((ηκ)−1λ1) − 12 ∫ T −κ+ 12 2 B( 1 2 , κ− 1 2 )dv = d∏ j=1 ((ηκ)−1λj) − 12B( 1 2 , κ− j 2 ) = d∏ j=1 ((ηκ)−1λj) − 12 · √ πdΓ(κ− d2 ) Γ(κ) We define Zj = ((ηκ)−1λj)− 1 2B ( 1 2 , κ− j 2 ) . 
For the third term in Eq.(30), we have 2Z · III = ∫ v vT v(1 + 1 ηκ vTΛv)−κdv = ∫ v2,···vd ∫ v1 v21 ( 1 + 1 ηκ · vTΛv )−κ dv1 + Z1 ( d∑ j=2 v2j )( 1 + 1 ηκ · d∑ j=2 λjv 2 j )−κ+ 1 2 dv2··· ,vd = ∫ v2,···vd T−κ2 ∫ v1 v21 ( 1 + (ηκ)−1λ1v 2 1 T2 )−κ dv1 + Z1 ( d∑ j=2 v2j )( 1 + 1 ηκ · d∑ j=2 λjv 2 j )−κ+ 1 2 dv2··· ,vd = ∫ v2,··· ,vd T−κ2 ∫ ( T2 (ηκ)−1λ1 ) 3 2 y 1 2 (1 + y)−κ dy + Z1 ( d∑ j=2 v2j )( 1 + 1 ηκ · d∑ j=2 λjv 2 j )−κ+ 1 2 dv2··· ,vd = ∫ v2,··· ,vd ((ηκ)−1λ1) − 3 2 T −κ+ 3 2 2 B ( 3 2 , κ− 3 2 ) + Z1 ( d∑ j=2 v2j )( 1 + 1 ηκ · d∑ j=2 λjv 2 j )−κ+ 1 2 dv2··· ,vd =( λ1 ηκ )− 3 2B ( 3 2 , κ− 3 2 )∫ v2,··· ,vd T −κ+ 3 2 2 dv2··· ,vd + ∫ v2,··· ,vd Z1 ( d∑ j=2 v2j )( 1 + 1 ηκ · d∑ j=2 λjv 2 j )−κ+ 1 2 dv2··· ,vd For term ∫ v2,··· ,vd T − 1κ+ 3 2 2 dv2··· ,vd in above equation, we have ∫ v2,··· ,vd T −κ+ 32 2 dv2··· ,vd = ∫ v3,··· ,vd T−κ+23 ((ηκ) −1λ2) − 12B ( 1 2 , κ− 2 ) dv3,··· ,vd = ∫ v4,··· ,vd T −κ+ 52 4 ((ηκ) −1λ2) − 12 ((ηκ)−1λ3) − 12B ( 1 2 , κ− 5 2 ) B ( 1 2 , κ− 2 ) dv4,··· ,vd = ∫ vd T −κ+ 12 + 1 2×d d d−1∏ j=2 ((ηκ)−1λj) − 12 d−1∏ j=2 B ( 1 2 , κ− ( j 2 + 1) ) dvd = d∏ j=2 ((ηκ)−1λj) − 12 d∏ j=2 B ( 1 2 , κ− ( j 2 + 1) ) Let Aj = ((ηκ)−1λj)− 3 2B ( 3 2 , κ− ( j 2 + 1) ) . According to the above two equations, we can get the recursion 2Z ∫ vT vT−κ1 dv =A1 · ∫ T −κ+ 32 2 + Z1 ∫ v2,··· ,vd d∑ j=2 v2j T−κ+ 122 dv2··· ,vd =A1 · ∫ T −κ+ 3−12 2 dv2···vd + Z1 ·A2 ∫ T −κ+ 42 3 dv3··· ,vd + Z1Z2 ∫ d∑ j=3 v2j T−κ+ 123 dv3··· ,vd = d−1∑ j=1 Aj j−1∏ k=1 Zk ∫ T −κ+ j+1+12 j+1 dvj+1,··· ,vd + d−1∏ k=1 Zk ∫ v2dT −κ+ d−12 d dvd = d−1∑ j=1 ( λj ηκ )− 3 2B ( 3 2 , κ− ( j 2 + 1) ) j−1∏ k=1 ( λk ηκ )− 1 2B ( 1 2 , κ− k 2 ) d∏ s=j+1 (( λs ηκ )− 1 2 d∏ s=j+1 B ( 1 2 , κ− (s 2 + 1) ) + d−1∏ j=1 ( λj ηκ )− 1 2B( 1 2 , κ− j 2 − 1) · (λd ηκ )− 3 2B( 3 2 , κ− (d 2 + 1)) = √ πdΓ(κ− d2 − 1)Tr(H −1Σg) 2Γ(κ) √ (ηκ)−(d+2) det(H−1Σg) We have III = √ πdΓ(κ− d2 − 1)Tr(H −1Σg) 4Γ(κ) √ (ηκ)−(d+2) det(H−1Σg) · d∏ j=1 ((ηκ)−1λj) 1 2 · Γ(κ)√ πdΓ(κ− d2 ) = ηκTr(H−1Σg) 4(κ− d2 − 1) Similarly, for the fourth term in Eq.(30), we have IV = κd 2(κ− d2−1) . Combining all the results together, we can get KL(p||p′) = 12 log det(H) (ηκ)d det(Σg) + log Γ(κ) Γ(κ− d2 ) + Tr(ηΣgH −1)−2d 4(1− 1κ ( d 2−1)) + d2 log 2. Using the fact that log Γ(κ) Γ(κ− d2 ) ≤ d2 log κ, we have KL(p||p ′) ≤ 12 log det(H) det(Σg) + Tr(ηΣgH −1)−2d 4(1− 1κ ( d 2−1)) + d2 log 2 η . 7.5 IMPLEMENTATION DETAILS OF THE EXPERIMENTS 7.5.1 OBSERVATIONS ON THE COVARIANCE MATRIX In this section, we introduce the settings on experiments of the quadratic approximation of covariance of the stochastic gradient on plain convolutional neural network (CNN) and ResNet. For each model, we use gradient descent with small constant learning rate to train the network till it converges. The converged point can be regarded as a local minimum, denoted as w∗. As for the detailed settings of the CNN model, the structure for plain CNN model is input → Conv1→ maxpool → Conv2→ maxpool → fc1→ Relu→ fc2→ output. Both Conv1 and Conv2 use 5 × 5 kernels with 10 channels and no padding. Dimensions of full connected layer fc1 and fc2 are 1600 × 50 and 50 × 10 respectively. We randomly sample 1000 images from FashionMNIST (Xiao et al., 2017) dataset as training set. The initialization method is the Kaiming initialization (He et al., 2015) in PyTorch. The learning rate of gradient descent is set to be 0.1. After 3000 iterations, GD converges with almost 100% training accuracy and the training loss being 1e−3. 
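The quadratic-approximation measurement used in this subsection boils down to estimating the trace of the minibatch-gradient covariance at a given parameter point. A minimal PyTorch sketch of that estimate is given below; the model, data tensors and batch size are placeholders, and the actual scripts used for the experiments may differ.

```python
import torch

def grad_cov_trace(model, loss_fn, data, targets, batch_size=64, n_batches=100):
    """Estimate Tr(Cov(g_tilde(w))) at the current parameters of `model`
    by sampling minibatches and summing the per-coordinate gradient variances."""
    grads = []
    n = data.size(0)
    for _ in range(n_batches):
        idx = torch.randint(0, n, (batch_size,))
        model.zero_grad()
        loss = loss_fn(model(data[idx]), targets[idx])
        loss.backward()
        g = torch.cat([p.grad.detach().flatten()
                       for p in model.parameters() if p.grad is not None])
        grads.append(g)
    G = torch.stack(grads)                            # (n_batches, n_params)
    return G.var(dim=0, unbiased=True).sum().item()   # trace = sum of coordinate variances

# Trace values at points around w* (as described in this subsection) are obtained by
# shifting the chosen layer's weights, calling grad_cov_trace, and restoring the weights.
```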
As for ResNet, we use the ResNet-18 model (He et al., 2016b) and randomly sample 1000 images from Kaggle's dogs-vs-cats dataset as the training set. The initialization method is the Kaiming initialization (He et al., 2015) in PyTorch. The learning rate of gradient descent is set to 0.001. After 10000 iterations, GD converges with 100% training accuracy and a training loss of about 1e−3. We then calculate the covariance matrix of the stochastic gradient at several points in the local region around w∗. The points are selected according to the formula w∗_{layer L} ± (i × Scale), where w∗_{layer L} denotes the parameters at layer L, and i × Scale, i ∈ [N], determines the distance from w∗_{layer L}. When we select points according to this formula by changing the parameters at layer L, we fix the parameters of the other layers. For both the CNN model and the ResNet-18 model, we select 20 points by setting i = 1, · · · , 10. For example, for the CNN model, we choose the 20 points by changing the parameters at the Conv1 layer with Scale = 0.001 and at the Conv2 layer with Scale = 0.0001, respectively. For ResNet-18, we choose the 20 points by changing the parameters of a convolutional layer in the first residual block with Scale = 0.0001 and in the second residual block with Scale = 0.0001, respectively. The results are shown in Figure 1. The x-axis denotes the distance of the point from the local minimum and the y-axis shows the trace of the covariance matrix at each point. The results show that the covariance of the noise in SGD is indeed not constant, and it can be well approximated by a quadratic function of the state (the blue line in the figures), which is consistent with our theoretical results in Section 3.1. 7.5.2 SUPPLEMENTARY EXPERIMENTS ON PARAMETER DISTRIBUTIONS OF DEEP NEURAL NETWORKS For Figure 3(a), we train LeNet-5 on the MNIST dataset using SGD with a constant learning rate η = 0.03 for each batch size until it converges. The plotted parameters are conv2.weight in LeNet-5. For Figure 3(b), we train ResNet-18 on CIFAR-10 using SGD with momentum. We apply a RandomCrop to the training set with size 32 × 32 and padding = 4, followed by a RandomHorizontalFlip. During training, the momentum is set to 0.9 and the weight decay to 5e−4. The initial learning rate of SGD is 0.1 and we apply a learning rate decay of 0.1 at the 150-th and 250-th epochs, respectively. We train the model until it converges after 250 epochs. The plotted parameters are layer1.1.conv2.weight in ResNet-18. We also observe the parameter distributions of many pretrained models. Details of the pre-trained models can be found at https://pytorch.org/docs/stable/torchvision/models.html. Figure 7 shows that the distribution of parameters trained by SGD can be well fitted by the power-law distribution. The parameters in this figure are randomly selected: features.10.weight, features.14.weight, features.5.expand3x3.weight, Mixed_6d.branch7x7_3.conv.weight, layer4.2.conv3.weight and features.denseblock2.denselayer1.conv2.weight for VGG-16, AlexNet, SqueezeNet 1.0, Inception v3, Wide ResNet-50-2 and DenseNet-121, respectively. A Q-Q plot is created by plotting quantiles of two probability distributions against one another, which provides an assessment of goodness of fit through how close the solid line is to the dashed line. From Figure 8, it is clear that the solid lines in the bottom plots are closer to the dashed lines in most cases, which indicates that the network parameters are better fitted by the power-law distribution. 
Moreover, the solid lines in the upper plots deviate severely from the dashed lines in the tail of the distribution, while those in the bottom plots do not, which means that the distribution of parameters is indeed heavy-tailed. 7.5.3 FURTHER EXPLANATION ON EXPERIMENTS IN SECTION 5.2 For the experiments on the 2-D model, we also calculate the coefficient of the second-order term of the quadratic curve shown in Figure 4(b); its value is roughly 30, which matches the result in Figure 4(c) in the sense that the result for SGD is similar to that for the power-law dynamic with λ_1 ≈ 32. 7.5.4 ESCAPING EFFICIENCY ON NEURAL NETWORK We follow the settings in (Zhu et al., 2019). For the convenience of the readers, we give the details of this setting again here. We use a corrupted FashionMNIST dataset, which contains 1000 images with correct labels and another 200 images with random labels, as the training data. A small LeNet-like network with 11,330 parameters is used. First, we run full gradient descent to reach parameters w∗ near the global minimum. Then we continue training using both the Langevin dynamic (GLD) and the power-law dynamic (PLD). Following the setting of Zhu et al. (2019), the learning rates for GD, GLD and PLD are η_GD = 0.1, η_GLD = 0.07 and η_PLD = 0.07, respectively. For GLD, the noise standard deviation is σ = 10^{−4}, as tuned in Zhu et al. (2019). For our PLD, w_{t+1} = w_t − η∇L(w_t) + η · α∇L(w_t)·√(1 + β(w_t − w∗)²) ⊙ ξ, where α, β are hyper-parameters, ξ ∼ N(0, I), and ⊙ stands for the Hadamard product. Here we select α = 2.4, β = 2 after a grid search. Expected sharpness is measured as Eν∼
1. What is the novel contribution of the paper regarding power-law dynamics in SGD?
2. What are the strengths and weaknesses of the analytical results in the paper?
3. How does the reviewer assess the PAC generalization bound and its analysis in the paper?
4. What are the limitations of the experimental results in the paper?
5. Are there any typos or minor errors in the review that need correction?
Review
This paper proposes to use power-law dynamics to approximate the state-dependent gradient noise in SGD, and analyzes its escaping efficiency compared with previous dynamics.

Strengths:
1. To the best of my knowledge, it is novel to use power-law dynamics to analyze the state-dependent noise in SGD.
2. Although they still rest on strong assumptions about the covariance structure, the analytical results based on the power-law dynamics are interesting. For example, they indicate that the so-called kappa distribution highly depends on the fluctuations of the curvature over the training data. This is consistent with the following work, so I suggest the authors provide some discussion of it: Wu et al., 2018. How SGD selects the global minima in over-parameterized learning: A dynamical stability perspective. In Advances in Neural Information Processing Systems (pp. 8279-8288).

Weaknesses & Issues:
1. The analytical results seem to depend strongly on the covariance structure assumption, i.e., that C(w) is diagonally dominant according to empirical observation. Is there any theoretical justification for this, even in simplified cases?
2. The PAC generalization bound and the analysis that follows are a little ambiguous. Firstly, in the current deep learning theory community, the relationship between flatness (and even how to define a proper flatness measure) and generalization is still mysterious and controversial, and it depends on many factors. This work uses one type of flatness measure, the determinant of H, and shows that flatter minima generalize better by considering only the KL term. However, the first term also includes the Hessian and might also affect the generalization bound. Thus, the conclusion appears a little problematic. The authors say that the generalization error decreases as kappa increases and that infinite kappa results in the Langevin dynamics. Then the question is: what is the difference between the power-law dynamics and the Langevin dynamics in terms of generalization? My view on the ambiguous analysis is that the authors attempt to answer extremely challenging questions but are left with many questionable points.
3. The experiments might not be sufficient. I do not think fitting the parameter distribution to limited empirical observations is an appropriate way of justification. At least from visual inspection, there are many alternatives besides the power-law distribution that would fit, as Fig. 3 shows. Regarding the comparison of escaping efficiency, the results only show the success rate; evidence of the polynomial versus exponential difference should be provided. Also, practical networks and datasets should be considered to provide stronger evidence.

If the authors can resolve these issues carefully, I would raise the score.

Typos: "Eq. 4" should be "Eq. 3" below Equation 3.
ICLR
Title The Neural Data Router: Adaptive Control Flow in Transformers Improves Systematic Generalization Abstract Despite progress across a broad range of applications, Transformers have limited success in systematic generalization. The situation is especially frustrating in the case of algorithmic tasks, where they often fail to find intuitive solutions that route relevant information to the right node/operation at the right time in the grid represented by Transformer columns. To facilitate the learning of useful control flow, we propose two modifications to the Transformer architecture, copy gate and geometric attention. Our novel Neural Data Router (NDR) achieves 100% length generalization accuracy on the classic compositional table lookup task, as well as near-perfect accuracy on the simple arithmetic task and a new variant of ListOps testing for generalization across computational depths. NDR’s attention and gating patterns tend to be interpretable as an intuitive form of neural routing. Our code is public.1 1 INTRODUCTION Neural networks (NNs) may easily learn certain training sets, but typically they do not generalize on systematically different test sets. Examples of systematic generalization (Fodor et al., 1988) include generalization to sequences longer than those seen during training—productivity, and algorithmic combinations of previously learned rules—systematicity. Despite recent efforts (Bahdanau et al., 2019; Korrel et al., 2019; Lake, 2019; Li et al., 2019; Russin et al., 2019; Csordás et al., 2021), systematic generalization generally remains unsolved (Fodor & McLaughlin, 1990; Lake & Baroni, 2018; Liska et al., 2018; Greff et al., 2020; Hupkes et al., 2020). On some datasets, the best performing models are neuro-symbolic hybrids (Chen et al., 2020; Liu et al., 2020) using task-specific symbolic functions. However, their applicability to other datasets remains limited (Furrer et al., 2020; Shaw et al., 2020). A big question is: which type of architectural inductive bias encourages the training process to select “good” solutions which generalize systematically? The popular Transformers (Vaswani et al., 2017) also often fail to generalize on algorithmic tasks (e.g. Liska et al. (2018); Dubois et al. (2020); Chaabouni et al. (2021); Csordás et al. (2021); Ontañón et al. (2021)), even on tasks with intuitive solutions that can be simply expressed in terms of Transformer attention patterns. Given an input sequence of length N and a Transformer encoder of depth T , solving an algorithmic task is often all about routing the relevant information to the right node/operation at the right time in the T -by-N grid represented by Transformer columns (illustrated in Figure 1/Left). Effectively the task is to learn to draw an adaptive control flow on the canvas of Transformer columns. In fact, recent work by Weiss et al. (2021) introduced a programming language called RASP, which is specifically designed to express solutions to sequence processing problems, and which has a direct equivalent to the operations in Transformer encoders. However, it is shown that Transformers learn solutions expressed in RASP only through intermediate supervision of attention patterns, and sometimes, even such supervision fails. Generally speaking, Transformers fail to find easily interpretable and/or symbolic solutions to algorithmic tasks. 
We conversely hypothesize that attention-based NNs that are able to find intuitive solutions (achieving interpretable attention patterns) could improve systematic generalization. 1https://github.com/robertcsordas/ndr Here we point out that regular Transformers lack some basic ingredients for learning such “intuitive” solutions to algorithmic problems. As a remedy, we propose simple architectural modifications to help them learn data routing. As a first step towards validating our model, we focus on the popular length generalization task of compositional table lookup (CTL; Liska et al. (2018); Hupkes et al. (2019); Dubois et al. (2020)), as well as two more complex tasks: a simple arithmetic task and a variant of ListOps (Nangia & Bowman, 2018) designed to test the compositional generalization ability of NNs. Our novel Neural Data Router (NDR) achieves 100% generalization accuracy (never reported before; Dubois et al. (2020)) on the CTL task, and obtains nearly perfect accuracy on both the proposed simple arithmetic and ListOps tasks. We show that the attention and gating patterns of NDR tend to be interpretable as plausible control flows. 2 IMPROVING TRANSFORMERS FOR LEARNING ADAPTIVE CONTROL FLOW We argue that the following components are needed to build Transformers capable of learning adaptive control flow. First, composing known operations in an arbitrary order requires that all operations are available at every computational step. This can be easily achieved by sharing the weights of the layers, as is done in Universal Transformers (Dehghani et al., 2019). Second, the network should be sufficiently deep, at least as deep as the deepest data dependency in the computational graph built from elementary operations (e.g., in the case of a parse tree, this is the depth of the tree). Otherwise, multiple operations must be fused into a single layer and hinder natural and elegant compositions. Third, inputs in some columns should be kept unchanged until it is their turn to be processed. The regular Transformer lacks a mechanism for skipping the whole transformation step by simply copying the input to the next step/layer. We propose a special gating function, copy gate, to implement such a mechanism (Sec. 2.1). Finally, many algorithmic tasks require combining several local computations in the right order. This typically implies that attention should not focus on all possible matches at a given time but only on the closest match. We propose and investigate a new type of attention with a corresponding inductive bias called geometric attention (Sec. 2.2). Using both the geometric attention and copy gate, our model implements a “neural data routing mechanism”, which can adaptively serialize the input problem. We refer to the resulting new Transformer as Neural Data Router (NDR). In the experimental section (Sec. 3), we evaluate this model on three algorithmic tasks requiring length generalization and demonstrate its effectiveness. 2.1 COPY GATE: LEARNING TO SKIP OPERATIONS (VERTICAL FLOW) Each layer of the regular Transformer consists of one self-attention and one feedforward block. The input to each of these blocks is directly connected to the corresponding output via a residual connection (Srivastava et al., 2015; He et al., 2016). However, such a connection does not allow for skipping the transformation of the entire layer and simply passing the unchanged input to the next layer. Here we propose to add an explicit gate, which we call copy gate, to facilitate such a behavior. 
We consider a T-layer Transformer encoder and an input sequence of length N. Since each layer corresponds to one computational step, we often refer to a layer as a step t. We denote the Transformer state of column i in layer t as $h^{(i,t)} = H_{t,i} \in \mathbb{R}^d$ where d is the state size, and $H_t \in \mathbb{R}^{N \times d}$ denotes the states of all N columns in layer t. In the copy gate-augmented Transformer (Figure 5 in the appendix), each column i in layer (t+1) processes the input $H_t$ similarly to regular Transformers:

$$a^{(i,t+1)} = \mathrm{LayerNorm}(\mathrm{MultiHeadAttention}(h^{(i,t)}, H_t, H_t) + h^{(i,t)}) \qquad (1)$$
$$u^{(i,t+1)} = \mathrm{LayerNorm}(\mathrm{FFN}_{\mathrm{data}}(a^{(i,t+1)})) \qquad (2)$$

using the standard multi-head attention operation (Vaswani et al., 2017) MultiHeadAttention with a query obtained from $h^{(i,t)}$ and keys/values from $H_t$, but the output is gated (using $g^{(i,t+1)} \in \mathbb{R}^d$) as:

$$g^{(i,t+1)} = \sigma(\mathrm{FFN}_{\mathrm{gate}}(a^{(i,t+1)})) \qquad (3)$$
$$h^{(i,t+1)} = g^{(i,t+1)} \odot u^{(i,t+1)} + (1 - g^{(i,t+1)}) \odot h^{(i,t)} \qquad (4)$$

We use the basic two-layer feedforward block (Vaswani et al., 2017) for both $\mathrm{FFN}_{\mathrm{data}}$ and $\mathrm{FFN}_{\mathrm{gate}}$, which transforms an input $x \in \mathbb{R}^d$ to:

$$\mathrm{FFN}(x) = W_2 \max(W_1 x + b_1, 0) + b_2 \qquad (5)$$

but with separate parameters and different dimensionalities: for $\mathrm{FFN}_{\mathrm{data}}$, $W_1^{\mathrm{data}} \in \mathbb{R}^{d_{\mathrm{FF}} \times d}$ and $W_2^{\mathrm{data}} \in \mathbb{R}^{d \times d_{\mathrm{FF}}}$, while for $\mathrm{FFN}_{\mathrm{gate}}$, $W_1^{\mathrm{gate}}, W_2^{\mathrm{gate}} \in \mathbb{R}^{d \times d}$, with biases $b_1^{\mathrm{data}} \in \mathbb{R}^{d_{\mathrm{FF}}}$ and $b_2^{\mathrm{data}}, b_1^{\mathrm{gate}}, b_2^{\mathrm{gate}} \in \mathbb{R}^d$. When the gate is closed, i.e. $g^{(i,t+1)} = 0$ in Eq. 4, the entire transformation is skipped and the input is copied over to the next layer, $h^{(i,t+1)} = h^{(i,t)}$. Crucially, we parameterize the gate (Eq. 3) as a function of the output of the self-attention (Eq. 1), such that the decision to copy or transform the input for each column depends on the states of all columns. This is a crucial difference compared to previously proposed gatings in Transformers, which are solely motivated by training stability (Parisotto et al., 2020) or by a common practice from convolution-based models (Chaabouni et al., 2021). None of the previous approaches can implement the behavior of our copy gate (see Sec. 6 on related work). The bias of the gate $b_2^{\mathrm{gate}}$ is initialized to −3 (Hochreiter & Schmidhuber, 1997). This ensures that no update happens initially to create a better gradient flow between layers. It also encourages the model to skip layers unless they have an important contribution in the corresponding step.

2.2 GEOMETRIC ATTENTION: LEARNING TO ATTEND TO THE CLOSEST MATCH (HORIZONTAL FLOW)

We propose geometric attention designed to attend to the closest matching element. Like in regular self-attention, given an input sequence $[x^{(1)}, x^{(2)}, ..., x^{(N)}]$ with $x^{(i)} \in \mathbb{R}^{d_{\mathrm{in}}}$, each input is projected to key $k^{(i)} \in \mathbb{R}^{d_{\mathrm{key}}}$, value $v^{(i)} \in \mathbb{R}^{d_{\mathrm{value}}}$, and query $q^{(i)} \in \mathbb{R}^{d_{\mathrm{key}}}$ vectors, and the dot product is computed for each key/query combination. In our geometric attention, the dot product is followed by a sigmoid function to obtain a score between 0 and 1:

$$P_{i,j} = \sigma(k^{(j)\top} q^{(i)}) \qquad (6)$$

which will be treated as a probability of the key at (source) position j matching the query at (target) position i. These probabilities are finally converted to the attention scores $A_{i,j}$ as follows:

$$A_{i,j} = P_{i,j} \prod_{k \in S_{i,j}} (1 - P_{i,k}) \qquad (7)$$

where $S_{i,j}$ denotes the set of all (source) indices which are closer to i than j is to i, and when two indices have the same distance to i, we consider the one which is to the right of i (i.e., greater than i) to be closer, i.e.,

$$S_{i,j} = \begin{cases} \{k \in \{1, ..., N\} \setminus \{i, j\} : |i - k| < |i - j|\}, & \text{if } i < j \\ \{k \in \{1, ..., N\} \setminus \{i, j\} : |i - k| \le |i - j|\}, & \text{if } j < i \end{cases} \qquad (8)$$

In addition, we explicitly zero out the diagonal by setting $A_{i,i} = 0$ for all $i = 1, ..., N$.
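The following is a small, direct O(N²) sketch of Eqs. 6-8 (our own illustrative code; the numerically stable log-space formulation and the directional encoding discussed next are omitted, and the function name is ours):

```python
import torch

def geometric_attention_scores(q: torch.Tensor, k: torch.Tensor) -> torch.Tensor:
    """q, k: (N, d_key). Returns A: (N, N) attention scores following Eqs. 6-8."""
    N = q.shape[0]
    P = torch.sigmoid(q @ k.T)  # Eq. 6: match probability for every (target i, source j)
    A = torch.zeros_like(P)
    for i in range(N):
        # Order sources by distance from i; on ties the right neighbour counts as closer (Eq. 8).
        order = sorted((j for j in range(N) if j != i),
                       key=lambda j: (abs(i - j), 0 if j > i else 1))
        no_closer_match = 1.0  # running product of (1 - P[i, k]) over all closer sources
        for j in order:
            A[i, j] = P[i, j] * no_closer_match          # Eq. 7
            no_closer_match = no_closer_match * (1.0 - P[i, j])
    return A  # the diagonal stays zero, i.e. A[i, i] = 0
```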
The ordering of source indices is illustrated in Figure 1/Right. The resulting scores $A_{i,j}$ are the attention scores used to compute the weighted averages of the value vectors. By using the terms $(1 - P_{i,k})$ in Eq. 7, when there is a match, it downscales any other more distant matches. Two recent works (Brooks et al., 2021; Banino et al., 2021) use such a parameterized geometric distribution in the form of Eq. 7 (see Sec. 6 on related work). The resulting attention function has a complexity of O(N²), similar to the regular self-attention used in Transformers (Vaswani et al., 2017). Eq. 7 can be implemented in a numerically stable way in log space. The products can then be calculated using cumulative sums, subtracting the elements for the correct indices in each position.

Directional encoding. In practice, we augment Eq. 6 with an additional directional encoding. In fact, the only positional information available in the geometric attention presented above is the ordering used to define the product in Eqs. 7-8. In practice, we found it crucial to augment the score computation of Eq. 6 with additional directional information, encoded as a scalar $D_{i,j} \in \mathbb{R}$ for each target/source position pair (i, j):

$$D_{i,j} = \begin{cases} W_{LR}\, h^{(i)} + b_{LR}, & \text{if } i \le j \\ W_{RL}\, h^{(i)} + b_{RL}, & \text{if } i > j \end{cases} \qquad (9)$$

where $h^{(i)} \in \mathbb{R}^d$ denotes the input/state at position i and $W_{LR}, W_{RL} \in \mathbb{R}^{1 \times d}$, $b_{LR}, b_{RL} \in \mathbb{R}$ are trainable parameters. This directional information is integrated into the score computation of Eq. 6 as follows (akin to how Dai et al. (2019) introduce the relative positional encoding (Schmidhuber, 1992) as an extra term in the computation of attention scores):

$$P_{i,j} = \sigma\big(\alpha\, (W_q h^{(i)} + b_q)^\top W_{k,E}\, h^{(j)} + \beta D_{i,j} + \gamma\big) \qquad (10)$$

where the matrix $W_q \in \mathbb{R}^{d_{\mathrm{head}} \times d}$ maps the states to queries, $b_q \in \mathbb{R}^{d_{\mathrm{head}}}$ is a bias for queries, $W_{k,E} \in \mathbb{R}^{d_{\mathrm{head}} \times d}$ maps states to keys (we note that $d_{\mathrm{head}}$ is typically the size of the key, query and value vectors for each head, $d_{\mathrm{head}} = d / n_{\mathrm{heads}}$), and $\alpha, \beta, \gamma \in \mathbb{R}$ are learned scaling coefficients and bias, initialized to $\alpha = 1/\sqrt{d_{\mathrm{head}}}$, $\beta = 1$, $\gamma = 0$. Using this additional directional information, each query (position i) can potentially learn to restrict its attention to either the left or right side.

3 EXPERIMENTS

We evaluate the proposed methods on three tasks: the compositional table lookup (Liska et al., 2018; Hupkes et al., 2019), a custom variant of ListOps (Nangia & Bowman, 2018), and a simple arithmetic task which we propose. In all cases, the task is designed to test the compositional generalization ability of NNs: the model has to learn to apply operations seen during training in a longer/deeper compositional way (productivity). Further experimental details for each task can be found in Appendix C.

3.1 COMPOSITIONAL TABLE LOOKUP

Task. The compositional table lookup task (Liska et al., 2018; Hupkes et al., 2019; Dubois et al., 2020) is constructed based on a set of symbols and unary functions defined over these symbols. Each example in the task is defined by one input symbol and a list of functions to be applied sequentially, i.e., the first function is applied to the input symbol and the resulting output becomes the input to the second function, and so forth. There are eight possible symbols. Each symbol is traditionally represented by a 3-bit bitstring (Liska et al., 2018). However, in practice, they are simply processed as one token (Dubois et al., 2020). The functions are bijective and randomly generated. Each function is represented by a letter.
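To make the setup concrete, here is a minimal sketch of how such a table lookup instance could be constructed and evaluated; the symbol encoding, letters and helper names are illustrative and are not taken from the dataset-generation code (an example in the actual task format follows below).

```python
import random

def make_ctl_instance(n_symbols: int = 8, function_names: str = "abcdefghi", seed: int = 0):
    """Create random bijective lookup tables: one permutation of the symbols per letter."""
    rng = random.Random(seed)
    symbols = list(range(n_symbols))
    return {name: dict(zip(symbols, rng.sample(symbols, n_symbols)))
            for name in function_names}

def evaluate(tables, start_symbol: int, functions: str) -> int:
    """Apply the functions left-to-right to the start symbol (forward presentation order)."""
    value = start_symbol
    for name in functions:
        value = tables[name][value]
    return value

tables = make_ctl_instance()
# The composition c(b(a(5))), applied as 5 -> a -> b -> c:
print(evaluate(tables, 5, "abc"))
```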
An example input is ‘101 d a b’, which corresponds to the expression b(a(d(101))); the model has to predict the correct output symbol. We note that there exists a sequence-to-sequence variant of this task (Dubois et al., 2020) where the model has to predict all intermediate steps (thus trained with intermediate supervision). We directly predict the final output. An ideal model should be able to solve this task independently of the presentation order, that is, it should not matter whether the task is encoded as ‘101 d a b’ or ‘b a d 101’. We thus study both forward (former) and backward (latter) variants of the task. To evaluate systematic generalization, the train/valid/test sets reflect different numbers of compositions: samples with 1-5/6-8/9-10 operations, respectively. To the best of our knowledge, no previous work has reported perfect accuracy on this task through an NN. We refer the readers to Sec. 6 for further details on the previous work.

Results. We consider five different baselines: an LSTM (Hochreiter & Schmidhuber, 1997), bidirectional LSTM (Schuster & Paliwal, 1997), DNC (Graves et al., 2016; Csordás & Schmidhuber, 2019), Universal Transformers (Vaswani et al., 2017; Dehghani et al., 2019), and their relative position variants (Csordás et al., 2021). For Transformers, the prediction is based on the last column in the final layer (we conduct an ablation study on this choice in Appendix A). The hyper-parameters used for each model can be found in Table 7 in the appendix. We also provide an ablation study on the number of layers needed for generalization in Appendix A, which supports our claim on the necessity for a “sufficiently” deep architecture. The main results on this task are shown in Table 1. The LSTM and DNC perform well in the forward variant, achieving perfect generalization for longer sequences, but fail on the backward variant. This is not surprising since in the forward case, input symbols are presented in the “right” processing order to the LSTM. As expected, the bidirectional LSTM performs well in both presentation orders, since one of its processing directions is always aligned with the order of computation. However, for an arbitrary task, the order of processing is not given. For example, for ListOps (Sec. 3.3), the processing should start from the deepest point in the parse tree, which is probably somewhere in the middle of the sequence. The experiments on other tasks (Sec. 3.2 and 3.3) requiring arbitrary processing orders show that bidirectional LSTMs do not generalize well in such tasks. This is not satisfactory since our goal is to create a generic architecture which can solve arbitrary problems with an arbitrary underlying input processing order. While the Transformer seems to be a good candidate for learning problem-dependent processing orders, the baseline Transformer variants fail to generalize in this task in both directions. By introducing the copy gate (Sec. 2.1), the relative Transformer can solve the forward task, but not the backward one. Our analysis showed that the network learns to attend to the last operation based on the relative position information. Since the result is read from the last column, this position changes with the sequence length. The model thus fails to generalize to such arbitrary offsets. To address this issue, we introduce a simple mechanism to let the model choose between absolute and relative positional encodings at each position (see Appendix B).
The resulting model effectively manages to use the absolute position for the prediction and perform well in both directions. However, such a combination of absolute/relative positional encoding might be an overly specific bias. A more generic solution, geometric attention (Sec. 2.2), also achieved perfect generalization and was found easier to train. We present the corresponding visualization of our model in Sec. 4.

3.2 SIMPLE ARITHMETIC

In order to validate the success of the proposed model on a task that involves more complex data flows and operations, we propose the simple arithmetic task.

Task. The task is to execute an arithmetic expression consisting of nested modulo 10 additions and multiplications. This requires the model to process tree-structured data flows, which is presumably more difficult than the sequential processing required for the CTL task. Each operation is surrounded by brackets, such that the boundaries of operations are easy to determine. For example, ‘((4*7)+2)’ should evaluate to ‘0’ (30 modulo 10). The expressions are generated randomly. The tree depth is up to 5 for the training set, 6 for the validation set, and 7-8 for the test set. The depth is measured as the number of operations, ignoring the leaves, so the example above has a depth of 2. The sequence length is limited to at most 50 tokens.

Results. Table 2 shows the results. All considered models perform well on the IID validation data, but only the NDR performs well on the generalization test set, where it achieves a near-perfect accuracy of 98%. We also note that the NDR learns very quickly: while all other models require about 200 K steps to converge, the NDR achieves near-perfect accuracy after 50 K steps of training.

3.3 LISTOPS

We also evaluate our model on a variant of the ListOps task (Nangia & Bowman, 2018), which is a popular task commonly used to evaluate parsing abilities of NNs (Havrylov et al., 2019; Shen et al., 2019; Xiong et al., 2021; Tay et al., 2021; Irie et al., 2021). Some special architectures such as Chowdhury & Caragea (2021) can almost perfectly generalize to longer sequences on this task. However, as far as we know, no Transformer variant has been reported to be fully successful.

Task. The task consists of executing nested list operations written in prefix notation. All operations have a list of arguments that can be either a digit (from 0 to 9) or recursively another operation with its own list of arguments. The operations are min, max, median and sum. The sum is modulo 10, and the median is followed by the floor function such that the output of any operation lies between 0 and 9. For example: [MED 4 8 5 [MAX 8 4 9 ] ] should return 6. There are two well-known variants: the original one by Nangia & Bowman (2018) and the “Long Range Arena” variant by Tay et al. (2021), which have different maximum numbers of arguments in each function and maximum sequence lengths. In both variants, there is no strict control of the depth of data samples: there is simply a certain pre-defined probability that each argument in the list is expanded into another list (which may increase the tree depth). This is not suitable for evaluating systematic generalization in terms of compositionality (over the problem depth). We propose instead to generate clean train, valid, and test splits with disjoint depths: up to depth 5 for training, depth 6 for validation and depths 7 and 8 for test.
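For reference, a minimal evaluator for the task semantics described above is sketched below. This is our own illustrative code (not the dataset-generation code), and the tokenization shown is only one possible choice; it reproduces, e.g., [MED 4 8 5 [MAX 8 4 9 ] ] = 6.

```python
import math

def eval_listops(tokens):
    """Evaluate a prefix-notation ListOps expression given as a list of tokens."""
    def median_floor(xs):
        xs = sorted(xs)
        n = len(xs)
        mid = xs[n // 2] if n % 2 else (xs[n // 2 - 1] + xs[n // 2]) / 2
        return math.floor(mid)

    ops = {"SM": lambda xs: sum(xs) % 10, "MIN": min, "MAX": max, "MED": median_floor}

    def parse(pos):
        tok = tokens[pos]
        if tok.startswith("["):                  # an operation token such as '[MAX'
            op, args, pos = tok[1:], [], pos + 1
            while tokens[pos] != "]":
                value, pos = parse(pos)
                args.append(value)
            return ops[op](args), pos + 1        # skip the closing ']'
        return int(tok), pos + 1                 # a digit argument

    result, _ = parse(0)
    return result

print(eval_listops("[MED 4 8 5 [MAX 8 4 9 ] ]".split()))  # -> 6
```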
Importantly, we make sure that a depth-K sample effectively requires computation until depth-K (otherwise min, max, and med operations could potentially find the output without executing all of its arguments). By dissociating the splits by the depth, we can clearly identify models which fail to generalize compositionally. Apart from the depth specifications, all train/valid/test sets share the same settings as follows: the maximum sequence length is 50 (tokens), the probability of recursively sampling another function inside a list is 30% at each position, and the maximum number of arguments for a function is 5. The train set consists of 1M, the valid and test sets of 1K sequences. Results. Table 3 shows the results. Like on the other tasks, the baseline LSTM and Transformers do not generalize well on the test set consisting of deeper problems, while they achieve a near-perfect accuracy on IID data. In contrast, our model achieves near-perfect generalization. 4 ANALYSIS In this section, we provide some visualizations of attention and gating patterns of the NDR and the corresponding analyses. For more visualizations, we refer the readers to Appendix D. Compositional Table Lookup. Figure 2 shows the gating and attention patterns of the NDR model for an example of the backward presentation task. As shown in Fig. 2/Bottom, the gates of different columns open sequentially one after another when the input is available for them. Fig. 2/Top shows the corresponding attention maps. Each column attends to the neighbouring one, waiting for its computation to be finished. The behavior of the last column is different: it always attends to the second position of the sequence, which corresponds to the last operation to be performed. ListOps. We can also identify how the NDR processes the data in ListOps. Different attention heads play different roles. We highlight the core observations in Figure 3. The input for this example is: [SM [MED [MIN 1 7 4 [MAX 2 4 0 8 9 ] ] 7 ] 5 [MED 8 5 8 ] 0 7 ]. First of all, we find that there is a head (head 13 in Figure 3, first row) which seems to be responsible for connecting operators and their arguments: the operands/arguments of an operation attend to the operator. In step 0 (t = 0 in the figure), we can recognize that the operations at the deepest level, namely MAX and the second MED have all the arguments ready (as is shown by vertical lines on the columns corresponding to MAX and MED). The model indeed identifies that these two operations are ready to be executed and that they can be processed in parallel (these arguments-to-operation attention patterns remain for a few steps). We note that at this stage, the last argument of MIN is not ready yet ([MIN 1 7 4 [MAX 2 4 0 8 9 ] ]). We can see that only arguments which are already ready (1 7 4) attend to the operator (see the column of MIN). In step 1 (t = 1, 2nd row), we can see that head 5 copies the expected result of MAX, 9 to the column of the operator (we note that this only requires one step as 9 is always the result of MAX when it is one of the arguments of MAX). Similarly in step 2, head 7 (2nd row) seems to copy the result of the second MED, 8 to the operator column. In step 3 (t = 3, 1st row), we recognize that the result of MAX is marked as an argument for MIN in head 13 which is responsible for communication between operators and their arguments. 
This is shown by the new attention which appears at t = 3 in head 13 from the source position MAX to the target position MIN (a pattern which is not visible at t = 2). In head 3, t = 6 (2nd row), the expected result of MIN, which is 1, is copied to the operator, similarly to the patterns we observed above for MAX and MED. In head 13, t = 6 (1st row), all arguments for the first MED are now also recognized (the result of MIN which is 1, and 7). Finally in t = 7 (2nd row), two heads, head 3 and head 5 seem to copy/gather two inputs needed to compute the corresponding median, 1 and 7, and store them in the column of the operator MED. A complete visualization of further steps can be found in Appendix D.2. We noticed that some of the heads do not seem to play a key role; we focused on interpreting those which seem to participate in the main computation. For ListOps, we also partially find the attention patterns described above in the baseline Transformer with relative positional encoding, at least on some inspected examples, which also explains its rather high accuracy. 5 DISCUSSION Learning adaptive serialization. The NDR architecture can be understood as performing adaptive serialization of the problem. A key requirement for reusable computation is decomposing the problem into reusable building blocks, typically applied in sequential steps. The granularity of the decomposition determines the degree of reusability: fusing operations in a single step makes the processing faster (fewer steps), but also more specialized. Learning the most granular solutions is thus preferable for generalization. At the same time, not all processing should happen serially: branches of the computational graph that do not have common data dependencies can be processed independently in parallel, which we empirically observe in our NDR in the ListOps example (Sec. 4). This enables the architecture to get away with a number of computational steps reflecting the depth of the computational graph rather than the length of the input. Bottom up approach for improving model architectures. Transformers have seen tremendous successes across various application domains (Devlin et al., 2019; Brown et al., 2020; Dosovitskiy et al., 2021). Impressive results have been reported when they are scaled up with a large amount of data (Brown et al., 2020). On the other hand, simple tasks like those highlighted in the present work demonstrate that the Transformer architecture still struggles with basic reasoning. Particularly in algorithmic tasks, it is often the case that a sub-optimal choice of architecture/optimization method makes the model fall back to simple memorization. We argue that it is crucial to look at isolated problems which test specific generalization capability. This calls for a bottom-up approach: building on toy tasks that focus on individual aspects of generalization and using them for improving models. 6 RELATED WORK Gating inside Transformers. Several prior works have proposed to use some sort of gating within Transformer architectures (Parisotto et al., 2020; Chaabouni et al., 2021). Our proposed copy gate is different from those as it satisfies two important properties. First, our copy gate allows the model to skip the entire Transformer layer (i.e., both the self-attention and the feedforward blocks) when the gate is closed. Second, the gate function is conditioned on the attention output such that the decision of opening or closing depends on information from all columns. 
While multiple gating variants have been proposed by Parisotto et al. (2020) to stabilize Transformers for reinforcement learning, none of them can produce this behavior. Empirically, we also tried out a few other gating variants which do not satisfy the two properties above; we found them not to improve over regular Transformers in our preliminary experiments on compositional table lookup. Recent work by Chaabouni et al. (2021) also makes use of “gating” in Transformers through a gated linear unit (GLU) activation function commonly used in convolutional NNs (Dauphin et al., 2017). Transformer models with such an activation function were reported to outperform RNN baselines on a systematic generalization task (Dessì & Baroni, 2019). Unlike our copy gate or Parisotto et al. (2020)'s gating, such a gating activation does not have the “residual” term (i.e. a closed gate zeros out the input) which allows the model to skip a transformation. In a more general context, benefits of the GLU activation in Transformers vary across tasks (Irie et al., 2019; Shazeer, 2020). In language modeling, no improvement is typically obtained by using the standard highway gate instead of the residual connection in Transformers (Irie, 2020), while it yields improvements when combined with convolutional layers (Kim & Rush, 2016).

Parameterized geometric distributions. Two recent works (Brooks et al., 2021; Banino et al., 2021) have used a form of parameterized geometric distribution (PGD; in the form of Eq. 7). Brooks et al. (2021) have used such a distribution to parameterize the movement of a pointer on a sequence of instructions. Banino et al. (2021) have used it to implement adaptive computation time (Schmidhuber, 2012; Graves, 2016). We use the PGD to obtain a generic attention mechanism as a replacement of the standard self-attention used in Transformers (Vaswani et al., 2017).

Compositional table lookup. The CTL task was proposed for evaluating the compositional ability of NNs (Liska et al., 2018). Previous works evaluated RNNs, RNNs with attention, and Transformers on this task with limited success (Hupkes et al., 2019; Dubois et al., 2020). Dubois et al. (2020) have proposed a special attention mechanism to augment the recurrent architecture. While they obtained good performance for the forward presentation order, the proposed model failed in the backward one. In contrast, two of our approaches (Sec. 3.1) achieve 100% generalization accuracy for both orders.

Positional encodings. Many previous works have focused on improving positional encoding (Schmidhuber, 1992; Vaswani et al., 2017) for self-attention. Most notably, the relative positional encoding (Schmidhuber, 1992; Shaw et al., 2018; Dai et al., 2019) was found useful for improving systematic generalization of Transformers (Csordás et al., 2021). Here we also present two new approaches related to positional encoding. One is the gated combination of absolute and relative positional encoding (Sec. 3.1; details in Appendix B). We show that absolute positional encoding can complement relative positional encoding. The former enables the model to always attend to a specific position, as is needed for the CTL task in the last step, while the gating allows it to use relative positional encoding for other positions/steps. Second, we introduce directional encoding to augment geometric attention.
Unlike positional encoding which can overfit to a range of positions seen during training, the direction information is found to be robust and to be a crucial augmentation of the geometric attention.

7 CONCLUSION

We proposed a new view on the internal operations of Transformer encoders as a dynamic dataflow architecture between Transformer columns. This overcomes two shortcomings of traditional Transformers: the problem of routing and retaining data in an unaltered fashion, which we solve by an additional copy gate, and the problem of learning length-independent attention patterns, which we solve by geometric attention. Our new model, the Neural Data Router (NDR), generalizes to compositions longer than those seen during training on the popular compositional lookup table task in both forward and backward directions. NDR also achieves near perfect performance on simple arithmetic and ListOps tasks in settings that test systematic generalization in terms of computational depth. In general, the gates and the attention maps collectively make the architecture more interpretable than the baselines. Future work will extend this encoder-only architecture to a full sequence-to-sequence model and evaluate it on other standard tasks in systematic generalization requiring generation of variable-length output sequences.

ACKNOWLEDGMENTS

We thank Imanol Schlag and Sjoerd van Steenkiste for helpful discussions and suggestions on an earlier version of the manuscript. This research was partially funded by ERC Advanced grant no: 742870, project AlgoRNN, and by Swiss National Science Foundation grant no: 200021 192356, project NEUSYM. We are thankful for hardware donations from NVIDIA & IBM. The resources used for the project were partially provided by Swiss National Supercomputing Centre (CSCS) project s1023.

A ABLATIONS

Ablation on the number of layers (accuracy on the IID and length-generalization test splits, forward and backward variants):

nlayers | IID Forward | IID Backward | Test Forward | Test Backward
14      | 1.00 ± 0.00 | 1.00 ± 0.00  | 1.00 ± 0.00  | 1.00 ± 0.00
12      | 1.00 ± 0.00 | 1.00 ± 0.00  | 1.00 ± 0.00  | 0.99 ± 0.02
10      | 1.00 ± 0.00 | 1.00 ± 0.00  | 0.75 ± 0.04  | 0.62 ± 0.05
8       | 1.00 ± 0.00 | 1.00 ± 0.00  | 0.23 ± 0.02  | 0.24 ± 0.03
6       | 1.00 ± 0.00 | 0.96 ± 0.03  | 0.22 ± 0.05  | 0.15 ± 0.01
4       | 0.96 ± 0.04 | 0.68 ± 0.11  | 0.14 ± 0.01  | 0.13 ± 0.01

Readout from the first instead of the last column. In our experiments with the Transformer models, the last column was used for the readout of the result. Under this configuration, the readout position depends on the length of the sequence which might increase the difficulty of the problem, in particular for the models using absolute positional embeddings. Table 5 shows the corresponding ablation study. We observe that this choice has only marginal impact on the model performance. As a side note, we also tried the variant where an additional cross-attention layer is used for the readout. Again, the generalization performance was not better. In fact, these results are not surprising since none of these changes fundamentally addresses the problem of length generalization.

Does Adaptive Computation Time (ACT) help? In this work, we determined the number of layers/steps to be used in the model based on heuristics (see Appendix C.1). We could also consider using Adaptive Computation Time (ACT) to dynamically determine the number of steps. Furthermore, ACT introduces a form of gating which creates shortcuts in the credit assignment path between the output and a result of an intermediate layer. This “copying” mechanism resulting from the ACT (i.e.
stop computation at a certain time and copy the result to the output) is fundamentally different from our copy gate (Sec. 2.1). Our copy gate allows Transformer columns to keep the input unchanged until it is their turn to be processed (a crucial property to implement control-flow-like behavior). This behavior cannot be simulated by the ACT. Here we provide some experimental results on models with ACT which confirm that the proposed copy gate is a crucial component for generalization which cannot be replaced by ACT. We note that there are various versions of ACT in the literature, e.g., the variant used by Dehghani et al. (2019) in Universal Transformers is different from the one used by Graves (2016). Here we focus on two variants: one in which we directly apply Graves (2016) to Transformers, and another one used by Dehghani et al. (2019). We start with the description of the former. An extra sigmoidal unit $\hat{p}^{(i,t)}$ is computed for each column i in each timestep t as:

$$\hat{p}^{(i,t)} = \sigma(W_H h^{(i,t)} + b_H) \qquad (11)$$

where $W_H \in \mathbb{R}^{1 \times d}$ and $b_H \in \mathbb{R}$ are trainable parameters. By comparing the cumulative sum of $\hat{p}^{(i,t)}$ over time steps to a threshold value $(1 - \epsilon)$ with a hyper-parameter $\epsilon$ (0.01 in our experiment), we determine the termination step $T^i$ for column i as:

$$T^i = \min\Big\{T_{\max},\ \min\Big\{t' : \sum_{t=1}^{t'} \hat{p}^{(i,t)} \ge 1 - \epsilon\Big\}\Big\} \qquad (12)$$

where $T_{\max}$ is the pre-defined maximum number of steps. The corresponding halting probability $p^{(i,t)}$ is then computed as:

$$p^{(i,t)} = \begin{cases} \hat{p}^{(i,t)} & \text{if } t < T^i \\ R^i & \text{if } t = T^i \end{cases} \qquad (13)$$
$$R^i = 1 - \sum_{t=1}^{T^i - 1} \hat{p}^{(i,t)} \qquad (14)$$

which is used to compute the final output of column i as:

$$o^i = \sum_{t=1}^{T^i} p^{(i,t)} h^{(i,t)} \qquad (15)$$

In Dehghani et al. (2019)'s variant, a different equation is used in lieu of Eq. 15 above, and the computation of the remainder term $R^i$ in Eq. 14 above is not properly handled in the case where Eq. 12 terminates because of the first condition on $T_{\max}$. For further details, we refer the readers to Listing 1 and 2 in Dehghani et al. (2019) and/or our public code. One subtlety introduced by Dehghani et al. (2019) which we note here is that the computation of the final output $o^i$ of column i effectively “halts” after $T^i$ (since $o^i$ only depends on $h^{(i,t)}$ for $0 < t \le T^i$), but column i itself still continues transforming the hidden states $h^{(i,t)}$ for steps $t > T^i$ until all columns reach the termination step, and its updated states can be attended/read by another column j which has not halted yet (i.e. $T^j > T^i$). In this sense, computation is never stopped independently for each column. The mechanism described above instead finds the readout steps for each column (as is used in Eq. 15). We follow this decision in our implementation of both variants. In addition, a new regularizer term, $L_{\mathrm{ACT}} = \alpha \frac{1}{N} \sum_{i=1}^{N} R^i$, is added to the loss function, where N is the length of the input sequence. This makes the network prefer short computations. We ran a hyper-parameter search for $\alpha$ over the following values: 0.001, 0.003, 0.01, 0.03, 0.1. We found $\alpha = 0.03$ to work the best. We conducted experiments on the compositional table lookup task. We first noted that ACT helps training our baseline Transformer models with a maximum step of 14 layers, which was not possible without ACT (our baseline Transformer had only 11 layers for this reason; see Table 7). The shortcut in the credit assignment path introduced by ACT certainly helps training of this 14-layer model.
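For clarity, a simplified per-column sketch of the halting scheme described above (Eqs. 11-15) is given below; the variable names are ours, the default threshold hyper-parameter follows the value quoted above, and the actual implementation in the public repository may differ.

```python
import torch

def act_readout(h: torch.Tensor, w_h: torch.Tensor, b_h: torch.Tensor, eps: float = 0.01):
    """h: (T_max, d) states of one column over all steps. Returns the ACT readout.

    Implements Eqs. 11-15 with 0-based step indices.
    """
    T_max, d = h.shape
    p_hat = torch.sigmoid(h @ w_h + b_h)                 # Eq. 11, shape (T_max,)
    cumsum = torch.cumsum(p_hat, dim=0)
    exceeded = (cumsum >= 1.0 - eps).nonzero()
    T = int(exceeded[0]) if len(exceeded) > 0 else T_max - 1   # Eq. 12
    R = 1.0 - p_hat[:T].sum()                            # Eq. 14: remainder at halting step
    p = p_hat.clone()
    p[T] = R                                             # Eq. 13
    output = (p[:T + 1, None] * h[:T + 1]).sum(dim=0)    # Eq. 15: weighted readout
    return output, T, R
```

In the full model this would be computed for every column, and the regularizer $\alpha \cdot \mathrm{mean}(R^i)$ would be added to the loss to encourage short computations.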
As we noticed that the models with ACT learn slower than those with gating, we increased the number of training steps to 60k steps, which is twice as many as the 30k used for the models without ACT. Table 6 shows the results. We observe that, interestingly, ACT enables generalization for longer lengths in the forward direction of the Transformer with relative positional encoding and the one with geometric attention. However, we were not able to find any configuration that generalizes in the backward case. This demonstrates that the copy gate is effectively a crucial component for generalization which cannot be replaced by ACT. Furthermore, the convergence of models with ACT is significantly slower than that of models with our gating, and they are more unstable and very sensitive to the value of $\alpha$ on the regularization term, even in the successful forward case. Overall, the only benefit of ACT is thus the adaptive depth, as is illustrated in Figure 4, which is orthogonal to our study.

B DETAILS OF ATTENTION WITH COMBINED ABSOLUTE/RELATIVE POSITIONAL ENCODING

The use of copy gates enables Transformers to generalize to longer lengths in the forward presentation order of the CTL task (Sec. 3.1), but that alone was not enough to make the model generalize in the backward order variant of the task. Examining the attention maps reveals that the model uses position-based attention to read out the result instead of content-based attention. In the backward presentation order, the last column of the transformer should focus on the second column, whose relative position changes dynamically with the length of the sequence. We solve this issue by adding an option to choose between absolute and relative positional encodings to the attention head. In what follows, we describe the operation within a single layer/step. This allows us to omit the layer/step-index t for better readability, and thus denote the state of column/position i as $h_i$ instead of $h^{(i,t)}$. We use the relative positional embedding variant of self-attention by Dai et al. (2019). Our attention matrix with the gated absolute/relative positional encodings can be decomposed as follows:

$$r_i = \sigma(h_i W_{ar} + b_{ar}) \qquad (16)$$
$$\hat{A}_{i,j} = \underbrace{h_i^\top W_q^\top W_{k,E}\, h_j}_{(a)} + \underbrace{b_{q,E}^\top W_{k,E}\, h_j}_{(c)} + \Big(\underbrace{h_i^\top W_q^\top W_{k,P}}_{(b)} + \underbrace{b_{q,P}^\top W_{k,P}}_{(d)}\Big)\Big(\underbrace{p_{i-j}\, r_i + p_j (1 - r_i)}_{(e)}\Big) \qquad (17)$$

where the matrix $W_q \in \mathbb{R}^{d_{\mathrm{head}} \times d}$ maps the states to queries, $W_{k,E} \in \mathbb{R}^{d_{\mathrm{head}} \times d}$ maps states to keys, while $W_{k,P} \in \mathbb{R}^{d_{\mathrm{head}} \times d}$ maps positional embeddings to keys. $d_{\mathrm{head}}$ is the size of the key, query and value vectors for each head, set as $d_{\mathrm{head}} = d / n_{\mathrm{heads}}$. $b_{q,E}, b_{q,P} \in \mathbb{R}^{d_{\mathrm{head}}}$ are learned vectors. $p_i \in \mathbb{R}^d$ is the standard sinusoidal embedding for position i (Vaswani et al., 2017). Softmax is applied to the second dimension of $\hat{A}$ to obtain the final attention scores, A. Component (a) corresponds to content-based addressing, (b, e) to content-based positional addressing, (c) represents a global content bias, while (d, e) represent a global position bias. We introduce term (e) for the positional embedding, which can switch between absolute and relative positional encodings using the scalar gate $r_i$ (Eq. 16; parameterized by $W_{ar} \in \mathbb{R}^{d \times 1}$ and $b_{ar} \in \mathbb{R}$), which is a function of the state at target position i.

C IMPLEMENTATION DETAILS

A PyTorch implementation of our models together with the experimental setup is available under https://github.com/robertcsordas/ndr. The performance of all models is reported as mean and standard deviations over 5 different seeds.

C.1 CHOOSING THE NUMBER OF LAYERS

In Sec.
2, we hypothesized that one of the conditions for our model to generalize is to be “sufficiently” deep such that elementary operations are learned in separate layers which would then become composable. In practice, a “sufficient” depth can be determined by the basic units of compositions implicitly defined by the dataset. The depth of the model must be at least as deep as the deepest path in the computation graph defined by these basic operations. This hypothesis was empirically validated in the ablation study presented above (Appendix A). In general, we used the following heuristics to choose the depth of the Transformers: (length of the deepest path in the graph) × (steps per operation) + a few more layers. Determining the number of steps needed by the elementary operation is not straightforward but it can be done empirically. For example, for ListOps, as is shown in Sec. 4, it requires two steps per operation: one step in which the operands attend to the operation, followed by another one where the result is written back to the operation. For other tasks, we found that a single step per operation was enough. Choosing more layers than needed is safe, and it can be used to determine the required number of layers, for example by looking at the gate activity. Finally, “+ a few more layers” are needed because one additional layer should be used to read out the final result, and one or a few more can be needed for communication between columns (e.g., to determine operator precedence). Since parameters are shared across layers, we can optionally train models with a certain number of layers and increase the number of computational steps at test time. This allows us to train models using a depth which is “sufficient” to solve the training set, but increase it at test time to generalize to a test set requiring more computational steps. We did this for the ListOps experiment (Sec. 3.3): the model was trained with 20 layers and tested with 24. Our preliminary experiments confirmed that this practice has no performance penalty, while it speeds up training. C.2 DATASET DETAILS Compositional table lookup. Our implementation uses 8 symbols as input arguments and 9 randomly sampled bijective functions denoted by lower case letters of the English alphabet. All functions are included in the train set in combination with all possible input symbols. The rest of the training set consists of random combinations of functions applied to a random symbol as an argument, up to length 5. The total size of the train set is 53,704 samples. The samples are roughly balanced such that there are similar numbers of samples for each depth. There are different validation sets: an IID set, which matches the distribution of the train set, and a depth validation, which includes samples of lengths 6, 7 and 8. The test set consists of sequences of lengths 9 and 10. Simple arithmetic. The dataset is constructed by sampling random digits (0-9) and operations + (add) and ∗ (multiply). The operations are performed modulo 10. Parentheses surround the arguments of the operations. The depth of the resulting tree is computed, and rejection sampling is used to ensure that the same number of samples from each depth is present in the given split. The maximum length of samples is 50 tokens, sub-operations are sampled with probability 0.2. 100 K samples are used for training, 1 K for both test and validation sets. The train set consists of 0-5 operations, the validation set of 6 and the test set of 7 operations. ListOps. 
Random digits are sampled from the range 0-9. Operations are sampled from the set: sum modulo (SM), which is a sum modulo 10, min (MIN), max (MAX), and median followed by the floor function (MED). The maximum number of arguments for each operation is 5. A sub-operation is sampled with probability 0.3. 1 M samples are used for training, 1 K for test and validation. The train set consists of 0-5 operations, 6 for the validation set, and 7 for the test set. For each sample, we calculate a number which we call dependency depth. To understand it, note that MIN and MAX operations only select one of their operands, and MED selects 1 or 2. In SM, all operands are needed to perform the operation. If we construct a parse tree, prune away the branches which were not selected by any operation, and measure the depth of such a tree, the resulting number is the dependency depth. This ensures that the deeper parts of the tree contribute to the result calculation, preventing shallow heuristics, like ignoring all branches of the tree that are too deep and still getting the correct result with a high chance. We also ensure that the number of samples is the same for all possible dependency depths in each split.

C.3 MODEL DETAILS

We use the AdamW optimizer (Loshchilov & Hutter, 2019) for all of our models. Standard hyperparameters are listed in Tab. 7, 8 and 9. Additionally, models with gating use dropout (Hanson, 1990; Srivastava et al., 2014) with a rate of 0.1 applied to the content-based query and the position-query components for most models, except for non-gated Transformers on ListOps, where this value is 0.05. In the case of geometric attention, since the channels of the directional encoding do not have any redundancy, dropout is applied just to the content-query. In the case of Transformers with the copy gate but without geometric attention, we use tanh instead of LayerNorm in Eq. 2. The Transformer/NDR layer with a copy gate is illustrated in Figure 5. The hyperparameters of the gateless Transformers differ significantly from the gated ones. This is because they were very hard to train to achieve good performance even on the IID set, requiring extensive hyperparameter tuning. One might argue that fewer layers make them less competitive on longer sequences. However, we were unable to train them to perform well even on IID data with comparable sizes. All Transformer variants have a begin (B) and end (E) token included in the sequence. RNNs (LSTM and DNC) have no such tokens. All Transformers are encoders only, and the results are read from the last column (corresponding to the end token). The DNC has 21 memory cells, 4 read heads, and an LSTM controller. It contains recently introduced improvements (Csordás & Schmidhuber, 2019). We use gradient clipping with magnitude 5 (for CTL) or 1 (for simple arithmetic and ListOps) for all of our models. Hyperparameters were obtained by a Bayesian hyperparameter search of Weights & Biases (https://wandb.ai/) over the systematically different (OOD) validation set for the +abs/rel + gate models and were reused for all other gated models. For the non-gated models, we used the +rel variant for tuning. It was not possible to tune the baselines using only the OOD validation set because their performance was too bad on that set. We thus used a mixture of IID and OOD validation sets to tune the hyperparameters for the baselines. Table 10 shows the range of hyperparameters used for tuning. "FF multiplier" is used to calculate d_FF from d_model.
We train all models for a fixed number of n_iters iterations and measure their validation performance every 1000 iterations. For each model, we select the best checkpoint according to the validation performance, and report its test accuracy.

D ADDITIONAL ANALYSIS

D.1 COMPOSITIONAL TABLE LOOKUP

An idealized sequence of computations in a Transformer for an example from the CTL task is shown in Fig. 6. Each column waits for its input from the left side, then performs an update. Finally, the last column copies the result. So far, in the main text, we only had space to show the gate and attention activity of the NDR for a few timesteps. Here we show the corresponding visualization of all steps in Figures 10 and 11, as well as the attention map for the baseline Transformer with relative positional encoding in Figure 7. We also show the Transformer + abs/rel + gate variant in Fig. 8 and Fig. 9. Please directly refer to the captions of the figures for the corresponding analysis. In general, the visualization for our NDR and the abs/rel + gate variant is easily interpretable, unlike that of the baseline Transformer model.

D.2 LISTOPS

Figures 12 and 14 show the attention and gate patterns of our NDR architecture on an example from the ListOps dataset. We highlighted notable attention patterns in Sec. 4. Different heads seem to specialize in different functions. As already mentioned in Sec. 4, head 13 of the NDR architecture, shown in Figure 13, seems to specialize in selecting which arguments belong to which operator. The gating patterns are also very interesting. In the early stages, the deepest parts of the input are updated: [MAX 2 4 0 8 9] and [MED 8 5 8], which are independent branches of the parse tree that can be processed in parallel. In the following steps, the update patterns spread up in the parse tree, updating the operations that have their arguments available. In this task, the input is read from the first column, which is written at a very late stage.
1. What is the issue with transformers that the paper aims to address?
2. What are the two modifications proposed by the authors to improve the performance of transformers?
3. How effective are the proposed modifications in achieving near-perfect accuracy on various benchmarks?
4. What are the strengths and weaknesses of the paper regarding its contributions, explanations, and assumptions?
5. Are there any tasks that may require different settings than the ones considered by the authors?
6. How necessary and practical is the proposed depth in the data dependency graph?
7. Can the gating function result in optimization shortcuts or collapse?
8. Can tasks prefer non-local operations over local ones, and how does the proposed method perform in such cases?
9. How well-supported is the claim that previous results did not achieve perfect accuracy?
10. Should the authors have considered and adapted hyperparameters for SOTA methods for tasks with limited sequence length?
Summary Of The Paper Review
Summary Of The Paper

This paper addresses an issue of transformers: they sometimes fail to find solutions that are easily expressible by attention patterns. The issue is justified to be the same as the problem of learning useful control flow. The authors propose two modifications, namely adding a copy gate functionality and a geometric attention module which facilitates focusing on local useful operations. The resulting method achieves near-perfect accuracy on the considered benchmarks for length generalization, simple arithmetic tasks, and computational depth generalization.

Review

Main strengths: The paper is well written and does a great job in introducing the problem and revealing the flaws of the universal transformer in achieving good performance in the described tasks, as well as the authors' intuition of the properties of a good solution. The main components of the proposed method are explained in sufficient detail to help reproduce the proposed method. The proposed benchmarks and datasets, the empirical approach, and chosen hyperparameters are provided and discussed in detail. The paper is well positioned with regard to the related work.

Main Weaknesses: It is not clear if the considered benchmarks cover all required aspects of task generalization, or whether the generalization is only valid for tasks that are to some extent similar to the considered experiments. The authors should further explain which aspects, if any, are missing and are not addressed in this work. It is not clear if the considered assumptions are always necessary and correct. The authors should address the following questions in the paper, either in the form of justified explanations or, if required, with ablation studies: 6.1) Is there any task which would benefit from or require settings that are not covered by the considered settings described in section 2? 6.2) Regarding point 2 in section 2: What if the data dependency graph were so deep that memory complexity would not practically allow using such a depth? In other words, to what extent is the proposed depth necessary? 6.3) Regarding point 3 in section 2: Could the gating function result in a shortcut/collapse in optimization? (Considering a far more complex task of the kind generally addressed by transformers could reveal such issues.) 6.4) Regarding the final point in section 2: Could a task prefer non-local operations to local ones? Does the performance of the proposed method degrade in that situation? Some previous works, for example on the ListOps task, consider sequences that are orders of magnitude longer than the ones considered in this paper (a couple of examples are [1], [2]). It is not clear whether the claim that previous results did not achieve perfect accuracy is well supported. It seems that, to be fair, the authors should have considered some of the SOTA methods and adapted their hyperparameters for these tasks with limited sequence length before testing how they would perform.

Minor points: In section B.2, the set of values or ranges over which the hyperparameters are searched should also be mentioned. First line of page 14: "sample" -> "sampled".

References:
[1] "Modeling Hierarchical Structures with Continuous Recursive Neural Networks" by Chowdhury, J.R. and Caragea, C. (arXiv:2106.06038v1)
[2] "Nyströmformer: A Nyström-based Algorithm for Approximating Self-Attention" by Xiong et al. (arXiv:2102.03902v3)
ICLR
Title The Neural Data Router: Adaptive Control Flow in Transformers Improves Systematic Generalization Abstract Despite progress across a broad range of applications, Transformers have limited success in systematic generalization. The situation is especially frustrating in the case of algorithmic tasks, where they often fail to find intuitive solutions that route relevant information to the right node/operation at the right time in the grid represented by Transformer columns. To facilitate the learning of useful control flow, we propose two modifications to the Transformer architecture, copy gate and geometric attention. Our novel Neural Data Router (NDR) achieves 100% length generalization accuracy on the classic compositional table lookup task, as well as near-perfect accuracy on the simple arithmetic task and a new variant of ListOps testing for generalization across computational depths. NDR’s attention and gating patterns tend to be interpretable as an intuitive form of neural routing. Our code is public.1 1 INTRODUCTION Neural networks (NNs) may easily learn certain training sets, but typically they do not generalize on systematically different test sets. Examples of systematic generalization (Fodor et al., 1988) include generalization to sequences longer than those seen during training—productivity, and algorithmic combinations of previously learned rules—systematicity. Despite recent efforts (Bahdanau et al., 2019; Korrel et al., 2019; Lake, 2019; Li et al., 2019; Russin et al., 2019; Csordás et al., 2021), systematic generalization generally remains unsolved (Fodor & McLaughlin, 1990; Lake & Baroni, 2018; Liska et al., 2018; Greff et al., 2020; Hupkes et al., 2020). On some datasets, the best performing models are neuro-symbolic hybrids (Chen et al., 2020; Liu et al., 2020) using task-specific symbolic functions. However, their applicability to other datasets remains limited (Furrer et al., 2020; Shaw et al., 2020). A big question is: which type of architectural inductive bias encourages the training process to select “good” solutions which generalize systematically? The popular Transformers (Vaswani et al., 2017) also often fail to generalize on algorithmic tasks (e.g. Liska et al. (2018); Dubois et al. (2020); Chaabouni et al. (2021); Csordás et al. (2021); Ontañón et al. (2021)), even on tasks with intuitive solutions that can be simply expressed in terms of Transformer attention patterns. Given an input sequence of length N and a Transformer encoder of depth T , solving an algorithmic task is often all about routing the relevant information to the right node/operation at the right time in the T -by-N grid represented by Transformer columns (illustrated in Figure 1/Left). Effectively the task is to learn to draw an adaptive control flow on the canvas of Transformer columns. In fact, recent work by Weiss et al. (2021) introduced a programming language called RASP, which is specifically designed to express solutions to sequence processing problems, and which has a direct equivalent to the operations in Transformer encoders. However, it is shown that Transformers learn solutions expressed in RASP only through intermediate supervision of attention patterns, and sometimes, even such supervision fails. Generally speaking, Transformers fail to find easily interpretable and/or symbolic solutions to algorithmic tasks. 
We conversely hypothesize that attention-based NNs that are able to find intuitive solutions (achieving interpretable attention patterns) could improve systematic generalization. 1https://github.com/robertcsordas/ndr Here we point out that regular Transformers lack some basic ingredients for learning such “intuitive” solutions to algorithmic problems. As a remedy, we propose simple architectural modifications to help them learn data routing. As a first step towards validating our model, we focus on the popular length generalization task of compositional table lookup (CTL; Liska et al. (2018); Hupkes et al. (2019); Dubois et al. (2020)), as well as two more complex tasks: a simple arithmetic task and a variant of ListOps (Nangia & Bowman, 2018) designed to test the compositional generalization ability of NNs. Our novel Neural Data Router (NDR) achieves 100% generalization accuracy (never reported before; Dubois et al. (2020)) on the CTL task, and obtains nearly perfect accuracy on both the proposed simple arithmetic and ListOps tasks. We show that the attention and gating patterns of NDR tend to be interpretable as plausible control flows. 2 IMPROVING TRANSFORMERS FOR LEARNING ADAPTIVE CONTROL FLOW We argue that the following components are needed to build Transformers capable of learning adaptive control flow. First, composing known operations in an arbitrary order requires that all operations are available at every computational step. This can be easily achieved by sharing the weights of the layers, as is done in Universal Transformers (Dehghani et al., 2019). Second, the network should be sufficiently deep, at least as deep as the deepest data dependency in the computational graph built from elementary operations (e.g., in the case of a parse tree, this is the depth of the tree). Otherwise, multiple operations must be fused into a single layer and hinder natural and elegant compositions. Third, inputs in some columns should be kept unchanged until it is their turn to be processed. The regular Transformer lacks a mechanism for skipping the whole transformation step by simply copying the input to the next step/layer. We propose a special gating function, copy gate, to implement such a mechanism (Sec. 2.1). Finally, many algorithmic tasks require combining several local computations in the right order. This typically implies that attention should not focus on all possible matches at a given time but only on the closest match. We propose and investigate a new type of attention with a corresponding inductive bias called geometric attention (Sec. 2.2). Using both the geometric attention and copy gate, our model implements a “neural data routing mechanism”, which can adaptively serialize the input problem. We refer to the resulting new Transformer as Neural Data Router (NDR). In the experimental section (Sec. 3), we evaluate this model on three algorithmic tasks requiring length generalization and demonstrate its effectiveness. 2.1 COPY GATE: LEARNING TO SKIP OPERATIONS (VERTICAL FLOW) Each layer of the regular Transformer consists of one self-attention and one feedforward block. The input to each of these blocks is directly connected to the corresponding output via a residual connection (Srivastava et al., 2015; He et al., 2016). However, such a connection does not allow for skipping the transformation of the entire layer and simply passing the unchanged input to the next layer. Here we propose to add an explicit gate, which we call copy gate, to facilitate such a behavior. 
We consider a T -layer Transformer encoder and an input sequence of length N . Since each layer corresponds to one computational step, we often refer to a layer as a step t. We denote the Transformer state of column i in layer t as h(i,t) = Ht,i ∈ Rd where d is the state size, and Ht ∈ RN×d denotes the states of all N columns in layer t. In the copy gate-augmented Transformer (Figure 5 in the appendix), each column i in layer (t+ 1) processes the input Ht similarly to regular Transformers: a(i,t+1) = LayerNorm(MultiHeadAttention(h(i,t),Ht,Ht) + h(i,t)) (1) u(i,t+1) = LayerNorm(FFNdata(a(i,t+1))) (2) using the standard multi-head attention operation (Vaswani et al., 2017) MultiHeadAttention with a query obtained from h(i,t) and keys/values from Ht, but the output is gated (using g(i,t+1) ∈ Rd) as: g(i,t+1) = σ(FFNgate(a(i,t+1))) (3) h(i,t+1) = g(i,t+1) u(i,t+1) + (1− g(i,t+1)) h(i,t) (4) We use the basic two-layer feedforward block (Vaswani et al., 2017) for both FFNdata and FFNgate which transforms input x ∈ Rd to: FFN(x) = W2 max(W1x+ b1, 0) + b2 (5) but with separate parameters and different dimensionalities: for FFNdata W data1 ∈ RdFF×d, W data2 ∈ Rd×dFF , while for FFNgate W gate1 ,W gate 2 ∈ Rd×d, with biases bdata1 ∈ RdFF and bdata2 , b gate 1 , b gate 2 ∈ Rd. When the gate is closed i.e. g(i,t+1) = 0 in Eq. 4, the entire transformation is skipped and the input is copied over to the next layer h(i,t+1) = h(i,t). Crucially, we parameterize the gate (Eq. 3) as a function of the output of the self-attention (Eq. 1), such that the decision to copy or transform the input for each column depends on the states of all columns. This is a crucial difference compared to previously proposed gatings in Transformers, which are solely motivated by training stability (Parisotto et al., 2020) or by a common practice from convolution-based models (Chaabouni et al., 2021). None of the previous approaches can implement the behavior of our copy gate (see Sec. 6 on related work). The bias of the gate bgate2 is initialized to −3 (Hochreiter & Schmidhuber, 1997). This ensures that no update happens initially to create a better gradient flow between layers. It also encourages the model to skip layers unless they have an important contribution in the corresponding step. 2.2 GEOMETRIC ATTENTION: LEARNING TO ATTEND TO THE CLOSEST MATCH (HORIZONTAL FLOW) We propose geometric attention designed to attend to the closest matching element. Like in regular self-attention, given an input sequence [x(1),x(2), ...,x(N)] with x(i) ∈ Rdin , each input is projected to key k(i) ∈ Rdkey , value v(i) ∈ Rdvalue , query q(i) ∈ Rdkey vectors, and the dot product is computed for each key/query combination. In our geometric attention, the dot product is followed by a sigmoid function to obtain a score between 0 and 1: Pi,j = σ(k (j)>q(i)) (6) which will be treated as a probability of the key at (source) position j matching the query at (target) position i. These probabilities are finally converted to the attention scores Ai,j as follows: Ai,j = Pi,j ∏ k∈Si,j (1− Pi,k) (7) where Si,j denotes the set of all (source) indices which are closer to i than j is to i, and when two indices have the same distance to i, we consider the one which is to the right of i (i.e., greater than i) to be closer, i.e., Si,j = { k ∈ {1, ..., N} \ {i, j} : |i− k| < |i− j|, if i < j k ∈ {1, ..., N} \ {i, j} : |i− k| ≤ |i− j|, if j < i (8) In addition, we explicitly zero out the diagonal by setting Ai,i = 0 for all i = 1, ..., N . 
The ordering of source indices is illustrated in Figure 1/Right. The resulting scores $A_{i,j}$ are the attention scores used to compute the weighted averages of the value vectors. By using the terms $(1 - P_{i,k})$ in Eq. 7, when there is a match, it downscales any other more distant matches. Two recent works (Brooks et al., 2021; Banino et al., 2021) use such a parameterized geometric distribution in the form of Eq. 7 (see Sec. 6 on related work). The resulting attention function has a complexity of $O(N^2)$, similar to the regular self-attention used in Transformers (Vaswani et al., 2017). Eq. 7 can be implemented in a numerically stable way in log space. The products can then be calculated using cumulative sums, subtracting the elements for the correct indices in each position. Directional encoding. In practice, we augment Eq. 6 with an additional directional encoding. In fact, the only positional information available in the geometric attention presented above is the ordering used to define the product in Eqs. 7-8. In practice, we found it crucial to augment the score computation of Eq. 6 with additional directional information, encoded as a scalar $D_{i,j} \in \mathbb{R}$ for each target/source position pair (i, j):

$D_{i,j} = \begin{cases} \mathbf{W}_{LR} \mathbf{h}^{(i)} + b_{LR} & \text{if } i \le j \\ \mathbf{W}_{RL} \mathbf{h}^{(i)} + b_{RL} & \text{if } i > j \end{cases}$   (9)

where $\mathbf{h}^{(i)} \in \mathbb{R}^{d}$ denotes the input/state at position i and $\mathbf{W}_{LR}, \mathbf{W}_{RL} \in \mathbb{R}^{1 \times d}$, $b_{LR}, b_{RL} \in \mathbb{R}$ are trainable parameters. This directional information is integrated into the score computation of Eq. 6 as follows (akin to how Dai et al. (2019) introduce the relative positional encoding (Schmidhuber, 1992) as an extra term in the computation of attention scores):

$P_{i,j} = \sigma\big(\alpha (\mathbf{W}_q \mathbf{h}^{(i)} + \mathbf{b}_q)^{\top} \mathbf{W}_{k,E} \mathbf{h}^{(j)} + \beta D_{i,j} + \gamma\big)$   (10)

where the matrix $\mathbf{W}_q \in \mathbb{R}^{d_{\mathrm{head}} \times d}$ maps the states to queries, $\mathbf{b}_q \in \mathbb{R}^{d_{\mathrm{head}}}$ is a bias for queries, $\mathbf{W}_{k,E} \in \mathbb{R}^{d_{\mathrm{head}} \times d}$ maps states to keys (we note that $d_{\mathrm{head}}$ is typically the size of the key, query and value vectors for each head, $d_{\mathrm{head}} = d / n_{\mathrm{heads}}$), and $\alpha, \beta, \gamma \in \mathbb{R}$ are learned scaling coefficients and bias, initialized to $\alpha = 1/\sqrt{d_{\mathrm{head}}}$, $\beta = 1$, $\gamma = 0$. Using this additional directional information, each query (position i) can potentially learn to restrict its attention to either the left or right side. 3 EXPERIMENTS We evaluate the proposed methods on three tasks: the compositional table lookup (Liska et al., 2018; Hupkes et al., 2019), a custom variant of ListOps (Nangia & Bowman, 2018), and a simple arithmetic task which we propose. In all cases, the task is designed to test the compositional generalization ability of NNs: the model has to learn to apply operations seen during training in a longer/deeper compositional way (productivity). Further experimental details for each task can be found in Appendix C. 3.1 COMPOSITIONAL TABLE LOOKUP Task. The compositional table lookup task (Liska et al., 2018; Hupkes et al., 2019; Dubois et al., 2020) is constructed based on a set of symbols and unary functions defined over these symbols. Each example in the task is defined by one input symbol and a list of functions to be applied sequentially, i.e., the first function is applied to the input symbol and the resulting output becomes the input to the second function, and so forth. There are eight possible symbols. Each symbol is traditionally represented by a 3-bit bitstring (Liska et al., 2018). However, in practice, they are simply processed as one token (Dubois et al., 2020). The functions are bijective and randomly generated. Each function is represented by a letter.
An example input is ‘101 d a b’, which corresponds to the expression b(a(d(101))); the model has to predict the correct output symbol. We note that there exists a sequence-to-sequence variant of this task (Dubois et al., 2020) where the model has to predict all intermediate steps (thus trained with intermediate supervision). We directly predict the final output. An ideal model should be able to solve this task independently of the presentation order, that is, it should not matter whether the task is encoded as ‘101 d a b’ or ‘b a d 101’. We thus study both forward (former) and backward (latter) variants of the task. To evaluate systematic generalization, the train/valid/test sets reflect different numbers of compositions: samples with 1-5/6-8/9-10 operations, respectively. To the best of our knowledge, no previous work has reported perfect accuracy on this task with an NN. We refer the readers to Sec. 6 for further details on the previous work. Results. We consider five different baselines: an LSTM (Hochreiter & Schmidhuber, 1997), a bidirectional LSTM (Schuster & Paliwal, 1997), the DNC (Graves et al., 2016; Csordás & Schmidhuber, 2019), Universal Transformers (Vaswani et al., 2017; Dehghani et al., 2019), and their relative position variants (Csordás et al., 2021). For Transformers, the prediction is based on the last column in the final layer (we conduct an ablation study on this choice in Appendix A). The hyper-parameters used for each model can be found in Table 7 in the appendix. We also provide an ablation study on the number of layers needed for generalization in Appendix A, which supports our claim on the necessity for a “sufficiently” deep architecture. The main results on this task are shown in Table 1. The LSTM and DNC perform well in the forward variant, achieving perfect generalization for longer sequences, but fail on the backward variant. This is not surprising since in the forward case, input symbols are presented in the “right” processing order to the LSTM. As expected, the bidirectional LSTM performs well in both presentation orders, since one of its processing directions is always aligned with the order of computation. However, for an arbitrary task, the order of processing is not given. For example, for ListOps (Sec. 3.3), the processing should start from the deepest point in the parse tree, which is probably somewhere in the middle of the sequence. The experiments on other tasks (Sec. 3.2 and 3.3) requiring arbitrary processing orders show that bidirectional LSTMs do not generalize well in such tasks. This is not satisfactory since our goal is to create a generic architecture which can solve arbitrary problems with an arbitrary underlying input processing order. While the Transformer seems to be a good candidate for learning problem-dependent processing orders, the baseline Transformer variants fail to generalize in this task in both directions. By introducing the copy gate (Sec. 2.1), the relative Transformer can solve the forward task, but not the backward one. Our analysis showed that the network learns to attend to the last operation based on the relative position information. Since the result is read from the last column, this position changes with the sequence length. The model thus fails to generalize to such arbitrary offsets. To address this issue, we introduce a simple mechanism to let the model choose between absolute and relative positional encodings at each position (see Appendix B).
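Concretely, the mechanism (detailed in Appendix B, Eqs. 16-17) computes a scalar gate from the state at the target position and uses it to interpolate between a relative and an absolute positional embedding inside the attention score. A minimal sketch, with our own variable names:

```python
import torch

def gated_positional_term(h_i, p_rel, p_abs, w_ar, b_ar):
    """Sketch of the abs/rel gate of Appendix B (Eqs. 16-17), illustrative only.

    h_i:   state at target position i, shape (d,)
    p_rel: relative sinusoidal embedding p_{i-j}, shape (d,)
    p_abs: absolute sinusoidal embedding p_j, shape (d,)
    w_ar:  gate weights, shape (d,); b_ar: scalar gate bias
    """
    r = torch.sigmoid(h_i @ w_ar + b_ar)      # Eq. 16: scalar gate in [0, 1]
    return r * p_rel + (1.0 - r) * p_abs      # term (e) of Eq. 17
```

When r is close to 1 the positional term behaves like a relative encoding; when it is close to 0, like an absolute one.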
The resulting model effectively manages to use the absolute position for the prediction and perform well in both directions. However, such a combination of absolute/relative positional encoding might be an overly specific bias. A more generic solution, geometric attention (Sec. 2.2), also achieved perfect generalization and was found easier to train. We present the corresponding visualization of our model in Sec. 4. 3.2 SIMPLE ARITHMETIC In order to validate the success of the proposed model on a task that involves more complex data flows and operations, we propose the simple arithmetic task. Task. The task is to execute an arithmetic expression consisting of nested modulo 10 additions and multiplications. This requires the model to process tree-structured data flows, which is presumably more difficult than the sequential processing required for the CTL task. Each operation is surrounded by brackets, such that the boundaries of operations are easy to determine. For example ‘((4*7)+2)’ should evaluate to ‘0’ (30 modulo 10). The expressions are generated randomly. The tree depth is up to 5 for the training set, 6 for the validation set, and 7-8 for the test set. The depth is measured as the number of operations, ignoring the leaves, so the example above has a depth of 2. The sequence length is limited to at most 50 tokens. Results. Table 2 shows the results. All considered models perform well on the IID validation data, but none except the NDR performs well on the generalization test set, which achieves near-perfect accuracy of 98%. We also note that the NDR learns very quickly: while all other models require about 200 K steps to converge, the NDR achieves near-perfect accuracy after 50 K steps of training. 3.3 LISTOPS We also evaluate our model on a variant of the ListOps task (Nangia & Bowman, 2018) which is a popular task commonly used to evaluate parsing abilities of NNs (Havrylov et al., 2019; Shen et al., 2019; Xiong et al., 2021; Tay et al., 2021; Irie et al., 2021). Some special architectures such as Chowdhury & Caragea (2021) can almost perfectly generalize to longer sequences on this task. However, as far as we know, no Transformer variant has been reported to be fully successful. Task. The task consists of executing nested list operations written in prefix notation. All operations have a list of arguments that can be either a digit (from 0 to 9) or recursively another operation with its own list of arguments. The operations are min, max, median and sum. The sum is modulo 10, and the median is followed by the floor function such that the output of any operation lies between 0 and 9. For example: [MED 4 8 5 [MAX 8 4 9 ] ] should return 6. There are two well-known variants: the original one by Nangia & Bowman (2018) and the “Long Range Arena” variant by Tay et al. (2021) which have different maximum numbers of arguments in each function and maximum sequence lengths. In both variants, there is no strict control of the depth of data samples: there is simply a certain pre-defined probability that each argument in the list is expanded into another list (which may increase the tree depth). This is not suitable for evaluating systematic generalization in terms of compositionality (over the problem depth). We propose instead to generate clean train, valid, and test splits with disjoint depths: up to depth 5 for training, depth 6 for validation and depths 7 and 8 for test. 
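To make the expression format concrete, here is a small recursive evaluator for such prefix-notation expressions. This is our own illustrative sketch, not the data-generation code used for the experiments; the same recursive pattern also evaluates the bracketed expressions of the simple arithmetic task in Sec. 3.2.

```python
import math

def list_ops_eval(tokens):
    """Evaluate a prefix-notation ListOps expression given as a token list,
    e.g. ['[MED', '4', '8', '5', '[MAX', '8', '4', '9', ']', ']'] -> 6."""
    def med(xs):
        s = sorted(xs)
        n = len(s)
        mid = s[n // 2] if n % 2 else (s[n // 2 - 1] + s[n // 2]) / 2
        return math.floor(mid)               # median followed by floor

    ops = {"SM": lambda xs: sum(xs) % 10, "MIN": min, "MAX": max, "MED": med}

    def parse(pos):
        token = tokens[pos]
        if token.startswith("["):             # start of an operation, e.g. '[MAX'
            op, args, pos = ops[token[1:]], [], pos + 1
            while tokens[pos] != "]":
                value, pos = parse(pos)
                args.append(value)
            return op(args), pos + 1           # skip the closing ']'
        return int(token), pos + 1             # a digit argument

    result, _ = parse(0)
    return result

assert list_ops_eval("[MED 4 8 5 [MAX 8 4 9 ] ]".split()) == 6
```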
Importantly, we make sure that a depth-K sample effectively requires computation until depth-K (otherwise min, max, and med operations could potentially find the output without executing all of its arguments). By dissociating the splits by the depth, we can clearly identify models which fail to generalize compositionally. Apart from the depth specifications, all train/valid/test sets share the same settings as follows: the maximum sequence length is 50 (tokens), the probability of recursively sampling another function inside a list is 30% at each position, and the maximum number of arguments for a function is 5. The train set consists of 1M, the valid and test sets of 1K sequences. Results. Table 3 shows the results. Like on the other tasks, the baseline LSTM and Transformers do not generalize well on the test set consisting of deeper problems, while they achieve a near-perfect accuracy on IID data. In contrast, our model achieves near-perfect generalization. 4 ANALYSIS In this section, we provide some visualizations of attention and gating patterns of the NDR and the corresponding analyses. For more visualizations, we refer the readers to Appendix D. Compositional Table Lookup. Figure 2 shows the gating and attention patterns of the NDR model for an example of the backward presentation task. As shown in Fig. 2/Bottom, the gates of different columns open sequentially one after another when the input is available for them. Fig. 2/Top shows the corresponding attention maps. Each column attends to the neighbouring one, waiting for its computation to be finished. The behavior of the last column is different: it always attends to the second position of the sequence, which corresponds to the last operation to be performed. ListOps. We can also identify how the NDR processes the data in ListOps. Different attention heads play different roles. We highlight the core observations in Figure 3. The input for this example is: [SM [MED [MIN 1 7 4 [MAX 2 4 0 8 9 ] ] 7 ] 5 [MED 8 5 8 ] 0 7 ]. First of all, we find that there is a head (head 13 in Figure 3, first row) which seems to be responsible for connecting operators and their arguments: the operands/arguments of an operation attend to the operator. In step 0 (t = 0 in the figure), we can recognize that the operations at the deepest level, namely MAX and the second MED have all the arguments ready (as is shown by vertical lines on the columns corresponding to MAX and MED). The model indeed identifies that these two operations are ready to be executed and that they can be processed in parallel (these arguments-to-operation attention patterns remain for a few steps). We note that at this stage, the last argument of MIN is not ready yet ([MIN 1 7 4 [MAX 2 4 0 8 9 ] ]). We can see that only arguments which are already ready (1 7 4) attend to the operator (see the column of MIN). In step 1 (t = 1, 2nd row), we can see that head 5 copies the expected result of MAX, 9 to the column of the operator (we note that this only requires one step as 9 is always the result of MAX when it is one of the arguments of MAX). Similarly in step 2, head 7 (2nd row) seems to copy the result of the second MED, 8 to the operator column. In step 3 (t = 3, 1st row), we recognize that the result of MAX is marked as an argument for MIN in head 13 which is responsible for communication between operators and their arguments. 
This is shown by the new attention which appears at t = 3 in head 13 from the source position MAX to the target position MIN (a pattern which is not visible at t = 2). In head 3, t = 6 (2nd row), the expected result of MIN, which is 1, is copied to the operator, similarly to the patterns we observed above for MAX and MED. In head 13, t = 6 (1st row), all arguments for the first MED are now also recognized (the result of MIN which is 1, and 7). Finally in t = 7 (2nd row), two heads, head 3 and head 5 seem to copy/gather two inputs needed to compute the corresponding median, 1 and 7, and store them in the column of the operator MED. A complete visualization of further steps can be found in Appendix D.2. We noticed that some of the heads do not seem to play a key role; we focused on interpreting those which seem to participate in the main computation. For ListOps, we also partially find the attention patterns described above in the baseline Transformer with relative positional encoding, at least on some inspected examples, which also explains its rather high accuracy. 5 DISCUSSION Learning adaptive serialization. The NDR architecture can be understood as performing adaptive serialization of the problem. A key requirement for reusable computation is decomposing the problem into reusable building blocks, typically applied in sequential steps. The granularity of the decomposition determines the degree of reusability: fusing operations in a single step makes the processing faster (fewer steps), but also more specialized. Learning the most granular solutions is thus preferable for generalization. At the same time, not all processing should happen serially: branches of the computational graph that do not have common data dependencies can be processed independently in parallel, which we empirically observe in our NDR in the ListOps example (Sec. 4). This enables the architecture to get away with a number of computational steps reflecting the depth of the computational graph rather than the length of the input. Bottom up approach for improving model architectures. Transformers have seen tremendous successes across various application domains (Devlin et al., 2019; Brown et al., 2020; Dosovitskiy et al., 2021). Impressive results have been reported when they are scaled up with a large amount of data (Brown et al., 2020). On the other hand, simple tasks like those highlighted in the present work demonstrate that the Transformer architecture still struggles with basic reasoning. Particularly in algorithmic tasks, it is often the case that a sub-optimal choice of architecture/optimization method makes the model fall back to simple memorization. We argue that it is crucial to look at isolated problems which test specific generalization capability. This calls for a bottom-up approach: building on toy tasks that focus on individual aspects of generalization and using them for improving models. 6 RELATED WORK Gating inside Transformers. Several prior works have proposed to use some sort of gating within Transformer architectures (Parisotto et al., 2020; Chaabouni et al., 2021). Our proposed copy gate is different from those as it satisfies two important properties. First, our copy gate allows the model to skip the entire Transformer layer (i.e., both the self-attention and the feedforward blocks) when the gate is closed. Second, the gate function is conditioned on the attention output such that the decision of opening or closing depends on information from all columns. 
While multiple gating variants have been proposed by Parisotto et al. (2020) to stabilize Transformers for reinforcement learning, none of them can produce this behavior. Empirically, we also tried out a few other gating variants which do not satisfy the two properties above; we found them not to improve over regular Transformers in our preliminary experiments on compositional table lookup. Recent work by Chaabouni et al. (2021) also makes use of “gating” in Transformers through a gated linear unit (GLU) activation function commonly used in convolutional NNs (Dauphin et al., 2017). Transformer models with such an activation function were reported to outperform RNN baselines on a systematic generalization task (Dessı̀ & Baroni, 2019). Unlike our copy gate or Parisotto et al. (2020)’s gating, such a gating activation does not have the “residual” term (i.e. a closed gate zeros out the input) which allows the model to skip a transformation. In a more general context, benefits of the GLU activation in Transformers vary across tasks (Irie et al., 2019; Shazeer, 2020). In language modeling, no improvement is typically obtained by using the standard highway gate instead of the residual connection in Transformers (Irie, 2020), while it yields improvements when combined with convolutional layers (Kim & Rush, 2016). Parameterized geometric distributions. Two recent works (Brooks et al., 2021; Banino et al., 2021) have used a form of parameterized geometric distribution (PGD; in the form of Eq. 7). Brooks et al. (2021) have used such a distribution to parameterize the movement of a pointer on a sequence of instructions. Banino et al. (2021) have used it to implement adaptive computation time (Schmidhuber, 2012; Graves, 2016). We use the PGD to obtain a generic attention mechanism as a replacement of the standard self-attention used in Transformers (Vaswani et al., 2017). Compositional table lookup. CTL task was proposed for evaluating the compositional ability of NNs (Liska et al., 2018). Previous works evaluated RNNs, RNNs with attention, and Transformers on this task with limited success (Hupkes et al., 2019; Dubois et al., 2020). Dubois et al. (2020) have proposed a special attention mechanism to augment the recurrent architecture. While they obtained good performance for the forward presentation order, the proposed model failed in the backward one. In contrast, two of our approaches (Sec. 3.1) achieve 100% generalization accuracy for both orders. Positional encodings. Many previous works have focused on improving positional encoding (Schmidhuber, 1992; Vaswani et al., 2017) for self-attention. Most notably, the relative positional encoding (Schmidhuber, 1992; Shaw et al., 2018; Dai et al., 2019) was found useful for improving systematic generalization of Transformers (Csordás et al., 2021). Here we also present two new approaches related to positional encoding. One is the gated combination of absolute and relative positional encoding (Sec. 3.1; details in Appendix B). We show that absolute positional encoding can complement relative positional encoding. The former enables the model to always attend to a specific position, as is needed for the CTL task in the last step, while the gating allows it to use relative positional encoding for other positions/steps. Second, we introduce directional encoding to augment geometric attention. 
Unlike positional encoding which can overfit to a range of positions seen during training, the direction information is found to be robust and to be a crucial augmentation of the geometric attention. 7 CONCLUSION We proposed a new view on the internal operations of Transformer encoders as a dynamic dataflow architecture between Transformer columns. This overcomes two shortcomings of traditional Transformers: the problem of routing and retaining data in an unaltered fashion, which we solve by an additional copy gate, and the problem of learning length-independent attention patterns, which we solve by geometric attention. Our new model, the Neural Data Router (NDR), generalizes to compositions longer than those seen during training on the popular compositional lookup table task in both forward and backward directions. NDR also achieves near perfect performance on simple arithmetic and ListOps tasks in settings that test systematic generalization in terms of computational depth. In general, the gates and the attention maps collectively make the architecture more interpretable than the baselines. Future work will extend this encoder-only architecture to a full sequence-to-sequence model and evaluate it on other standard tasks in systematic generalization requiring generation of variable-length output sequences. ACKNOWLEDGMENTS We thank Imanol Schlag and Sjoerd van Steenkiste for helpful discussions and suggestions on an earlier version of the manuscript. This research was partially funded by ERC Advanced grant no: 742870, project AlgoRNN, and by Swiss National Science Foundation grant no: 200021 192356, project NEUSYM. We are thankful for hardware donations from NVIDIA & IBM. The resources used for the project were partially provided by Swiss National Supercomputing Centre (CSCS) project s1023. A ABLATIONS

nlayers | IID Forward | IID Backward | Test Forward | Test Backward
14 | 1.00 ± 0.00 | 1.00 ± 0.00 | 1.00 ± 0.00 | 1.00 ± 0.00
12 | 1.00 ± 0.00 | 1.00 ± 0.00 | 1.00 ± 0.00 | 0.99 ± 0.02
10 | 1.00 ± 0.00 | 1.00 ± 0.00 | 0.75 ± 0.04 | 0.62 ± 0.05
8  | 1.00 ± 0.00 | 1.00 ± 0.00 | 0.23 ± 0.02 | 0.24 ± 0.03
6  | 1.00 ± 0.00 | 0.96 ± 0.03 | 0.22 ± 0.05 | 0.15 ± 0.01
4  | 0.96 ± 0.04 | 0.68 ± 0.11 | 0.14 ± 0.01 | 0.13 ± 0.01

Readout from the first instead of the last column. In our experiments with the Transformer models, the last column was used for the readout of the result. Under this configuration, the readout position depends on the length of the sequence which might increase the difficulty of the problem, in particular for the models using absolute positional embeddings. Table 5 shows the corresponding ablation study. We observe that this choice has only marginal impact on the model performance. As a side note, we also tried the variant where an additional cross-attention layer is used for the readout. Again, the generalization performance was not better. In fact, these results are not surprising since none of these changes fundamentally addresses the problem of length generalization. Does Adaptive Computation Time (ACT) help? In this work, we determined the number of layers/steps to be used in the model based on heuristics (see Appendix C.1). We could also consider using Adaptive Computation Time (ACT) to dynamically determine the number of steps. Furthermore, ACT introduces a form of gating which creates shortcuts in the credit assignment path between the output and a result of an intermediate layer. This “copying” mechanism resulting from the ACT (i.e.
stop computation at a certain time and copy the result to the output) is fundamentally different from our copy gate (Sec. 2.1). Our copy gate allows Transformer columns to keep the input unchanged until it’s their turn to be processed (a crucial property to implement control flow like behavior). This behavior can not be simulated by the ACT. Here we provide some experimental results on models with ACT which confirm that the proposed copy gate is a crucial component for generalization which can not be replaced by ACT. We note that there are various versions of ACT in the literature, e.g., the variant used by Dehghani et al. (2019) in Universal Transformers is different from the one used by Graves (2016). Here we focus on two variants: one in which we directly apply Graves (2016) to Transformers, and another one used by Dehghani et al. (2019). We start with the description of the former. An extra sigmoidal unit $\hat{p}^{(i,t)}$ is computed for each column i in each timestep t as:

$\hat{p}^{(i,t)} = \sigma(\mathbf{W}_H \mathbf{h}^{(i,t)} + b_H)$   (11)

where $\mathbf{W}_H \in \mathbb{R}^{1 \times d}$ and $b_H \in \mathbb{R}$ are trainable parameters. By comparing the cumulative sum of $\hat{p}^{(i,t)}$ over time steps to a certain threshold value $(1 - \epsilon)$ with a hyper-parameter $\epsilon$ (0.01 in our experiment), we determine the termination step $T^i$ for column i as:

$T^i = \min\{T_{\max}, \min\{t' : \sum_{t=1}^{t'} \hat{p}^{(i,t)} \ge 1 - \epsilon\}\}$   (12)

where $T_{\max}$ is the pre-defined maximum number of steps. The corresponding halting probability $p^{(i,t)}$ is then computed as:

$p^{(i,t)} = \begin{cases} \hat{p}^{(i,t)} & \text{if } t < T^i \\ R^i & \text{if } t = T^i \end{cases}$   (13)

$R^i = 1 - \sum_{t=1}^{T^i - 1} \hat{p}^{(i,t)}$   (14)

which is used to compute the final output of column i as:

$\mathbf{o}^i = \sum_{t=1}^{T^i} p^{(i,t)} \mathbf{h}^{(i,t)}$   (15)

In Dehghani et al. (2019)'s variant, a different equation is used in lieu of Eq. 15 above, and the computation of the remainder term $R^i$ in Eq. 14 above is not properly handled in the case where Eq. 12 terminates because of the first condition on $T_{\max}$. For further details, we refer the readers to Listing 1 and 2 in Dehghani et al. (2019) and/or our public code. One subtlety introduced by Dehghani et al. (2019) which we note here is that the computation of the final output $\mathbf{o}^i$ of column i effectively “halts” after $T^i$ (since $\mathbf{o}^i$ only depends on $\mathbf{h}^{(i,t)}$ for $0 < t \le T^i$), but column i itself still continues transforming the hidden states $\mathbf{h}^{(i,t)}$ for steps $t > T^i$ until all columns reach the termination step, and its updated states can be attended/read by another column j which has not halted yet (i.e. $T^j > T^i$). In this sense, computation is never stopped independently for each column. The mechanism described above instead finds the readout steps for each column (as is used in Eq. 15). We follow this decision in our implementation of both variants. In addition, a new regularizer term, $L_{\mathrm{ACT}} = \alpha \frac{1}{N} \sum_{i=1}^{N} R^i$, is added to the loss function, where N is the length of the input sequence. This makes the network prefer short computations. We ran a hyper-parameter search for α from the following values: 0.001, 0.003, 0.01, 0.03, 0.1. We found α = 0.03 to work the best. We conducted experiments on the compositional table lookup task. We first noted that ACT helps training our baseline Transformer models with a maximum step of 14 layers which was not possible without ACT (our baseline Transformer had only 11 layers for this reason; see Table 7). The shortcut in the credit assignment path introduced by ACT certainly helps training of this 14-layer model.
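For concreteness, the per-column halting and readout computation of Eqs. 11-15 can be sketched as follows. This is a simplified, loop-based illustration with our own variable names, not the actual batched implementation.

```python
import torch

def act_readout(p_hat: torch.Tensor, h: torch.Tensor, eps: float = 0.01):
    """Sketch of the column-wise ACT readout of Eqs. 11-15.

    p_hat: (T_max, N) halting scores, i.e. sigmoid(W_H h + b_H) per step and column.
    h:     (T_max, N, d) column states at every step.
    Returns o of shape (N, d), the per-column weighted readout.
    """
    t_max, n, d = h.shape
    o = torch.zeros(n, d)
    for i in range(n):                            # one column at a time, for clarity
        cum, t_halt = 0.0, t_max                  # Eq. 12: first step with cumsum >= 1 - eps
        for t in range(t_max):
            cum += float(p_hat[t, i])
            if cum >= 1.0 - eps:
                t_halt = t + 1
                break
        remainder = 1.0 - float(p_hat[:t_halt - 1, i].sum())     # Eq. 14
        for t in range(t_halt):                                  # Eqs. 13 and 15
            weight = float(p_hat[t, i]) if t < t_halt - 1 else remainder
            o[i] += weight * h[t, i]
    return o
```

Note that, as discussed above, this only determines the readout step of each column; the columns themselves keep being updated until all of them have reached their termination step.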
As we noticed that the models with ACT learn slower than those with gating, we increased the number of training steps to 60k steps, which is twice as many as the 30k used for the models without ACT. Table 6 shows the results. We observe that, interestingly, ACT enables generalization for longer lengths in the forward direction of the Transformer with relative positional encoding and the one with geometric attention. However, we were not able to find any configuration that generalizes in the backward case. This demonstrates that the copy gate is effectively a crucial component for generalization which can not be replaced by ACT. Furthermore, the convergence of models with ACT is significantly slower than those of models with our gating, and they are more unstable and very sensitive to the value of α on the regularization term, even in the successful forward case. Overall, the only benefit of ACT is thus the adaptive depth, as is illustrated in Figure 4, which is orthogonal to our study. B DETAILS OF ATTENTION WITH COMBINED ABSOLUTE/RELATIVE POSITIONAL ENCODING The use of copy gates enables Transformers to generalize to longer lengths in the forward presentation order of the CTL task (Sec. 3.1), but that alone was not enough to make the model generalize in the backward order variant of the task. Examining the attention maps reveals that the model uses position-based attention to read out the result instead of content-based attention. In the backward presentation order, the last column of the transformer should focus on the second column, whose relative position changes dynamically with the length of the sequence. We solve this issue by adding an option to choose between absolute and relative positional encodings to the attention head. In what follows, we describe the operation within a single layer/step. This allows us to omit the layer/step-index t for better readability, and thus denote the state of column/position i as $\mathbf{h}_i$ instead of $\mathbf{h}^{(i,t)}$. We use the relative positional embedding variant of self-attention by Dai et al. (2019). Our attention matrix with the gated absolute/relative positional encodings can be decomposed as follows:

$r_i = \sigma(\mathbf{h}_i \mathbf{W}_{ar} + b_{ar})$   (16)

$\hat{A}_{i,j} = \underbrace{\mathbf{h}_i^{\top} \mathbf{W}_q^{\top} \mathbf{W}_{k,E} \mathbf{h}_j}_{(a)} + \underbrace{\mathbf{b}_{q,E}^{\top} \mathbf{W}_{k,E} \mathbf{h}_j}_{(c)} + \Big( \underbrace{\mathbf{h}_i^{\top} \mathbf{W}_q^{\top} \mathbf{W}_{k,P}}_{(b)} + \underbrace{\mathbf{b}_{q,P}^{\top} \mathbf{W}_{k,P}}_{(d)} \Big) \Big( \underbrace{\mathbf{p}_{i-j} r_i + \mathbf{p}_j (1 - r_i)}_{(e)} \Big)$   (17)

where the matrix $\mathbf{W}_q \in \mathbb{R}^{d_{\mathrm{head}} \times d}$ maps the states to queries, $\mathbf{W}_{k,E} \in \mathbb{R}^{d_{\mathrm{head}} \times d}$ maps states to keys, while $\mathbf{W}_{k,P} \in \mathbb{R}^{d_{\mathrm{head}} \times d}$ maps positional embeddings to keys. $d_{\mathrm{head}}$ is the size of the key, query and value vectors for each head, set as $d_{\mathrm{head}} = d / n_{\mathrm{head}}$. $\mathbf{b}_{q,E}, \mathbf{b}_{q,P} \in \mathbb{R}^{d_{\mathrm{head}}}$ are learned vectors. $\mathbf{p}_i \in \mathbb{R}^{d}$ is the standard sinusoidal embedding for position i (Vaswani et al., 2017). Softmax is applied to the second dimension of $\hat{A}$ to obtain the final attention scores, A. Component (a) corresponds to content-based addressing, (b, e) to content-based positional addressing, (c) represents a global content bias, while (d, e) represent a global position bias. We introduce term (e) for the positional embedding, which can switch between absolute and relative positional encodings using the scalar gate $r_i$ (Eq. 16; parameterized by $\mathbf{W}_{ar} \in \mathbb{R}^{d \times 1}$ and $b_{ar} \in \mathbb{R}$), which is a function of the state at target position i. C IMPLEMENTATION DETAILS A PyTorch implementation of our models together with the experimental setup is available under https://github.com/robertcsordas/ndr. The performance of all models is reported as mean and standard deviations over 5 different seeds. C.1 CHOOSING THE NUMBER OF LAYERS In Sec.
2, we hypothesized that one of the conditions for our model to generalize is to be “sufficiently” deep such that elementary operations are learned in separate layers which would then become composable. In practice, a “sufficient” depth can be determined by the basic units of compositions implicitly defined by the dataset. The depth of the model must be at least as deep as the deepest path in the computation graph defined by these basic operations. This hypothesis was empirically validated in the ablation study presented above (Appendix A). In general, we used the following heuristics to choose the depth of the Transformers: (length of the deepest path in the graph) × (steps per operation) + a few more layers. Determining the number of steps needed by the elementary operation is not straightforward but it can be done empirically. For example, for ListOps, as is shown in Sec. 4, it requires two steps per operation: one step in which the operands attend to the operation, followed by another one where the result is written back to the operation. For other tasks, we found that a single step per operation was enough. Choosing more layers than needed is safe, and it can be used to determine the required number of layers, for example by looking at the gate activity. Finally, “+ a few more layers” are needed because one additional layer should be used to read out the final result, and one or a few more can be needed for communication between columns (e.g., to determine operator precedence). Since parameters are shared across layers, we can optionally train models with a certain number of layers and increase the number of computational steps at test time. This allows us to train models using a depth which is “sufficient” to solve the training set, but increase it at test time to generalize to a test set requiring more computational steps. We did this for the ListOps experiment (Sec. 3.3): the model was trained with 20 layers and tested with 24. Our preliminary experiments confirmed that this practice has no performance penalty, while it speeds up training. C.2 DATASET DETAILS Compositional table lookup. Our implementation uses 8 symbols as input arguments and 9 randomly sampled bijective functions denoted by lower case letters of the English alphabet. All functions are included in the train set in combination with all possible input symbols. The rest of the training set consists of random combinations of functions applied to a random symbol as an argument, up to length 5. The total size of the train set is 53,704 samples. The samples are roughly balanced such that there are similar numbers of samples for each depth. There are different validation sets: an IID set, which matches the distribution of the train set, and a depth validation, which includes samples of lengths 6, 7 and 8. The test set consists of sequences of lengths 9 and 10. Simple arithmetic. The dataset is constructed by sampling random digits (0-9) and operations + (add) and ∗ (multiply). The operations are performed modulo 10. Parentheses surround the arguments of the operations. The depth of the resulting tree is computed, and rejection sampling is used to ensure that the same number of samples from each depth is present in the given split. The maximum length of samples is 50 tokens, sub-operations are sampled with probability 0.2. 100 K samples are used for training, 1 K for both test and validation sets. The train set consists of 0-5 operations, the validation set of 6 and the test set of 7 operations. ListOps. 
Random digits are sampled from the range 0-9. Operations are sampled from the set: sum modulo 10 (SM), min (MIN), max (MAX), and median followed by the floor function (MED). The maximum number of arguments for each operation is 5. A sub-operation is sampled with probability 0.3. 1 M samples are used for training, 1 K for test and validation. The train set consists of 0-5 operations, 6 for the validation set, and 7 for the test set. For each sample, we calculate a number which we call dependency depth. To understand it, note that MIN and MAX operations only select one of their operands, MED selects 1 or 2. In SUM, all operands are needed to perform the operation. If we construct a parse tree and prune away the branches which were not selected by any operation and measure the depth of such a tree, the resulting number is the dependency depth. This ensures that the deeper parts of the tree contribute to the result calculation, preventing shallow heuristics, like ignoring all branches of the tree that are too deep and still getting the correct result with a high chance. We also ensure that the number of samples is the same for all possible dependency depths in each split. C.3 MODEL DETAILS We use the AdamW optimizer (Loshchilov & Hutter, 2019) for all of our models. Standard hyperparameters are listed in Tab. 7, 8 and 9. Additionally, models with gating use dropout (Hanson, 1990; Srivastava et al., 2014) with a rate of 0.1 applied to the content-based query and the position-query components for most models, except for non-gated Transformers on ListOps, where this value is 0.05. In the case of geometric attention, since the channels of the directional encoding do not have any redundancy, dropout is applied just to the content-query. In the case of Transformers with the copy gate but without geometric attention, we use tanh instead of LayerNorm in Eq. 2. The Transformer/NDR layer with a copy gate is illustrated in Figure 5. The hyperparameters of the gateless Transformers differ significantly from the gated ones. This is because they were very hard to train to achieve good performance even on the IID set, requiring extensive hyperparameter tuning. One might argue that fewer layers make them less competitive on longer sequences. However, we were unable to train them to perform well even on IID data with comparable sizes. All Transformer variants have a begin (B) and end (E) token included in the sequence. RNNs (LSTM and DNC) have no such tokens. All Transformers are encoders only, and the results are read from the last column (corresponding to the end token). The DNC has 21 memory cells, 4 read heads, and an LSTM controller. It contains recently introduced improvements (Csordás & Schmidhuber, 2019). We use gradient clipping with magnitude 5 (for CTL) or 1 (for simple arithmetic and ListOps) for all of our models. Hyperparameters were obtained by a Bayesian hyperparameter search of Weights & Biases (https://wandb.ai/) over the systematically different (OOD) validation set for the +abs/rel + gate models and were reused for all other gated models. For the non-gated models, we used the +rel variant for tuning. It was not possible to tune the baselines using only the OOD validation set because their performance was too poor on that set. We thus used a mixture of IID and OOD validation sets to tune the hyperparameters for the baselines. Table 10 shows the range of hyperparameters used for tuning. “FF multiplier” is used to calculate $d_{\mathrm{FF}}$ from $d_{\mathrm{model}}$.
We train all models for a fixed number of $n_{\mathrm{iters}}$ iterations and measure their validation performance every 1000 iterations. For each model, we select the best checkpoint according to the validation performance, and report its test accuracy. D ADDITIONAL ANALYSIS D.1 COMPOSITIONAL TABLE LOOKUP An idealized sequence of computations in a Transformer for an example from the CTL task is shown in Fig. 6. Each column waits for its input from the left side, then performs an update. Finally, the last column copies the result. So far, in the main text, we only had space to show the gate and attention activity of the NDR for a few timesteps. Here we show the corresponding visualization of all steps in Figures 10 and 11, as well as the attention map for the baseline Transformer with relative positional encoding in Figure 7. We also show the Transformer + abs/rel + gate variant in Fig. 8 and Fig. 9. Please directly refer to the captions of the figures for the corresponding analysis. In general, the visualization for our NDR and the abs/rel + gate variant is easily interpretable, unlike that of the baseline Transformer model. D.2 LISTOPS Figures 12 and 14 show the attention and gate patterns of our NDR architecture on an example from the ListOps dataset. We highlighted notable attention patterns in Sec. 4. Different heads seem to specialize in different functions. As already mentioned in Sec. 4, head 13 of the NDR architecture, shown in Figure 13, seems to specialize in selecting which arguments belong to which operator. The gating patterns are also very interesting. In the early stages, the deepest parts of the input are updated: [MAX 2 4 0 8 9] and [MED 8 5 8], which are independent branches of the parse tree that can be processed in parallel. In the following steps, the update patterns spread up in the parse tree, updating the operations that have their arguments available. In this task, the input is read from the first column, which is written at a very late stage.
1. What are the key contributions and novel aspects introduced by the paper in transformer architecture? 2. What are the strengths of the proposed approach, particularly in terms of its motivation, implementation, and experimental results? 3. What are the weaknesses or limitations of the paper regarding its focus, evaluation, and analysis? 4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper Review
Summary Of The Paper This paper proposes two modifications to provide additional inductive bias to the attention mechanism in the transformer architecture. The first modification adds a copy mechanism to simulate a “no-op” at a given transformer layer, and the second modification is an attention mechanism that is biased towards attending to local context. Both of these modifications are motivated as being useful for algorithmic tasks like compositional table lookup and arithmetic. From experiments that are mostly concerned with some kind of length/depth generalization, we see very significant improvements. Review Overall, I enjoyed reading this work. The writing is clear and to the point, and the approach itself is very well motivated (see questions for more), and simple to implement without too many tunable hyperparameters. And as such, the experiments are cleanly set up and do suggest improved generalization. From the analysis of the attention maps, we can see that the method is doing exactly what it is supposed to as well (that is, copying previous values until other intermediates have been computed, paying more attention to local hidden states etc). Based on these strengths, I recommend that this paper be accepted to the conference. So now, let me focus on some weaknesses / suggestions / questions: Overall positioning: Firstly, I think the paper should probably make it more clear that it’s only focusing on a very specific notion of systematicity that has to do with length / depth generalization, and not other more traditional notions like generalizing to new compositions (which isn’t really something that is evaluated) like SQOOP from Bahdanau 2019 etc. Evaluation: Secondly, while not a strict requirement, there is no evaluation on language tasks / pseudo language tasks like SCAN - there is a length generalization benchmark within SCAN itself and it would be good to know how this method does on that. Analysis: In Figure 2 (bottom), it is unclear what the y-axis is. Isn’t the copy gate just a single number for each time step, for each layer? If so, I would’ve expected the figure to just be a single number for each time step for the various layers, so I don’t understand what the grid signifies.
ICLR
Title The Neural Data Router: Adaptive Control Flow in Transformers Improves Systematic Generalization Abstract Despite progress across a broad range of applications, Transformers have limited success in systematic generalization. The situation is especially frustrating in the case of algorithmic tasks, where they often fail to find intuitive solutions that route relevant information to the right node/operation at the right time in the grid represented by Transformer columns. To facilitate the learning of useful control flow, we propose two modifications to the Transformer architecture, copy gate and geometric attention. Our novel Neural Data Router (NDR) achieves 100% length generalization accuracy on the classic compositional table lookup task, as well as near-perfect accuracy on the simple arithmetic task and a new variant of ListOps testing for generalization across computational depths. NDR’s attention and gating patterns tend to be interpretable as an intuitive form of neural routing. Our code is public.1 1 INTRODUCTION Neural networks (NNs) may easily learn certain training sets, but typically they do not generalize on systematically different test sets. Examples of systematic generalization (Fodor et al., 1988) include generalization to sequences longer than those seen during training—productivity, and algorithmic combinations of previously learned rules—systematicity. Despite recent efforts (Bahdanau et al., 2019; Korrel et al., 2019; Lake, 2019; Li et al., 2019; Russin et al., 2019; Csordás et al., 2021), systematic generalization generally remains unsolved (Fodor & McLaughlin, 1990; Lake & Baroni, 2018; Liska et al., 2018; Greff et al., 2020; Hupkes et al., 2020). On some datasets, the best performing models are neuro-symbolic hybrids (Chen et al., 2020; Liu et al., 2020) using task-specific symbolic functions. However, their applicability to other datasets remains limited (Furrer et al., 2020; Shaw et al., 2020). A big question is: which type of architectural inductive bias encourages the training process to select “good” solutions which generalize systematically? The popular Transformers (Vaswani et al., 2017) also often fail to generalize on algorithmic tasks (e.g. Liska et al. (2018); Dubois et al. (2020); Chaabouni et al. (2021); Csordás et al. (2021); Ontañón et al. (2021)), even on tasks with intuitive solutions that can be simply expressed in terms of Transformer attention patterns. Given an input sequence of length N and a Transformer encoder of depth T , solving an algorithmic task is often all about routing the relevant information to the right node/operation at the right time in the T -by-N grid represented by Transformer columns (illustrated in Figure 1/Left). Effectively the task is to learn to draw an adaptive control flow on the canvas of Transformer columns. In fact, recent work by Weiss et al. (2021) introduced a programming language called RASP, which is specifically designed to express solutions to sequence processing problems, and which has a direct equivalent to the operations in Transformer encoders. However, it is shown that Transformers learn solutions expressed in RASP only through intermediate supervision of attention patterns, and sometimes, even such supervision fails. Generally speaking, Transformers fail to find easily interpretable and/or symbolic solutions to algorithmic tasks. 
This is shown by the new attention which appears at t = 3 in head 13 from the source position MAX to the target position MIN (a pattern which is not visible at t = 2). In head 3, t = 6 (2nd row), the expected result of MIN, which is 1, is copied to the operator, similarly to the patterns we observed above for MAX and MED. In head 13, t = 6 (1st row), all arguments for the first MED are now also recognized (the result of MIN which is 1, and 7). Finally in t = 7 (2nd row), two heads, head 3 and head 5 seem to copy/gather two inputs needed to compute the corresponding median, 1 and 7, and store them in the column of the operator MED. A complete visualization of further steps can be found in Appendix D.2. We noticed that some of the heads do not seem to play a key role; we focused on interpreting those which seem to participate in the main computation. For ListOps, we also partially find the attention patterns described above in the baseline Transformer with relative positional encoding, at least on some inspected examples, which also explains its rather high accuracy. 5 DISCUSSION Learning adaptive serialization. The NDR architecture can be understood as performing adaptive serialization of the problem. A key requirement for reusable computation is decomposing the problem into reusable building blocks, typically applied in sequential steps. The granularity of the decomposition determines the degree of reusability: fusing operations in a single step makes the processing faster (fewer steps), but also more specialized. Learning the most granular solutions is thus preferable for generalization. At the same time, not all processing should happen serially: branches of the computational graph that do not have common data dependencies can be processed independently in parallel, which we empirically observe in our NDR in the ListOps example (Sec. 4). This enables the architecture to get away with a number of computational steps reflecting the depth of the computational graph rather than the length of the input. Bottom up approach for improving model architectures. Transformers have seen tremendous successes across various application domains (Devlin et al., 2019; Brown et al., 2020; Dosovitskiy et al., 2021). Impressive results have been reported when they are scaled up with a large amount of data (Brown et al., 2020). On the other hand, simple tasks like those highlighted in the present work demonstrate that the Transformer architecture still struggles with basic reasoning. Particularly in algorithmic tasks, it is often the case that a sub-optimal choice of architecture/optimization method makes the model fall back to simple memorization. We argue that it is crucial to look at isolated problems which test specific generalization capability. This calls for a bottom-up approach: building on toy tasks that focus on individual aspects of generalization and using them for improving models. 6 RELATED WORK Gating inside Transformers. Several prior works have proposed to use some sort of gating within Transformer architectures (Parisotto et al., 2020; Chaabouni et al., 2021). Our proposed copy gate is different from those as it satisfies two important properties. First, our copy gate allows the model to skip the entire Transformer layer (i.e., both the self-attention and the feedforward blocks) when the gate is closed. Second, the gate function is conditioned on the attention output such that the decision of opening or closing depends on information from all columns. 
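In code, these two properties amount to computing the gate from the attention output and interpolating between the layer's candidate update and its unchanged input. The snippet below is a schematic PyTorch sketch of such a copy-gated layer; it is meant to illustrate the two properties rather than reproduce the exact update rule of Eqs. 1-5 (Sec. 2.1), and the normalization and activation details (e.g., the tanh mentioned in Appendix C.3) are simplified.

```python
import torch
import torch.nn as nn

class CopyGatedTransformerLayer(nn.Module):
    """Schematic sketch of a Transformer layer with a copy gate (not the exact Eqs. 1-5).

    (1) A closed gate (g ~ 0) copies the layer input unchanged, skipping both the
        self-attention and the feedforward blocks.
    (2) The gate is computed from the self-attention output, so the open/close decision
        can depend on information from all columns.
    """
    def __init__(self, d_model: int, n_heads: int, d_ff: int):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(d_model)
        self.ffn = nn.Sequential(nn.Linear(d_model, d_ff), nn.ReLU(), nn.Linear(d_ff, d_model))
        self.gate = nn.Sequential(nn.Linear(d_model, d_model), nn.Sigmoid())

    def forward(self, h: torch.Tensor) -> torch.Tensor:   # h: [batch, length, d_model]
        a, _ = self.attn(h, h, h)                          # attention output for every column
        g = self.gate(a)                                   # per-column gate values in (0, 1)
        u = self.ffn(self.norm(h + a))                     # candidate update for each column
        return g * u + (1.0 - g) * h                       # g ~ 0: keep the input unchanged
```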
While multiple gating variants have been proposed by Parisotto et al. (2020) to stabilize Transformers for reinforcement learning, none of them can produce this behavior. Empirically, we also tried out a few other gating variants which do not satisfy the two properties above; we found them not to improve over regular Transformers in our preliminary experiments on compositional table lookup. Recent work by Chaabouni et al. (2021) also makes use of “gating” in Transformers through a gated linear unit (GLU) activation function commonly used in convolutional NNs (Dauphin et al., 2017). Transformer models with such an activation function were reported to outperform RNN baselines on a systematic generalization task (Dessı̀ & Baroni, 2019). Unlike our copy gate or Parisotto et al. (2020)’s gating, such a gating activation does not have the “residual” term (i.e. a closed gate zeros out the input) which allows the model to skip a transformation. In a more general context, benefits of the GLU activation in Transformers vary across tasks (Irie et al., 2019; Shazeer, 2020). In language modeling, no improvement is typically obtained by using the standard highway gate instead of the residual connection in Transformers (Irie, 2020), while it yields improvements when combined with convolutional layers (Kim & Rush, 2016). Parameterized geometric distributions. Two recent works (Brooks et al., 2021; Banino et al., 2021) have used a form of parameterized geometric distribution (PGD; in the form of Eq. 7). Brooks et al. (2021) have used such a distribution to parameterize the movement of a pointer on a sequence of instructions. Banino et al. (2021) have used it to implement adaptive computation time (Schmidhuber, 2012; Graves, 2016). We use the PGD to obtain a generic attention mechanism as a replacement of the standard self-attention used in Transformers (Vaswani et al., 2017). Compositional table lookup. CTL task was proposed for evaluating the compositional ability of NNs (Liska et al., 2018). Previous works evaluated RNNs, RNNs with attention, and Transformers on this task with limited success (Hupkes et al., 2019; Dubois et al., 2020). Dubois et al. (2020) have proposed a special attention mechanism to augment the recurrent architecture. While they obtained good performance for the forward presentation order, the proposed model failed in the backward one. In contrast, two of our approaches (Sec. 3.1) achieve 100% generalization accuracy for both orders. Positional encodings. Many previous works have focused on improving positional encoding (Schmidhuber, 1992; Vaswani et al., 2017) for self-attention. Most notably, the relative positional encoding (Schmidhuber, 1992; Shaw et al., 2018; Dai et al., 2019) was found useful for improving systematic generalization of Transformers (Csordás et al., 2021). Here we also present two new approaches related to positional encoding. One is the gated combination of absolute and relative positional encoding (Sec. 3.1; details in Appendix B). We show that absolute positional encoding can complement relative positional encoding. The former enables the model to always attend to a specific position, as is needed for the CTL task in the last step, while the gating allows it to use relative positional encoding for other positions/steps. Second, we introduce directional encoding to augment geometric attention. 
Unlike positional encoding which can overfit to a range of positions seen during training, the direction information is found to be robust and to be a crucial augmentation of the geometric attention.

7 CONCLUSION

We proposed a new view on the internal operations of Transformer encoders as a dynamic dataflow architecture between Transformer columns. This overcomes two shortcomings of traditional Transformers: the problem of routing and retaining data in an unaltered fashion, which we solve by an additional copy gate, and the problem of learning length-independent attention patterns, which we solve by geometric attention. Our new model, the Neural Data Router (NDR), generalizes to compositions longer than those seen during training on the popular compositional lookup table task in both forward and backward directions. NDR also achieves near perfect performance on simple arithmetic and ListOps tasks in settings that test systematic generalization in terms of computational depth. In general, the gates and the attention maps collectively make the architecture more interpretable than the baselines. Future work will extend this encoder-only architecture to a full sequence-to-sequence model and evaluate it on other standard tasks in systematic generalization requiring generation of variable-length output sequences.

ACKNOWLEDGMENTS

We thank Imanol Schlag and Sjoerd van Steenkiste for helpful discussions and suggestions on an earlier version of the manuscript. This research was partially funded by ERC Advanced grant no: 742870, project AlgoRNN, and by Swiss National Science Foundation grant no: 200021 192356, project NEUSYM. We are thankful for hardware donations from NVIDIA & IBM. The resources used for the project were partially provided by Swiss National Supercomputing Centre (CSCS) project s1023.

A ABLATIONS

Number of layers. Accuracy on the compositional table lookup task (IID and length-generalization test splits, forward and backward presentation orders) as a function of the number of layers (nlayers):

nlayers | IID Forward  | IID Backward | Test Forward | Test Backward
14      | 1.00 ± 0.00  | 1.00 ± 0.00  | 1.00 ± 0.00  | 1.00 ± 0.00
12      | 1.00 ± 0.00  | 1.00 ± 0.00  | 1.00 ± 0.00  | 0.99 ± 0.02
10      | 1.00 ± 0.00  | 1.00 ± 0.00  | 0.75 ± 0.04  | 0.62 ± 0.05
8       | 1.00 ± 0.00  | 1.00 ± 0.00  | 0.23 ± 0.02  | 0.24 ± 0.03
6       | 1.00 ± 0.00  | 0.96 ± 0.03  | 0.22 ± 0.05  | 0.15 ± 0.01
4       | 0.96 ± 0.04  | 0.68 ± 0.11  | 0.14 ± 0.01  | 0.13 ± 0.01

Readout from the first instead of the last column. In our experiments with the Transformer models, the last column was used for the readout of the result. Under this configuration, the readout position depends on the length of the sequence which might increase the difficulty of the problem, in particular for the models using absolute positional embeddings. Table 5 shows the corresponding ablation study. We observe that this choice has only marginal impact on the model performance. As a side note, we also tried the variant where an additional cross-attention layer is used for the readout. Again, the generalization performance was not better. In fact, these results are not surprising since none of these changes fundamentally addresses the problem of length generalization.

Does Adaptive Computation Time (ACT) help? In this work, we determined the number of layers/steps to be used in the model based on heuristics (see Appendix C.1). We could also consider using Adaptive Computation Time (ACT) to dynamically determine the number of steps. Furthermore, ACT introduces a form of gating which creates shortcuts in the credit assignment path between the output and a result of an intermediate layer. This "copying" mechanism resulting from the ACT (i.e.
stop computation at a certain time and copy the result to the output) is fundamentally different from our copy gate (Sec. 2.1). Our copy gate allows Transformer columns to keep the input unchanged until it is their turn to be processed (a crucial property to implement control-flow-like behavior). This behavior cannot be simulated by the ACT. Here we provide some experimental results on models with ACT which confirm that the proposed copy gate is a crucial component for generalization which cannot be replaced by ACT. We note that there are various versions of ACT in the literature, e.g., the variant used by Dehghani et al. (2019) in Universal Transformers is different from the one used by Graves (2016). Here we focus on two variants: one in which we directly apply Graves (2016) to Transformers, and another one used by Dehghani et al. (2019). We start with the description of the former. An extra sigmoidal unit \hat{p}^{(i,t)} is computed for each column i in each timestep t as:

\hat{p}^{(i,t)} = \sigma(W_H h^{(i,t)} + b_H)   (11)

where W_H ∈ R^{1×d} and b_H ∈ R are trainable parameters. By comparing the cumulative sum of \hat{p}^{(i,t)} over time steps to a certain threshold value (1 − ε) with a hyper-parameter ε (0.01 in our experiment), we determine the termination step T^i for column i as:

T^i = \min\left\{ T_{\max},\ \min\left\{ t' : \sum_{t=1}^{t'} \hat{p}^{(i,t)} \ge 1 - \varepsilon \right\} \right\}   (12)

where T_max is the pre-defined maximum number of steps. The corresponding halting probability p^{(i,t)} is then computed as:

p^{(i,t)} = \begin{cases} \hat{p}^{(i,t)} & \text{if } t < T^i \\ R^i & \text{if } t = T^i \end{cases}   (13)

R^i = 1 - \sum_{t=1}^{T^i - 1} \hat{p}^{(i,t)}   (14)

which is used to compute the final output of column i as:

o^i = \sum_{t=1}^{T^i} p^{(i,t)} h^{(i,t)}   (15)

In Dehghani et al. (2019)'s variant, a different equation is used in lieu of Eq. 15 above and the computation of the remainder term R^i in Eq. 14 above is not properly handled in the case where Eq. 12 terminates because of the first condition on T_max. For further details, we refer the readers to Listing 1 and 2 in Dehghani et al. (2019) and/or our public code. One subtlety introduced by Dehghani et al. (2019) which we note here is that the computation of the final output o^i of column i effectively "halts" after T^i (since o^i only depends on h^{(i,t)} for 0 < t < T^i), but column i itself still continues transforming the hidden states h^{(i,t)} for steps t > T^i until all columns reach the termination step, and its updated states can be attended/read by another column j which has not halted yet (i.e. T^j > T^i). In this sense, computation is never stopped independently for each column. The mechanism described above instead finds the readout steps for each column (as is used in Eq. 15). We follow this decision in our implementation of both variants. In addition, a new regularizer term, L_{ACT} = \alpha \frac{1}{N} \sum_{i=1}^{N} R^i, is added to the loss function, where N is the length of the input sequence. This makes the network prefer short computations. We ran a hyper-parameter search for α from the following values: 0.001, 0.003, 0.01, 0.03, 0.1. We found α = 0.03 to work the best. We conducted experiments on the compositional table lookup task. We first noted that ACT helps training our baseline Transformer models with a maximum step of 14 layers which was not possible without ACT (our baseline Transformer had only 11 layers for this reason; see Table 7). The shortcut in the credit assignment path introduced by ACT certainly helps training of this 14 layer model.
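The following loop-based PyTorch sketch illustrates the per-column readout of this Graves-style variant (Eqs. 11-15). It is written for clarity rather than efficiency and the names are illustrative; the implementation used in our experiments is vectorized and part of the public code referenced above.

```python
import torch

def act_readout(h_states: torch.Tensor, w_h: torch.Tensor, b_h: float, eps: float = 0.01):
    """Per-column ACT readout following Eqs. 11-15 (Graves-style variant).

    h_states: [T_max, N, d] hidden state of every column i at every step t.
    w_h: [d] weight vector and b_h: scalar bias of the halting unit (Eq. 11).
    Returns o: [N, d], the readout per column, and the remainders R: [N] for the ACT loss.
    """
    t_max, n, d = h_states.shape
    p_hat = torch.sigmoid(torch.einsum('tnd,d->tn', h_states, w_h) + b_h)   # Eq. 11
    o = torch.zeros(n, d)
    remainder = torch.ones(n)
    for i in range(n):                           # per column (vectorized in practice)
        cum = 0.0
        for t in range(t_max):
            if cum + p_hat[t, i] >= 1.0 - eps or t == t_max - 1:
                r = 1.0 - cum                    # Eq. 14: remainder at the halting step T^i
                o[i] += r * h_states[t, i]       # halting step contributes with weight R^i (Eq. 13)
                remainder[i] = r
                break                            # Eq. 12: this is the termination step T^i
            o[i] += p_hat[t, i] * h_states[t, i] # steps t < T^i contribute with p_hat (Eq. 15)
            cum += p_hat[t, i]
    return o, remainder

# The ACT regularizer added to the loss (prefers short computations): alpha * remainder.mean()
```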
As we noticed that the models with ACT learn slower than those with gating, we increased the number of training steps to 60k, twice as many as the 30k used for the models without ACT. Table 6 shows the results. We observe that, interestingly, ACT enables generalization for longer lengths in the forward direction of the Transformer with relative positional encoding and the one with geometric attention. However, we were not able to find any configuration that generalizes in the backward case. This demonstrates that the copy gate is effectively a crucial component for generalization which cannot be replaced by ACT. Furthermore, the convergence of models with ACT is significantly slower than that of models with our gating, and they are more unstable and very sensitive to the value of α of the regularization term, even in the successful forward case. Overall, the only benefit of ACT is thus the adaptive depth, as is illustrated in Figure 4, which is orthogonal to our study.

B DETAILS OF ATTENTION WITH COMBINED ABSOLUTE/RELATIVE POSITIONAL ENCODING

The use of copy gates enables Transformers to generalize to longer lengths in the forward presentation order of the CTL task (Sec. 3.1), but that alone was not enough to make the model generalize in the backward order variant of the task. Examining the attention maps reveals that the model uses position-based attention to read out the result instead of content-based attention. In the backward presentation order, the last column of the transformer should focus on the second column, whose relative position changes dynamically with the length of the sequence. We solve this issue by adding an option to choose between absolute and relative positional encodings to the attention head. In what follows, we describe the operation within a single layer/step. This allows us to omit the layer/step-index t for better readability, and thus denote the state of column/position i as h_i instead of h^{(i,t)}. We use the relative positional embedding variant of self-attention by Dai et al. (2019). Our attention matrix with the gated absolute/relative positional encodings can be decomposed as follows:

r_i = \sigma(h_i W_{ar} + b_{ar})   (16)

\hat{A}_{i,j} = \underbrace{h_i^{\top} W_q^{\top} W_{k,E} h_j}_{(a)} + \underbrace{b_{q,E}^{\top} W_{k,E} h_j}_{(c)} + \left( \underbrace{h_i^{\top} W_q^{\top} W_{k,P}}_{(b)} + \underbrace{b_{q,P}^{\top} W_{k,P}}_{(d)} \right) \underbrace{\left( p_{i-j} r_i + p_j (1 - r_i) \right)}_{(e)}   (17)

where the matrix W_q ∈ R^{d_head×d} maps the states to queries, W_{k,E} ∈ R^{d_head×d} maps states to keys, while W_{k,P} ∈ R^{d_head×d} maps positional embeddings to keys. d_head is the size of the key, query and value vectors for each head, set as d_head = d / n_heads. b_{q,E}, b_{q,P} ∈ R^{d_head} are learned vectors. p_i ∈ R^d is the standard sinusoidal embedding for position i (Vaswani et al., 2017). Softmax is applied to the second dimension of \hat{A} to obtain the final attention scores, A. Component (a) corresponds to content-based addressing, (b, e) to content-based positional addressing, (c) represents a global content bias, while (d, e) represent a global position bias. We introduce term (e) for the positional embedding which can switch between absolute and relative positional encodings using the scalar gate r_i (Eq. 16; parameterized by W_{ar} ∈ R^{d×1} and b_{ar} ∈ R), which is a function of the state at target position i.

C IMPLEMENTATION DETAILS

A PyTorch implementation of our models together with the experimental setup is available under https://github.com/robertcsordas/ndr. The performance of all models is reported as mean and standard deviations over 5 different seeds.

C.1 CHOOSING THE NUMBER OF LAYERS

In Sec.
2, we hypothesized that one of the conditions for our model to generalize is to be “sufficiently” deep such that elementary operations are learned in separate layers which would then become composable. In practice, a “sufficient” depth can be determined by the basic units of compositions implicitly defined by the dataset. The depth of the model must be at least as deep as the deepest path in the computation graph defined by these basic operations. This hypothesis was empirically validated in the ablation study presented above (Appendix A). In general, we used the following heuristics to choose the depth of the Transformers: (length of the deepest path in the graph) × (steps per operation) + a few more layers. Determining the number of steps needed by the elementary operation is not straightforward but it can be done empirically. For example, for ListOps, as is shown in Sec. 4, it requires two steps per operation: one step in which the operands attend to the operation, followed by another one where the result is written back to the operation. For other tasks, we found that a single step per operation was enough. Choosing more layers than needed is safe, and it can be used to determine the required number of layers, for example by looking at the gate activity. Finally, “+ a few more layers” are needed because one additional layer should be used to read out the final result, and one or a few more can be needed for communication between columns (e.g., to determine operator precedence). Since parameters are shared across layers, we can optionally train models with a certain number of layers and increase the number of computational steps at test time. This allows us to train models using a depth which is “sufficient” to solve the training set, but increase it at test time to generalize to a test set requiring more computational steps. We did this for the ListOps experiment (Sec. 3.3): the model was trained with 20 layers and tested with 24. Our preliminary experiments confirmed that this practice has no performance penalty, while it speeds up training. C.2 DATASET DETAILS Compositional table lookup. Our implementation uses 8 symbols as input arguments and 9 randomly sampled bijective functions denoted by lower case letters of the English alphabet. All functions are included in the train set in combination with all possible input symbols. The rest of the training set consists of random combinations of functions applied to a random symbol as an argument, up to length 5. The total size of the train set is 53,704 samples. The samples are roughly balanced such that there are similar numbers of samples for each depth. There are different validation sets: an IID set, which matches the distribution of the train set, and a depth validation, which includes samples of lengths 6, 7 and 8. The test set consists of sequences of lengths 9 and 10. Simple arithmetic. The dataset is constructed by sampling random digits (0-9) and operations + (add) and ∗ (multiply). The operations are performed modulo 10. Parentheses surround the arguments of the operations. The depth of the resulting tree is computed, and rejection sampling is used to ensure that the same number of samples from each depth is present in the given split. The maximum length of samples is 50 tokens, sub-operations are sampled with probability 0.2. 100 K samples are used for training, 1 K for both test and validation sets. The train set consists of 0-5 operations, the validation set of 6 and the test set of 7 operations. ListOps. 
Random digits are sampled from range 0-9. Operations are sample from the set summodulo (SM), which is a sum modulo 10, min (MIN), max (MAX) and median followed by the floor function (MED). The maximum number of arguments for each operation is 5. A sub-operation is sampled with probability 0.3. 1 M samples are used for training, 1 K for test and validation. The train set consists of 0-5 operations, 6 for the validation set, and 7 for the test set. For each sample, we calculate a number which we call dependency depth. To understand it, note that MIN and MAX operations only select one of their operands, MED selects 1 or 2. In SUM, all operands are needed to perform the operation. If we construct a parse tree and prune away the branches which were not selected by any operation and measure the depth of such a tree, the resulting number is the dependency depth. This ensures that the deeper parts of the tree contribute to the result calculation, preventing shallow heuristics, like ignoring all branches of the tree that are too deep and still getting the correct result with a high chance. We also ensure that the number of samples is the same for all possible dependency depths in each split. C.3 MODEL DETAILS We use the AdamW optimizer (Loshchilov & Hutter, 2019) for all of our models. Standard hyperparameters are listed in Tab. 7, 8 and 9. Additionally, models with gating use dropout (Hanson, 1990; Srivastava et al., 2014) applied to the content-based query and the position-query components of 0.1 for most models, except for non-gated Transformers on ListOps, where this value is 0.05. In the case of geometric attention, since the channels of the directional encoding does not have any redundancy, dropout is applied just to the content-query. In the case of Transformers with the copy gate but without geometric attention, we use tanh instead of LayerNorm in Eq. 2. The Transformer/NDR layer with a copy gate is illustrated in Figure 5. The hyperparameters of the gateless Transformers differ significantly from the gated ones. This is because they were very hard to train to achieve good performance even on the IID set, requiring extensive hyperparameter tuning. One might argue that fewer layers make them less competitive on longer sequences. However, we were unable to train them to perform well even on IID data with comparable sizes. All Transformer variants have a begin (B) and end (E) token included in the sequence. RNNs (LSTM and DNC) have no such tokens. All Transformers are encoders only, and the results are read from the last column (corresponding to the end token). The DNC has 21 memory cells, 4 read heads, and an LSTM controller. It contains recently introduced improvements (Csordás & Schmidhuber, 2019). We use gradient clipping with magnitude 5 (for CTL) or 1 (for simple arithmetic and ListOps) for all of our models. Hyperparameters were obtained by a Bayesian hyperparameter search of Weights & Biases2 over the systematically different (OOD) validation set for the +abs/rel + gate models and were reused for all other gated models. For the non-gated models, we used the +rel variant for tuning. It was not possible to tune the baselines using only the OOD validation set because their performance was too bad on that set. We thus used a mixture of IID and OOD validation sets to tune the hyperparameters for the baselines. Table 10 shows the range of hyperparameters used for tuning. “FF multiplier” is used to calculate dFF from dmodel. 
We train all models for a fixed number of niters iterations and measure their validation performance every 1000 iterations. For each model, we select the best checkpoint according to the validation performance, and report its test accuracy. D ADDITIONAL ANALYSIS D.1 COMPOSITIONAL TABLE LOOKUP An idealized sequence of computations in a Transformer for an example from CTL task is shown in Fig. 6. Each column waits for its input from the left side, then performs an update. Finally, the last 2https://wandb.ai/ column copies the result. So far, in the main text, we only had space to show the gate and attention activity of the NDR for a few timesteps. Here we show the corresponding visualization of all steps in Figures 10 and 11, as well as the attention map for the baseline Transformer with relative positional encoding in Figure 7. We also show the Transformer + abs/rel + gate variant in Fig. 8 and Fig. 9. Please directly refer to the caption of the figures for the corresponding analysis. In general, the visualization for our NDR and the abs/rel + gate variant is easily interpretable, unlike that of the baseline Transformer model. D.2 LISTOPS Figures 12 and 14 shows the attention and gate patterns of our NDR architecture on an example from the ListOps dataset. We highlighted notable attention patterns in Sec. 4. Different heads seem to specialize in different functions. As already mentioned in Sec. 4, head 13 of the NDR architecture, shown in Figure 13, seems to specialize in selecting which arguments belong to which operator. The gating patterns are also very interesting. In the early stages, the deepest parts of the input are updated: [MAX 2 4 0 8 9] and [MED 8 5 8], which are independent branches of the parse tree that can be processed in parallel. In the following steps, the update patterns spread up in the parse tree, updating the operations that have their arguments available. In this task, the input is read from the first column, which is written in a very late stage.
1. What are the strengths and weaknesses of the proposed Transformer Control Flow (TCF) architecture compared to the Universal Transformer (UT)?
2. How does TCF solve the extrapolation problem for longer sequences, and how does it compare to other solutions such as adding iterations during inference or using adaptive computation time?
3. What are the design choices made in TCF regarding model depth, computational depth, and copy-gating, and how do they impact its performance?
4. How does the encoder-only output and geometric attention used in TCF compare to alternative decoders such as sequence-to-sequence models or special attention-based layers?
5. What are the limitations of using a single output element to decode the solution, and how would using a longer part of the output sequence or a special attention-based layer improve the model's performance?
6. Why do transformers perform poorly on the backward task in the compositional table lookup experiment, and how might relative positional embeddings impact the results?
7. Is the backward case useful in demonstrating the efficiency of TCF, or does it have limitations due to the nature of causal models like LSTM?
Summary Of The Paper Review
Summary Of The Paper The authors propose Transformer Control Flow (TCF), a set of improvements to the Universal Transformer (Dehghani et al, ICLR 2019). They show that, for three compositional problems, TCF allows trained models to generalize to longer sequences, a common problem of many transformer implementations. As in the Universal Transformer (UT), the encoder consists of one shared transformer layer (self attention + fully connected network) which is iterated through a fixed number of times, by feeding the output of each iteration back into the input of the shared layer. However, whereas the UT uses a sequence to sequence model, TCF is an encoder-only architecture, which decodes the last element in the output sequence as the final result. Two new features are introduced : a gating mechanism that allows the model to "skip a layer" (the input is then copied to the output), on the basis of the self-attention output, a weighting system for the outputs of attention heads, which favors short-range attention (i.e. tokens close to the one currently considered), and can be trained to be biased towards a certain direction (before or after the current token). Experiments are conducted over three tasks: predicting the output of sequences of permutations of 8 elements, in prefix or postfix notation, predicting the result of of additions and multiplications modulo 10, in infix notation, predicting the result of operations on lists of small integers, in prefix notation. For each task, TCF is shown to be capable of extrapolation to larger problems (i.e. longer sequences) than those seen at training. Review The paper is very clearly written, and proposes an interesting solution to an important question. The tasks chosen are meaningful, and the experimental results suggest that the proposed architecture can solve the extrapolation problem. The technical aspects are precisely documented, which makes the research easy to reproduce. My main concerns are related to the experimental comparisons, and the impact of certain design choices, such as the use of a fixed model depth at training and test time, the use of the last word in the encoder representation as the basis for model output, and the absence of the Adaptive Computation Time (ACT) in the Universal Transformer implementation that serves as the main baseline. This makes it difficult to judge the impact of the two improvements suggested (copy-gate and geometric attention), and the benefits of the new architecture, compared to an encoder-only state-of-the-art version of the Universal Transformer (with relative positional embedding, and ACT). I believe improving this part of the experimental design and discussion would greatly reinforce the paper. Below are my concerns and questions, split into four themes. Computational, and model, depth At the beginning of section two, the authors argue that four properties are needed for network to extrapolate to larger problems: shared layers depth of the computational graph step skipping short-range attention I would disagree with the second point, for two reasons. First, computational depth is a relative notion. In an arithmetic task, I can choose to represent modular addition as one operation, or two (addition and modulo), or even three (digit addition, carry propagation, modulo). On the other hand, some linear algebra packages define "add and mul" as a single operation. There is no doubt that network depth should somehow increase with complexity, but defining it from computational depth seems unpractical. 
Second, since you use shared layers, model depth can be varied without having to retrain. Specifically, model depth could be adjusted to the complexity of the training examples, and then increased at inference to fit the complexity of the test set. Using the maximum depth in both train and test sets (provided it can be defined from computational trees) is not necessary and might not even be beneficial. In a recent paper (https://arxiv.org/abs/2106.04537), Schwarzschild et al. have shown (using different architectures, and testing on different tasks) that adding iterations during inference could help models extrapolate to larger problems. It would be interesting to test this on TCF (and baselines). Copy-gating and variable depth The original Universal Transformers paper proposes a copy-gating mechanism, which uses the Adaptive Computation Time (ACT) mechanism (Graves 2016) at the token level. The gating works differently than in TCF: all gates begin closed, and once opened remain so. However, I believe this (universal transformer plus token-level ACT gating) is the correct baseline for TCF. Can such a comparison be provided? This is all the more important as gating has a large impact on performance, for the three problems considered. ACT-gating has another merit: it adaptively controls the depth of the Universal Transformer, which goes on iterating until all gates are open. This means that the model can adjust to longer sequences by iterating for more "ponder time". This would provide an adaptive solution to the depth adjustment problem discussed in the previous section. Do you think an adaptive control for the number of iterations/layers could be implemented? Encoder-only output, and geometric attention The original Universal Transformer is a sequence to sequence model. When decoding a solution, all the output sequence of the encoder is attended to. In your implementation, only the last element in the output is taken into account. As you observe in the results discussion of section 3.1, this creates problems at test time because the position of the last word changes as sequence length increases. It also complicates training, since the output position depends on the sequence length (which varies in all the problems you propose). As you show, this can be alleviated by relative positional embeddings and directional encodings, which can be used to force the result to "move right" as the computation proceeds. It also seems to be the main justification of geometric attention (which seems to bring very little when used alone, cf table 1). But could the original problem, variable output position, be eliminated? What would happen if the output position had a fixed positional embedding? This could be done in many ways: reading the output from the first position instead of the last (since the transformer is bidirectional, this should have no adverse effect), or from some other fixed value (e.g. the fifth output word), or enumerating positions so that the last token has a fixed embedding (e.g. counting backward, or from both ends to the center). Another question is the use of a single output element to decode the solution. Would the model be improved by using a longer part of the output sequence (e.g. the N first output words, with N the minimal size of output sequences, or shorter output padded to this size)? 
An alternative (and in my opinion much better) solution would be to use a special attention-based layer for the decoder (an attention plus a linear layer working from the output sequence of the encoder). This amounts to a minimal seq2seq model, with one non-shared, cross-attention-only layer in the decoder. This would eliminate the variable output position problem, and allow the full output sequence to be taken into account while decoding. I believe these alternative decoders need to be tested. Without them, it is hard to assess the importance of relative positions and geometric attention. Compositional table lookup: the backward case Unless their architectures are bidirectional, the backward task is very unfair on LSTM and DNC, which are causal models. To solve the backward task, they would need to memorize all the tables before seeing the value to be operated on, an impossible task given their capacity. Transformers, on the other hand, are bidirectional. Their bad performance on the IID backward case comes as a surprise, and the unusually large error on this observation suggests an experimental problem. Do you have explanations about this high standard deviation of experimental results? On the test data, the fact that relative positional embeddings seem to improve the forward, but not the backward case, might be due to the choice of the last term in the output as the result to be decoded. Would, for instance, the results be inverted if the first output were chosen instead (or some fixed middle position)? Overall, I am not certain this backward case helps the argument about the efficiency of TCF (what it demonstrates, I think, is that causal models like the LSTM need their inputs to be presented in a particular order, which is no new news...).
ICLR
Title SpanDrop: Simple and Effective Counterfactual Learning for Long Sequences Abstract Distilling supervision signal from a long sequence to make predictions is a challenging task in machine learning, especially when not all elements in the input sequence contribute equally to the desired output. In this paper, we propose SPANDROP, a simple and effective data augmentation technique that helps models identify the true supervision signal in a long sequence with very few examples. By directly manipulating the input sequence, SPANDROP randomly ablates parts of the sequence at a time and ask the model to perform the same task to emulate counterfactual learning and achieve input attribution. Based on theoretical analysis of its properties, we also propose a variant of SPANDROP based on the beta-Bernoulli distribution, which yields diverse augmented sequences while providing a learning objective that is more consistent with the original dataset. We demonstrate the effectiveness of SPANDROP on a set of carefully designed toy tasks, as well as various natural language processing tasks that require reasoning over long sequences to arrive at the correct answer, and show that it helps models improve performance both when data is scarce and abundant. 1 INTRODUCTION Building effective machine learning systems for long sequences is a challenging and important task, which helps us better understand underlying patterns in naturally occurring sequential data like long texts (Radford et al., 2019), protein sequences (Jumper et al., 2021), financial time series (Bao et al., 2017), etc. Recently, there is growing interest in studying neural network models that can capture long-range correlations in sequential data with high computational, memory, and statistical efficiency, especially widely adopted Transformer models (Vaswani et al., 2017). Previous work approach long-sequence learning in Transformers largely by introducing computational approaches to replace the attention mechanism with more efficient counterparts. These approaches include limiting the input range over which the attention mechanism is applied (Kitaev et al., 2019) to limiting sequence-level attention to only a handful of positions (Beltagy et al., 2020; Zaheer et al., 2020). Other researchers make use of techniques akin to the kernel trick to eliminate the need to compute or instantiate the costly attention matrix (Peng et al., 2020; Katharopoulos et al., 2020; Choromanski et al., 2020). Essentially, these approaches aim to approximate the original pairwise interaction with lower cost, and are often interested in still capturing the interactions between every pair of input elements (e.g., the long sequence benchmark proposed by Tay et al., 2020). In this paper, we instead investigate learning problems for long sequences where not all input elements contribute equally to the desired output. Natural examples that take this form include sentiment classification for long customer review documents (where a few salient sentiment words contribute the most), question answering from a large document (where each question typically requires a small number of supporting sentences to answer), key phrase detection in audio processing (where a small number of recorded frames actually determine the prediction), as well as detecting a specific object from a complex scene (where, similarly, a small amount of pixels determine the outcome), to name a few. 
In these problems, it is usually counterproductive to try and make direct use of the entire input if the contributing portion is small or sparse, which results in a problem of underspecification (i.e., the data does not sufficiently define the goal for statistical models). One approach to address this problem is annotating the segments or neighborhoods that directly contribute to the outcome in the entire input. This could take the form of a subset of sentences that answer a question or describe the relation between entities in a paragraph (Yang et al., 2018; Yao et al., 2019), which function as explainable evidence that supplements the answer. When such annotation is not feasible, researchers and practitioners often need to resort to either collecting more input-output pairs or designing problem-specific data augmentation techniques to make up for the data gap. For real-valued data, this often translates to random transformations (e.g., shifting or flipping an image); for symbolic data like natural language, techniques like masking or substitution are more commonly used (e.g., randomly swapping words with a special mask token or other words). While these approaches have proven effective in some tasks, each has limitations that prevents it from being well-suited for the underspecification scenario. For instance, while global feature transformations enhance groupinvariance in learned representations, they do not directly help with better locating the underlying true stimulus. On the other hand, while replacement techniques like masking and substitution help ablate parts of the input, they are susceptible to the position bias of where the true stimulus might occur in the input. Furthermore, while substitution techniques can help create challenging contrastive examples, it is significantly more difficult to design them for complex symbolic sequences (e.g., replacing a phrase naturally in a sentence). To address these challenges, we propose SPANDROP, a simple and effective technique that helps models distill sparse supervision signal from long sequences when the problem is underspecified. Similar to replacement-based techniques such as masking and substitution, SPANDROP directly ablates parts of the input at random to construct counterfactual examples that preserve the original supervision signal with high probability. Instead of preserving the original sequence positions, however, SPANDROP directly removes ablated elements from the input to mitigate any bias that is related to the absolute positions of elements (rather than the relative positions between them) in the input. Upon closer examination of its theoretical and empirical properties, we further propose a more effective variant of SPANDROP based on the Beta-Bernoulli distribution that enhances the consistency of the augmented objective function with the original one. We demonstrate via carefully designed toy experiments that SPANDROP not only helps models achieve up to 20⇥ sample-efficiency in low-data settings, but also further reduces overfitting even when training data is abundant. We find that it is very effective at mitigating position bias compared to replacement-based counterfactual approaches, and enhances out-of-distribution generalization effectively. 
We further experiment on four natural language processing tasks that require models to answer questions or extract entity relations from long texts, and demonstrate that SPANDROP can improve the performance of already competitive neural models without any change in model architecture.

2 METHOD

In this section, we first formulate the problem of sequence inference, where the model takes sequential data as input to make predictions. Then, we introduce SPANDROP, a simple and effective data augmentation technique for long sequence inference, and analyze its theoretical properties.

2.1 PROBLEM DEFINITION

Sequence Inference. We consider a task where a model takes a sequence S as input and predicts the output y. We assume that S consists of n disjoint but contiguous spans, each representing a part of the sequence in order S = (s_1, . . . , s_n). One example of sequence inference is sentiment classification from a paragraph of text, where S is the paragraph and y the desired sentiment label. Spans could be words, phrases, sentences, or a mixture of these in the paragraph. Another example is time series prediction, where S is historical data, y is the value at the next time step.

Supporting facts. Given an input-output pair (S, y) for sequence prediction, we assume that y is truly determined by only a subset of spans in S. More formally, we assume that there is a subset of spans S_sup ⊂ {s_1, s_2, . . . , s_n} such that y is independent of s_i if s_i ∉ S_sup. In sentiment classification, S_sup could consist of important sentiment words or conjunctions (like "good", "bad", "but"); in time series prediction, it could reflect the most recent time steps as well as those a few cycles away if the series is periodic. For simplicity, we will denote the size of this set m = |S_sup|, and restrict our attention to tasks where m ≪ n, such as those described in the previous section.

2.2 SPANDROP

In a long sequence inference task with sparse supporting facts (m ≪ n), most of the spans in the input sequence will not contribute to the prediction of y, but they will introduce spurious correlation in a low-data scenario. SPANDROP generates new data instances (S̃, y) by ablating these spans at random, while preserving the supporting facts with high probability so that the model is still trained to make the correct prediction y. This is akin to counterfactually determining whether each span truly determines the outcome y by asking what the prediction would have been without it.

Definition 1 (SPANDROP). Formally, given a sequence S that consists of spans (s_1, s_2, · · · , s_n), SPANDROP generates a new sequence S̃ as follows:

\delta_i \overset{i.i.d.}{\sim} \text{Bernoulli}(1 - p), \quad \tilde{S} = (s_i)_{i=1,\dots,n;\ \delta_i = 1},   (1)

where \delta_i indicates whether span s_i is kept and p is the hyperparameter that determines the probability to drop a span. Note that SPANDROP does not require introducing substitute spans or artificial symbols when ablating spans from the input sequence. It makes the most of the natural sequence as it occurs in the original training data, and preserves the relative order between spans that are not dropped, which is often helpful in understanding sequential data (e.g., time series or text). It is also not difficult to establish that the resulting sequence S̃ can preserve all of the m supporting facts with high probability regardless of how large n is.

Remark 1.
The new sequence length n' = |S̃| and the number of preserved supporting facts m' = |S̃ ∩ S_sup| follow binomial distributions with parameters (n, 1 − p) and (m, 1 − p), respectively:

P(n' \mid n, p) = \binom{n}{n'} (1-p)^{n'} p^{\,n - n'}, \quad P(m' \mid m, p) = \binom{m}{m'} (1-p)^{m'} p^{\,m - m'}.   (2)

Therefore, the proportion of sequences where all supporting facts are retained (i.e., m' = m) is (1 − p)^m, which is independent of n. This means that as long as the total number of supporting facts in the sequence is bounded, then regardless of the sequence length, we can always choose p carefully such that we end up with many valid new examples with bounded noise introduced to supporting facts. Note that our analysis so far relies only on the assumption that m is known or can be estimated, and thus it can be applied to tasks where the precise set of supporting facts S_sup is unknown. More formally, the amount of new examples can be characterized by the size of the typical set of S̃, i.e., the set of sequences that the randomly ablated sequence will fall into with high probability. The size of the typical set for SPANDROP is approximately 2^{nH(p)}, where H(p) is the binary entropy of a Bernoulli random variable with probability p. Intuitively, these results indicate that the amount of total counterfactual examples generated by SPANDROP scales exponentially in n, but the level of supporting fact noise can be bounded as long as m is small. However, this formulation of SPANDROP does have a notable drawback that could potentially hinder its efficacy. Because the new sequence length n' follows a binomial distribution, its mean is n(1 − p) and its variance is np(1 − p). For sufficiently large n, most of the resulting S̃ will have lengths that concentrate around the mean with a width of O(√n), which creates an artificial and permanent distribution drift from the original length (see Figure 1(a)). Furthermore, if we know the identity of S_sup and keep these spans during training, this length reduction will bias the training set towards easier examples to locate spans in S_sup. In the next subsection, we will introduce a variant of SPANDROP based on the beta-Bernoulli distribution that alleviates this issue.

2.3 BETA-SPANDROP

To address the problem of distribution drift with SPANDROP, we introduce a variant that is based on the beta-Bernoulli distribution. The main idea is that instead of dropping each span in a sequence independently with a fixed probability p, we first sample a sequence-level probability π at which spans are dropped from a Beta distribution, then use this probability to perform SPANDROP.

Definition 2 (Beta-SPANDROP). Let α = γ, β = γ · (1 − p)/p, where γ > 0 is a scaling hyperparameter. Beta-SPANDROP generates S̃ over S as:

\pi \sim B(\alpha, \beta), \quad \delta_i \overset{i.i.d.}{\sim} \text{Bernoulli}(1 - \pi), \quad \tilde{S} = (s_i)_{i=1,\dots,n;\ \delta_i = 1},   (3)

where B(α, β) is the beta-distribution with parameters α and β. It can be easily demonstrated that in Beta-SPANDROP, the probability that each span is dropped is still controlled by p, same as in SPANDROP: E[δ_i | p] = E[E[δ_i | π] | p] = E[1 − π | p] = 1 − α/(α + β) = 1 − p. In fact, we can show that as γ → ∞, Beta-SPANDROP degenerates into SPANDROP since the beta-distribution would assign all probability mass on π = p. Despite the simplicity of its implementation, Beta-SPANDROP is significantly less likely to introduce unwanted data distribution drift, while being capable of generating diverse counterfactual examples to regularize the training of sequence inference models. This is due to the following properties:

Remark 2.
The new sequence length n' = |S̃| and the number of preserved supporting facts m' = |S̃ ∩ S_sup| follow beta-binomial distributions with parameters (n, β, α) and (m, β, α), respectively:

P(n' \mid n, \alpha, \beta) = \frac{\Gamma(n+1)}{\Gamma(n'+1)\,\Gamma(n-n'+1)} \cdot \frac{\Gamma(n'+\beta)\,\Gamma(n-n'+\alpha)}{\Gamma(n+\alpha+\beta)} \cdot \frac{\Gamma(\alpha+\beta)}{\Gamma(\alpha)\,\Gamma(\beta)},   (4)

P(m' \mid m, \alpha, \beta) = \frac{\Gamma(m+1)}{\Gamma(m'+1)\,\Gamma(m-m'+1)} \cdot \frac{\Gamma(m'+\beta)\,\Gamma(m-m'+\alpha)}{\Gamma(m+\alpha+\beta)} \cdot \frac{\Gamma(\alpha+\beta)}{\Gamma(\alpha)\,\Gamma(\beta)},   (5)

where \Gamma(z) = \int_0^{\infty} x^{z-1} e^{-x} dx is the gamma function. As a result, we can show that Beta-SPANDROP preserves the entire original sequence with the following probability:

P(n' = n \mid n, \alpha, \beta) = \frac{\Gamma(n+\beta)\,\Gamma(\alpha+\beta)}{\Gamma(n+\alpha+\beta)\,\Gamma(\beta)}.   (6)

When γ = 1, this expression simply reduces to β/(n + β); when γ ≠ 1, this quantity tends to O(n^{-γ}) as n grows sufficiently large. Comparing this to the O((1 − p)^n) rate from SPANDROP, we can see that when n is large, Beta-SPANDROP recovers more of the original distribution represented by (S̃, y) compared to SPANDROP. In fact, as evidenced by Figure 1(a), the counterfactual sequences generated by Beta-SPANDROP are also more spread-out in their length distribution besides covering the original length n with significantly higher probability. A similar analysis can be performed by substituting n and n' with m and m', where we can conclude that as m grows, Beta-SPANDROP is much better at generating counterfactual sequences that preserve the entire supporting fact set S_sup. This is shown in Figure 1(b), where the proportion of "noise-free" examples (i.e., m' = m) decays exponentially with SPANDROP (γ → ∞) while remaining much higher when γ is sufficiently small. For instance, when p = 0.1, γ = 1 and m = 10, the proportion of noise-free examples for SPANDROP is just 34.9%, while that for Beta-SPANDROP is 47.4%. As we have seen, Beta-SPANDROP is significantly better than its Bernoulli counterpart at assigning probability mass to the original data as well as generated sequences that contain the entire set of supporting facts. A natural question is, does this come at the cost of diverse counterfactual examples? To answer this question we study the entropy of the distribution that S̃ follows by varying γ and n, and normalize it by n to study the size of the typical set of this distribution. As can be seen in Figure 1(c), as long as γ is large enough, the average entropy per span H̄ degrades very little from the theoretical maximum, which is H(p), attained as γ → ∞. Therefore, to balance between introducing noise in the supporting facts and generating diverse examples, we set γ = 1 in our experiments.

Using the beta-Bernoulli distribution in dropout. The beta-Bernoulli distribution has been studied in prior work in seeking replacements for the (Bernoulli) dropout mechanism (Srivastava et al., 2014). Liu et al. (2019a) set α = β for the beta distribution in their formulation, which limits the dropout rate to always be 0.5. Lee et al. (2018) fix β = 1 and vary α to control the sparsity of the result of dropout, which is similar to Beta-SPANDROP when γ = 1. However, we note that these approaches (as with dropout) are focused more on adding noise to internal representations of neural networks to introduce regularization, while SPANDROP operates directly on the input to ablate different components therein, and is thus orthogonal (and potentially complementary) to these approaches. Further, SPANDROP has the benefit of not having to make any assumptions about the model or any changes to it during training, which makes it much more widely applicable.
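Both variants amount to a lightweight augmentation applied independently to each training sequence. The NumPy sketch below illustrates the core sampling step of Definitions 1 and 2; the function name is illustrative, and the guard against dropping every span is a practical safeguard added for this example rather than part of the definitions.

```python
import numpy as np

def span_drop(spans, p=0.1, gamma=None, rng=None):
    """Sketch of SpanDrop (gamma=None) and Beta-SpanDrop (finite gamma > 0).

    spans: list of spans (sentences, words, time steps, ...) making up one training sequence.
    p:     per-span drop probability (its expectation, in the Beta-SpanDrop case).
    gamma: Beta-SpanDrop scaling parameter; alpha = gamma, beta = gamma * (1 - p) / p (Def. 2).
           gamma=None corresponds to plain SpanDrop (the gamma -> infinity limit).
    """
    rng = rng or np.random.default_rng()
    if gamma is None:
        pi = p                                       # SpanDrop: fixed drop probability (Def. 1)
    else:
        pi = rng.beta(gamma, gamma * (1.0 - p) / p)  # sequence-level drop probability (Eq. 3)
    keep = rng.random(len(spans)) >= pi              # delta_i = 1 with probability 1 - pi
    if not keep.any():
        keep[rng.integers(0, len(spans))] = True     # illustrative guard: never return an empty sequence
    return [s for s, k in zip(spans, keep) if k]     # relative order of kept spans is preserved

# Example usage during training, e.g., on a list of sentences:
# augmented = span_drop(sentences, p=0.1, gamma=1.0)
```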
3 FINDCATS: DISTILLING SUPERVISION FROM LONG-SEQUENCES In this section, we design a synthetic task of finding the animal name “cat” in a character sequence to a) demonstrate the effectiveness of SPANDROP and Beta-SPANDROP in promoting the performance over a series of problems with different settings, b) analyze the various factors that may affect the efficacy of these approaches, and c) compare it to other counterfactual augmentation techniques like masking on mitigating position bias. 3.1 EXPERIMENTAL SETUP FINDCATS. To understand the effectiveness of SPANDROP and Beta-SPANDROP in an experimental setting, we designed a synthetic task called FINDCATS where the model is trained to discern that given an animal name “cat”, whether a character string contains it as a subsequence (i.e., contains characters in “cat” in order, for instance, “abcdafgbijktma”) or not (e.g., “abcdefhtijklmn”). This allows us to easily control the total sequence length n, the supporting facts size m, as well as easily estimate the supporting fact noise that each SPANDROP variant might introduce. To generate the synthetic training data of FINDCATS, we first generate a sequence consisting of lowercase letters (a to z) that does not contain “cat” as a subsequence. For half of these sequences, we label the tuple (cat,S) with a negative class to indicate that S does not contain “cat” as a subsequence; for the other half, we choose arbitrary (but not necessarily contiguous) positions in S to replace the letters with letters in “cat” from left to right to generate positive examples. In all of our experiments, we evaluate model performance on a held-out set of 10,000 examples to observe classification error. We set sequence length to n = 300 where each letter is a separate span, and chose positions for the letters in the animal name “cat” uniformly at random in the sequence unless otherwise mentioned. Model. We employ three-layer Transformer model (Vaswani et al., 2017) with position embeddings (Devlin et al., 2019) as the sequence encoder, which is implemented with HuggingFace Transformers (Wolf et al., 2019). For each example (“cat”,S, y), we feed “[CLS] cat [SEP] S [SEP]” to the sequence encoder and then construct binary classifier over the output representation of “[CLS]” to predict y. To investigate the effectiveness of SPANDROP, we simply apply SPANDROP to S first before feeding the resulting sequence into the Transformer classifier. 3.2 RESULTS AND ANALYSIS In each experiment, we compare SPANDROP and Beta-SPANDROP at the same drop ratio p. And we further use rejection sampling to remove examples that do not preserve the desired supporting facts to understand the effect of supporting fact noise. Data efficiency. We begin by analyzing the contribution of SPANDROP and Beta-SPANDROP to improving the sample efficiency of the baseline model. To achieve this goal, we vary the size of the training set from 10 to 50,000 and observe the prediction error on the held-out set. We observe from the results in Figure 2(a) that: 1) Both SPANDROP and Beta-SPANDROP significantly improve data efficiency in low-data settings. For instance, when trained on only 200 training examples, SPANDROP variants can achieve the generalization performance of the baseline model trained on 5x to even 20x data. 2) Removing supporting fact noise typically improves data efficiency further by about 2x. 
This indicates it is helpful not to drop spans in Ssup during training when possible, so that the model is always trained with true counterfactual examples rather than sometimes noisy ones. 3) Beta-SPANDROP consistently improves upon the baseline model even when data is abundant. [Figure 2: (a) data efficiency, (b) noise in supporting facts, (c) varying sequence length; legend: Baseline, SPANDROP, SPANDROP (noise-free), Beta-SPANDROP, Beta-SPANDROP (noise-free).] This is likely due to the difficulty of the task when n = 300 and m = 3. Similar to many real-world tasks, the task remains underspecified even when the generalization error is already very low thanks to the large amount of training data available. 4) SPANDROP introduces a training objective that is inconsistent with the original training set, which leads to performance deterioration when there is sufficient training data; this is consistent with our theoretical observation. Effect of supporting fact noise and sequence length. Since SPANDROP introduces noise in the supporting facts (albeit with a low probability), it is natural to ask if such noise is negatively correlated with model performance. We study this by varying the drop ratio p over {0.05, 0.1, 0.15, 0.2, 0.3, 0.4, 0.5} on fixed training sets of size 1,000, and observe the resulting model performance and supporting fact error. As can be seen in Figure 2(b), supporting fact noise increases rapidly as p grows.1 However, we note that although the performance of SPANDROP deteriorates as p increases, that of Beta-SPANDROP stays relatively stable. Inspecting these results more closely, we find that even the performance of the noise-free variants follows a similar trend, although it should not be affected by supporting fact noise. Recalling the observations from our data efficiency experiments, we next turn to the hypothesis that this discrepancy is mainly caused by the inconsistent length distribution SPANDROP introduces. To test this hypothesis, we conduct two separate sets of experiments: 1) training and testing the model on varying sequence lengths {10, 20, 30, 50, 100, 200, 300, 500}, where longer sequences suffer more from the discrepancy between the sequence lengths SPANDROP produces and the original sequence length; and 2) testing the model trained on n = 300 on test sets of different lengths; if our hypothesis about distribution drift were correct, we should see the performance of SPANDROP models peaking around n′ = n(1 − p), while the performance of Beta-SPANDROP is less affected by sequence length. As can be seen from Figures 2(c) and 2(d), our experimental results support this hypothesis well. Specifically, in Figure 2(c), while the performance of both SPANDROP variants deteriorates as n grows and the task becomes more challenging and underspecified, SPANDROP deteriorates at a faster rate even when we remove the effect of supporting fact noise. On the other hand, we can clearly see in Figure 2(d) that SPANDROP performance peaks around sequences of length 270 (= n(1 − p) = 300 × (1 − 0.1)) before rapidly deteriorating, while Beta-SPANDROP is unaffected until the test sequence length exceeds that of all examples seen during training. 1 Note that the noise in our experiments is lower than what would be predicted by theory, because in practice the initial sequence S might already contain parts of “cat” before it is inserted. This creates redundant sets of supporting facts for this task and reduces supporting fact noise, especially when n is large. Mitigating position bias.
Besides SPANDROP, replacement-based techniques like masking can also be applied to introduce counterfactual examples into sequence model training, where elements in the sequence are replaced by a special symbol that is not used at test time. We implement SPANMASK in the same way as SPANDROP except spans are replaced rather than removed when the sampled “drop mask” i is 0. We first inspect whether SPANMASK benefits from the same beta-Bernoulli distribution we use in SPANDROP. As can be seen in Figure 2(e), the gain from switching to a betaBernoulli distribution provides negligible benefit to SPANMASK, which does not alter the sequence length of the input to begin with. We also see that SPANMASK results in significantly higher error than both SPANDROP and Beta-SPANDROP in this setting. We further experiment with introducing position bias into the training data (but not the test data) to test whether these method help the model generalize to an unseen setting. Specifically, instead of selecting the position for the characters “cat” uniformly at random, we train the model with a “fixed position” dataset where they always occur at indices (10, 110, 210), and a “first 100” dataset where they are uniformly distributed among the first 100 letters. As can be seen in Figure 2(f), both the baseline and SPANMASK models overfit to the position bias in the “fixed” setting, while SPANDROP techniques significantly reduce zeroshot generalization error. In the “first 100” setting, Beta-SPANDROP consistently outperforms its Bernoulli counterpart and SPANMASK at improving the performance of the baseline model as well, indicating that SPANDROP variants are effective at reducing the position bias of the model. 4 EXPERIMENTS ON NATURAL LANGUAGE DATA To examine the efficacy of the proposed SPANDROP techniques on realistic data, we conduct experiments on four natural language processing datasets that represent the tasks of single- and multi-hop extractive question answering, multiple-choice question answering, and relation extraction. We focus on showing the effect of SPANDROP instead of pursuing the state of the art in these experiments. Datasets. We use four natural language processing datasets: SQuAD 1.1 (Rajpurkar et al., 2016), where models answer questions on a paragraph of text from Wikipedia; MultiRC (Khashabi et al., 2018), which is a multi-choice reading comprehension task in which questions can only be answered by taking into account information from multiple sentences; HotpotQA (Yang et al., 2018), which requires models to perform multi-hop reasoning over multiple Wikipedia pages to answer questions; and DocRED (Yao et al., 2019), which is a document-level data set for relation extraction. For the SQuAD dataset, we define spans as collections of one or more consecutive tokens to show that SPANDROP can be applied to different granularities. For the rest three datasets, we define spans to be sentences since supporting facts are provided at sentence level. For all of these tasks, we report standard exact match (EM) and F1 metrics where applicable, for which higher scores are better. We refer the reader to the appendix for details about the statistics and metrics of these datasets. Model. We build our models for these tasks using ELECTRA (Clark et al., 2019), since it is shown to perform well across a range of NLP tasks recently. We introduce randomly initialized taskspecific parameters designed for each task following prior work on each dataset, and finetune these models on each dataset to report results. 
We refer the reader to the appendix for training details and hyperparameter settings. Main results. We first present the performance of our implemented models and their combination with SPANDROP variants on the four natural language processing tasks. We also include results from representative prior work on each dataset for reference (detailed in the appendix), and summarize the results in Table 1. We observe that: 1) our implemented models achieve competitive and sometimes significantly better performance (in the cases of HotpotQA, SQuAD, and DocRED) compared to published results, especially considering that we do not tailor our models to each task too much; 2) SPANDROP improves the performance over these models even when the training set is large and that the model is already performing well; 3) Models trained with Beta-SPANDROP consistently perform better or equally well with their SPANDROP counterparts across all datasets, demonstrating that our observations on the synthetic datasets generalize well to real-world ones. We note that the performance gains on real-world data is less significant, which likely results from the fact spans in the synthetic task are independent from each other, which is not the case in natural language data. We further evaluate the performance of our trained models on the MultiRC testing data, and obtain results of EM/F1: 41.1/79.8, 39.9/78.5 and 39.1/78.2 for models with Beta-SPANDROP, SPANDROP, and without SPANDROP, respectively. This indicates that both Beta-SPANDROP and SPANDROP improve the model generalization ability, and Beta-SPANDROP is better than SPANDROP, improving EM/F1 with 2.0/1.6 absolute over the baseline. Next, to better understand whether the properties of SPANDROP and Beta-SPANDROP we observe on the synthetic data generalize to real-world data, we further perform a set of analysis experiments on SQuAD. Specifically, we are interested in studying the effect of the amount of training data, the span drop ratio p, and the choice of span size on performance. Effect of low data. To understand SPANDROP’s regularizing effect when training data is scarce, we study the model’s generalization performance when training on only 0.1% of the training data (around 100 examples) to using the entire training set (around 88k examples). As can be seen in Figure 3 (left), both SPANDROP and Beta-SPANDROP significantly improve model performance when the amount of training data is extremely low. As the amount of training data increases, this gap slowly closes but remains consistently positive. The final gap when 100% of the training data is used is still sufficient to separate top-2 performing systems on this dataset. Impact of drop ratio. We compare SPANDROP and Beta-SPANDROP by controlling how likely each span is dropped on average (drop ratio p). Recall from our experiments on FINDCATS that larger p will result in distribution drift from the original training set for SPANDROP but not BetaSPANDROP, thus the performance of the former deteriorates as p increases while the latter is virtually not affected. As can be seen in Figure 3 (middle), our observation on real-world data is consistent with this theoretical prediction, and indicate that Beta-SPANDROP is a better technique for data augmentation should one want to increase sequence diversity by setting p to a larger value. Impact of span size. We train the model with SPANDROP on SQuAD with varying span sizes of {1, 2, 4, 8, 16, 32, 64} tokens per span to understand the effect of this hyperparameter. 
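For concreteness, the token-to-span grouping used in this span-size experiment can be sketched as follows (a minimal helper under our own assumptions; the function name and the choice to let the final span be shorter are ours, not taken from the paper):

def chunk_into_spans(tokens, span_size):
    # Group consecutive tokens into fixed-size spans; the final span may be
    # shorter if the sequence length is not a multiple of span_size.
    return [tokens[i:i + span_size] for i in range(0, len(tokens), span_size)]

# e.g., span_size = 8 turns a 300-token passage into 38 spans, which are then
# dropped as whole units by SpanDrop / Beta-SpanDrop during training.
spans = chunk_into_spans(list(range(300)), span_size=8)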
We observe in Figure 3 (right) that as span size grows, the generalization performance of the model first holds roughly constant, then slowly deteriorates as span size grows too large. This suggests that the main contributors to generalization performance might have been the total number of spans in the entire sequence, which reduces with larger spans. This results in fewer potential augmented sequences for counterfactual learning, therefore lowering regularization strength. This observation is consistent with that on our synthetic data in our preliminary experiments, where we see that controlling for other factors, larger span sizes yield deteriorated generalization performance (data not shown due to space limit). This also suggests that while SPANDROP works with arbitrary span sizes, the optimal choice of spans for different tasks warrants further investigation, which we leave to future work. 5 RELATED WORK Long Sequence Inference. Many applications require the prediction/inference over long sequences, such as multi-hop reading comprehension (Yang et al., 2018; Welbl et al., 2018), long document summarization (Huang et al., 2021), document-level information extraction (Yao et al., 2019) in natural language processing, long sequence time-series prediction (Zhou et al., 2021a), promoter region and chromatin-profile prediction in DNA sequence (Oubounyt et al., 2019; Zhou & Troyanskaya, 2015) in Genomics etc, where not all elements in the long sequence contribute equally to the desired output. Aside from approaches we have discussed that attempt to approximate all pair-wise interactions between elements in a sequence, more recent work has also investigated compressing long sequences into shorter ones to distill the information therein for prediction or representation learning (Rae et al., 2020; Goyal et al., 2020; Kim & Cho, 2021). Sequence Data Augmentation. Data augmentation is an effective common technique for underspecified tasks like long sequence inference. Feng et al. (2021) propose to group common data augmentation techniques in natural language processing into three categories: 1) rule-based methods (Zhang et al., 2015; Wei & Zou, 2019; Şahin & Steedman, 2018), which apply a set of predefined operations over the raw input, such as removing, adding, shuffling and replacement; 2) example mixup-based methods (Guo et al., 2019; Guo, 2020; Chen et al., 2020; Jindal et al., 2020), which, inspired from Mixup in computer vision (Zhang et al., 2018), perform interpolation between continuous features like word embeddings and sentence embeddings; 3) model-based methods (Xie et al., 2020; Sennrich et al., 2016), which use trained models to generate new examples (e.g., back translation Xie et al., 2020). Most of existing rule-based data augmentation methods operate at the token/word level (Feng et al., 2021), such as word shuffle/replacement/addition (Wei & Zou, 2019). Shuffle-based techniques are less applicable when order information is crucial in the raw data (Lan et al., 2019, e.g., in natural language). Moreover, these operations might not be trivial in implementation over larger spans (e.g., at the phrase or sentence level). For example, while replacing tokens require selecting candidates from a fixed vocabulary which can be provided by well estimated language models (Clark et al., 2019), replacing phrases or sentences is significantly more challenging since the “vocabulary” is unbounded and marginal probability difficult to estimate. 
In contrast, our proposed SPANDROP supports data augmentation at multiple granularities, as the spans in SPANDROP can be of any length, and it preserves sequence order since the drop operation does not change the relative order of the original input. 6 CONCLUSION In this paper, we presented SPANDROP, a simple and effective method for learning from long sequences, which ablates parts of the sequence at random to generate counterfactual data to distill the sparse supervision signal that is predictive of the desired output. We show via theoretical analysis and carefully designed synthetic datasets that SPANDROP and its variant based on the beta-Bernoulli distribution help models achieve competitive performance with a fraction of the data by introducing diverse augmented training examples, and generalize better to previously unseen data. Our experiments on four real-world NLP datasets demonstrate that besides these benefits, SPANDROP can further improve upon powerful pretrained Transformer models even when data is abundant.
1. What is the focus and contribution of the paper on data augmentation? 2. What are the strengths of the proposed approach, particularly in terms of its simplicity and effectiveness? 3. Do you have any concerns or suggestions regarding the method's assumption on independent drop decisions? 4. How does the reviewer assess the variety of span selection/splitting methods and their potential impact on the results? 5. Are there any questions regarding the application of SpanDrop to intermediate representations or its additional training cost and computational overhead? 6. Can the reviewer provide insights into the generalizability of SpanDrop to other tasks beyond NLP?
Summary Of The Paper Review
Summary Of The Paper This paper proposes a data augmentation method, SpanDrop (and its variant Beta-SpanDrop), for long sequence data, especially where supporting facts take small portions. They provide a theoretical background on their method and evaluate the method on a synthetic task (FindCats) and four real natural language processing tasks requiring reasoning over long texts. SpanDrop is effectively improves the accuracy in both low-resource and abundant-resource settings. Review The method is simple yet effective. Experiments in the paper are well-designed to show the effectiveness of the method. The method generally assumes that the decision of drop can be done independently. However, the positions of salient information would usually exhibit certain patterns such as appearing close to each other. Do you have any idea to incorporate those characteristics into the method or learn those patterns? As the name of the method is SpanDrop, it would be much interesting if various selection/split methods of spans have been investigated. Of course, splitting by a sentence for passages is natural. SpanDrop is only applied at the input level. I am curious whether this drop could be applied to intermediate representations like done in LengthDrop (Kim and Cho, 2021). I wonder about the additional training cost of SpanDrop compared to the standard training. The expectation of length is the same for SpanDrop and Beta-SpanDrop. However, the expectation of length’s square would be different. Therefore, I guess their computational overhead will be also different. The sampling cost would be almost negligible. Does it require additional training steps to converge due to its regularization effect? Don’t it require modification of other regularization such as standard dropout? Do you have any thoughts about whether SpanDrop will be generalized seamlessly to other tasks than NLP?
ICLR
Title SpanDrop: Simple and Effective Counterfactual Learning for Long Sequences Abstract Distilling supervision signal from a long sequence to make predictions is a challenging task in machine learning, especially when not all elements in the input sequence contribute equally to the desired output. In this paper, we propose SPANDROP, a simple and effective data augmentation technique that helps models identify the true supervision signal in a long sequence with very few examples. By directly manipulating the input sequence, SPANDROP randomly ablates parts of the sequence at a time and ask the model to perform the same task to emulate counterfactual learning and achieve input attribution. Based on theoretical analysis of its properties, we also propose a variant of SPANDROP based on the beta-Bernoulli distribution, which yields diverse augmented sequences while providing a learning objective that is more consistent with the original dataset. We demonstrate the effectiveness of SPANDROP on a set of carefully designed toy tasks, as well as various natural language processing tasks that require reasoning over long sequences to arrive at the correct answer, and show that it helps models improve performance both when data is scarce and abundant. 1 INTRODUCTION Building effective machine learning systems for long sequences is a challenging and important task, which helps us better understand underlying patterns in naturally occurring sequential data like long texts (Radford et al., 2019), protein sequences (Jumper et al., 2021), financial time series (Bao et al., 2017), etc. Recently, there is growing interest in studying neural network models that can capture long-range correlations in sequential data with high computational, memory, and statistical efficiency, especially widely adopted Transformer models (Vaswani et al., 2017). Previous work approach long-sequence learning in Transformers largely by introducing computational approaches to replace the attention mechanism with more efficient counterparts. These approaches include limiting the input range over which the attention mechanism is applied (Kitaev et al., 2019) to limiting sequence-level attention to only a handful of positions (Beltagy et al., 2020; Zaheer et al., 2020). Other researchers make use of techniques akin to the kernel trick to eliminate the need to compute or instantiate the costly attention matrix (Peng et al., 2020; Katharopoulos et al., 2020; Choromanski et al., 2020). Essentially, these approaches aim to approximate the original pairwise interaction with lower cost, and are often interested in still capturing the interactions between every pair of input elements (e.g., the long sequence benchmark proposed by Tay et al., 2020). In this paper, we instead investigate learning problems for long sequences where not all input elements contribute equally to the desired output. Natural examples that take this form include sentiment classification for long customer review documents (where a few salient sentiment words contribute the most), question answering from a large document (where each question typically requires a small number of supporting sentences to answer), key phrase detection in audio processing (where a small number of recorded frames actually determine the prediction), as well as detecting a specific object from a complex scene (where, similarly, a small amount of pixels determine the outcome), to name a few. 
In these problems, it is usually counterproductive to try and make direct use of the entire input if the contributing portion is small or sparse, which results in a problem of underspecification (i.e., the data does not sufficiently define the goal for statistical models). One approach to address this problem is annotating the segments or neighborhoods that directly contribute to the outcome in the entire input. This could take the form of a subset of sentences that answer a question or describe the relation between entities in a paragraph (Yang et al., 2018; Yao et al., 2019), which function as explainable evidence that supplements the answer. When such annotation is not feasible, researchers and practitioners often need to resort to either collecting more input-output pairs or designing problem-specific data augmentation techniques to make up for the data gap. For real-valued data, this often translates to random transformations (e.g., shifting or flipping an image); for symbolic data like natural language, techniques like masking or substitution are more commonly used (e.g., randomly swapping words with a special mask token or other words). While these approaches have proven effective in some tasks, each has limitations that prevents it from being well-suited for the underspecification scenario. For instance, while global feature transformations enhance groupinvariance in learned representations, they do not directly help with better locating the underlying true stimulus. On the other hand, while replacement techniques like masking and substitution help ablate parts of the input, they are susceptible to the position bias of where the true stimulus might occur in the input. Furthermore, while substitution techniques can help create challenging contrastive examples, it is significantly more difficult to design them for complex symbolic sequences (e.g., replacing a phrase naturally in a sentence). To address these challenges, we propose SPANDROP, a simple and effective technique that helps models distill sparse supervision signal from long sequences when the problem is underspecified. Similar to replacement-based techniques such as masking and substitution, SPANDROP directly ablates parts of the input at random to construct counterfactual examples that preserve the original supervision signal with high probability. Instead of preserving the original sequence positions, however, SPANDROP directly removes ablated elements from the input to mitigate any bias that is related to the absolute positions of elements (rather than the relative positions between them) in the input. Upon closer examination of its theoretical and empirical properties, we further propose a more effective variant of SPANDROP based on the Beta-Bernoulli distribution that enhances the consistency of the augmented objective function with the original one. We demonstrate via carefully designed toy experiments that SPANDROP not only helps models achieve up to 20⇥ sample-efficiency in low-data settings, but also further reduces overfitting even when training data is abundant. We find that it is very effective at mitigating position bias compared to replacement-based counterfactual approaches, and enhances out-of-distribution generalization effectively. 
We further experiment on four natural language processing tasks that require models to answer questions or extract entity relations from long texts, and demonstrate that SPANDROP can improve the performance of already competitive neural models without any change in model architecture.
2 METHOD In this section, we first formulate the problem of sequence inference, where the model takes sequential data as input to make predictions. Then, we introduce SPANDROP, a simple and effective data augmentation technique for long sequence inference, and analyze its theoretical properties. 2.1 PROBLEM DEFINITION Sequence Inference. We consider a task where a model takes a sequence S as input and predicts the output y. We assume that S consists of n disjoint but contiguous spans, each representing a part of the sequence in order, S = (s1, . . . , sn). One example of sequence inference is sentiment classification from a paragraph of text, where S is the paragraph and y the desired sentiment label. Spans could be words, phrases, sentences, or a mixture of these in the paragraph. Another example is time series prediction, where S is historical data and y is the value at the next time step. Supporting facts. Given an input-output pair (S, y) for sequence prediction, we assume that y is truly determined by only a subset of spans in S. More formally, we assume that there is a subset of spans Ssup ⊂ {s1, s2, . . . , sn} such that y is independent of si if si ∉ Ssup. In sentiment classification, Ssup could consist of important sentiment words or conjunctions (like “good”, “bad”, “but”); in time series prediction, it could reflect the most recent time steps as well as those a few cycles away if the series is periodic. For simplicity, we will denote the size of this set m = |Ssup|, and restrict our attention to tasks where m ≪ n, such as those described in the previous section. 2.2 SPANDROP In a long sequence inference task with sparse supporting facts (m ≪ n), most of the spans in the input sequence will not contribute to the prediction of y, but they will introduce spurious correlations in a low-data scenario. SPANDROP generates new data instances (S̃, y) by ablating these spans at random, while preserving the supporting facts with high probability so that the model is still trained to make the correct prediction y. This is akin to counterfactually determining whether each span truly determines the outcome y by asking what the prediction would have been without it. Definition 1 (SPANDROP). Formally, given a sequence S that consists of spans (s1, s2, . . . , sn), SPANDROP generates a new sequence S̃ as follows:
δ_i i.i.d. ∼ Bernoulli(1 − p),  S̃ = (s_i)_{1 ≤ i ≤ n, δ_i = 1},  (1)
where p is the hyperparameter that determines the probability of dropping a span. Note that SPANDROP does not require introducing substitute spans or artificial symbols when ablating spans from the input sequence. It makes the most of the natural sequence as it occurs in the original training data, and preserves the relative order between spans that are not dropped, which is often helpful in understanding sequential data (e.g., time series or text). It is also not difficult to establish that the resulting sequence S̃ can preserve all of the m supporting facts with high probability regardless of how large n is. Remark 1.
The new sequence length n′ = |S̃| and the number of preserved supporting facts m′ = |S̃ ∩ Ssup| follow binomial distributions with parameters (n, 1 − p) and (m, 1 − p), respectively:
P(n′ | n, p) = C(n, n′) (1 − p)^{n′} p^{n − n′},  P(m′ | m, p) = C(m, m′) (1 − p)^{m′} p^{m − m′},  (2)
where C(·, ·) denotes the binomial coefficient. Therefore, the proportion of sequences where all supporting facts are retained (i.e., m′ = m) is (1 − p)^m, which is independent of n. This means that as long as the total number of supporting facts in the sequence is bounded, then regardless of the sequence length, we can always choose p carefully such that we end up with many valid new examples with bounded noise introduced to supporting facts. Note that our analysis so far relies only on the assumption that m is known or can be estimated, and thus it can be applied to tasks where the precise set of supporting facts Ssup is unknown. More formally, the amount of new examples can be characterized by the size of the typical set of S̃, i.e., the set of sequences that the randomly ablated sequence will fall into with high probability. The size of the typical set for SPANDROP is approximately 2^{nH(p)}, where H(p) is the binary entropy of a Bernoulli random variable with probability p. Intuitively, these results indicate that the amount of total counterfactual examples generated by SPANDROP scales exponentially in n, but the level of supporting fact noise can be bounded as long as m is small. However, this formulation of SPANDROP does have a notable drawback that could potentially hinder its efficacy. Because the new sequence length n′ follows a binomial distribution, its mean is n(1 − p) and its variance is np(1 − p). For sufficiently large n, most of the resulting S̃ will have lengths that concentrate around the mean with a width of O(√n), which creates an artificial and permanent distribution drift from the original length (see Figure 1(a)). Furthermore, if we know the identity of Ssup and keep these spans during training, this length reduction will bias the training set towards easier examples in which to locate the spans in Ssup. In the next subsection, we will introduce a variant of SPANDROP based on the beta-Bernoulli distribution that alleviates this issue.
2.3 BETA-SPANDROP To address the problem of distribution drift with SPANDROP, we introduce a variant that is based on the beta-Bernoulli distribution. The main idea is that instead of dropping each span in a sequence independently with a fixed probability p, we first sample a sequence-level probability π at which spans are dropped from a Beta distribution, then use this probability to perform SPANDROP. Definition 2 (Beta-SPANDROP). Let α = γ and β = γ · (1 − p)/p, where γ > 0 is a scaling hyperparameter. Beta-SPANDROP generates S̃ over S as:
π ∼ B(α, β),  δ_i i.i.d. ∼ Bernoulli(1 − π),  S̃ = (s_i)_{1 ≤ i ≤ n, δ_i = 1},  (3)
where B(α, β) is the beta distribution with parameters α and β. It can be easily demonstrated that in Beta-SPANDROP, the probability that each span is dropped is still controlled by p, same as in SPANDROP: E[δ_i | p] = E[E[δ_i | π] | p] = E[1 − π | p] = 1 − α/(α + β) = 1 − p. In fact, we can show that as γ → ∞, Beta-SPANDROP degenerates into SPANDROP since the beta distribution would assign all probability mass on π = p. Despite the simplicity in its implementation, Beta-SPANDROP is significantly less likely to introduce unwanted data distribution drift, while it is capable of generating diverse counterfactual examples to regularize the training of sequence inference models. This is due to the following properties: Remark 2.
The new sequence length n′ = |S̃| and the number of preserved supporting facts m′ = |S̃ ∩ Ssup| follow beta-binomial distributions with parameters (n, β, α) and (m, β, α), respectively:
P(n′ | n, α, β) = [Γ(n + 1) / (Γ(n′ + 1) Γ(n − n′ + 1))] · [Γ(n′ + β) Γ(n − n′ + α) / Γ(n + α + β)] · [Γ(α + β) / (Γ(α) Γ(β))],  (4)
P(m′ | m, α, β) = [Γ(m + 1) / (Γ(m′ + 1) Γ(m − m′ + 1))] · [Γ(m′ + β) Γ(m − m′ + α) / Γ(m + α + β)] · [Γ(α + β) / (Γ(α) Γ(β))],  (5)
where Γ(z) = ∫_0^∞ x^{z − 1} e^{−x} dx is the gamma function. As a result, we can show that Beta-SPANDROP preserves the entire original sequence with the following probability:
P(n′ = n | n, α, β) = Γ(n + β) Γ(α + β) / (Γ(n + α + β) Γ(β)).  (6)
When γ = 1, this expression simply reduces to β/(n + β); when γ ≠ 1, this quantity tends to O(n^{−γ}) as n grows sufficiently large. Comparing this to the O((1 − p)^n) rate from SPANDROP, we can see that when n is large, Beta-SPANDROP recovers more of the original distribution represented by (S̃, y) compared to SPANDROP. In fact, as evidenced by Figure 1(a), the counterfactual sequences generated by Beta-SPANDROP are also more spread out in their length distribution besides covering the original length n with significantly higher probability. A similar analysis can be performed by substituting n and n′ with m and m′, from which we can conclude that as m grows, Beta-SPANDROP is much better at generating counterfactual sequences that preserve the entire supporting fact set Ssup. This is shown in Figure 1(b), where the proportion of “noise-free” examples (i.e., m′ = m) decays exponentially for SPANDROP (γ → ∞) while remaining much higher when γ is sufficiently small. For instance, when p = 0.1, γ = 1 and m = 10, the proportion of noise-free examples for SPANDROP is just 34.9%, while that for Beta-SPANDROP is 47.4%. As we have seen, Beta-SPANDROP is significantly better than its Bernoulli counterpart at assigning probability mass to the original data as well as to generated sequences that contain the entire set of supporting facts. A natural question is, does this come at the cost of diverse counterfactual examples? To answer this question we study the entropy of the distribution that S̃ follows by varying γ and n, and normalize it by n to study the size of the typical set of this distribution. As can be seen in Figure 1(c), as long as γ is large enough, the average entropy per span H̄ degrades very little from the theoretical maximum H(p), which is attained as γ → ∞. Therefore, to balance between introducing noise in the supporting facts and generating diverse examples, we set γ = 1 in our experiments. Using the beta-Bernoulli distribution in dropout. The beta-Bernoulli distribution has been studied in prior work seeking replacements for the (Bernoulli) dropout mechanism (Srivastava et al., 2014). Liu et al. (2019a) set α = β for the beta distribution in their formulation, which limits the dropout rate to always be 0.5. Lee et al. (2018) fix β = 1 and vary α to control the sparsity of the result of dropout, which is similar to Beta-SPANDROP when β = 1. However, we note that these approaches (as with dropout) are focused more on adding noise to internal representations of neural networks to introduce regularization, while SPANDROP operates directly on the input to ablate different components therein, and is thus orthogonal (and potentially complementary) to these approaches. Further, SPANDROP has the benefit of not having to make any assumptions about the model or any changes to it during training, which makes it much more widely applicable.
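As a quick numeric check of the rates quoted above (a small script of our own that only evaluates the closed forms from Remarks 1 and 2), the 34.9% and 47.4% noise-free proportions at p = 0.1, γ = 1, m = 10 can be reproduced directly:

from math import lgamma, exp

def noise_free_spandrop(m, p):
    # SpanDrop (Remark 1): all m supporting facts survive with prob (1 - p)^m.
    return (1.0 - p) ** m

def noise_free_beta_spandrop(m, p, gamma=1.0):
    # Beta-SpanDrop (Remark 2, Eq. 6 with n, n' replaced by m, m'):
    # P(m' = m) = Gamma(m + beta) Gamma(alpha + beta)
    #             / (Gamma(m + alpha + beta) Gamma(beta)).
    alpha, beta = gamma, gamma * (1.0 - p) / p
    return exp(lgamma(m + beta) + lgamma(alpha + beta)
               - lgamma(m + alpha + beta) - lgamma(beta))

print(noise_free_spandrop(10, 0.1))            # ~0.349
print(noise_free_beta_spandrop(10, 0.1, 1.0))  # ~0.474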
3 FINDCATS: DISTILLING SUPERVISION FROM LONG-SEQUENCES In this section, we design a synthetic task of finding the animal name “cat” in a character sequence to a) demonstrate the effectiveness of SPANDROP and Beta-SPANDROP in promoting the performance over a series of problems with different settings, b) analyze the various factors that may affect the efficacy of these approaches, and c) compare it to other counterfactual augmentation techniques like masking on mitigating position bias. 3.1 EXPERIMENTAL SETUP FINDCATS. To understand the effectiveness of SPANDROP and Beta-SPANDROP in an experimental setting, we designed a synthetic task called FINDCATS where the model is trained to discern that given an animal name “cat”, whether a character string contains it as a subsequence (i.e., contains characters in “cat” in order, for instance, “abcdafgbijktma”) or not (e.g., “abcdefhtijklmn”). This allows us to easily control the total sequence length n, the supporting facts size m, as well as easily estimate the supporting fact noise that each SPANDROP variant might introduce. To generate the synthetic training data of FINDCATS, we first generate a sequence consisting of lowercase letters (a to z) that does not contain “cat” as a subsequence. For half of these sequences, we label the tuple (cat,S) with a negative class to indicate that S does not contain “cat” as a subsequence; for the other half, we choose arbitrary (but not necessarily contiguous) positions in S to replace the letters with letters in “cat” from left to right to generate positive examples. In all of our experiments, we evaluate model performance on a held-out set of 10,000 examples to observe classification error. We set sequence length to n = 300 where each letter is a separate span, and chose positions for the letters in the animal name “cat” uniformly at random in the sequence unless otherwise mentioned. Model. We employ three-layer Transformer model (Vaswani et al., 2017) with position embeddings (Devlin et al., 2019) as the sequence encoder, which is implemented with HuggingFace Transformers (Wolf et al., 2019). For each example (“cat”,S, y), we feed “[CLS] cat [SEP] S [SEP]” to the sequence encoder and then construct binary classifier over the output representation of “[CLS]” to predict y. To investigate the effectiveness of SPANDROP, we simply apply SPANDROP to S first before feeding the resulting sequence into the Transformer classifier. 3.2 RESULTS AND ANALYSIS In each experiment, we compare SPANDROP and Beta-SPANDROP at the same drop ratio p. And we further use rejection sampling to remove examples that do not preserve the desired supporting facts to understand the effect of supporting fact noise. Data efficiency. We begin by analyzing the contribution of SPANDROP and Beta-SPANDROP to improving the sample efficiency of the baseline model. To achieve this goal, we vary the size of the training set from 10 to 50,000 and observe the prediction error on the held-out set. We observe from the results in Figure 2(a) that: 1) Both SPANDROP and Beta-SPANDROP significantly improve data efficiency in low-data settings. For instance, when trained on only 200 training examples, SPANDROP variants can achieve the generalization performance of the baseline model trained on 5x to even 20x data. 2) Removing supporting fact noise typically improves data efficiency further by about 2x. 
This indicates it is helpful not to drop spans in Ssup during training when possible, so that the model is always trained with true counterfactual examples rather than sometimes noisy ones. 3) Beta-SPANDROP consistently improves upon the baseline model even when data is abundant. [Figure 2: (a) data efficiency, (b) noise in supporting facts, (c) varying sequence length; legend: Baseline, SPANDROP, SPANDROP (noise-free), Beta-SPANDROP, Beta-SPANDROP (noise-free).] This is likely due to the difficulty of the task when n = 300 and m = 3. Similar to many real-world tasks, the task remains underspecified even when the generalization error is already very low thanks to the large amount of training data available. 4) SPANDROP introduces a training objective that is inconsistent with the original training set, which leads to performance deterioration when there is sufficient training data; this is consistent with our theoretical observation. Effect of supporting fact noise and sequence length. Since SPANDROP introduces noise in the supporting facts (albeit with a low probability), it is natural to ask if such noise is negatively correlated with model performance. We study this by varying the drop ratio p over {0.05, 0.1, 0.15, 0.2, 0.3, 0.4, 0.5} on fixed training sets of size 1,000, and observe the resulting model performance and supporting fact error. As can be seen in Figure 2(b), supporting fact noise increases rapidly as p grows.1 However, we note that although the performance of SPANDROP deteriorates as p increases, that of Beta-SPANDROP stays relatively stable. Inspecting these results more closely, we find that even the performance of the noise-free variants follows a similar trend, although it should not be affected by supporting fact noise. Recalling the observations from our data efficiency experiments, we next turn to the hypothesis that this discrepancy is mainly caused by the inconsistent length distribution SPANDROP introduces. To test this hypothesis, we conduct two separate sets of experiments: 1) training and testing the model on varying sequence lengths {10, 20, 30, 50, 100, 200, 300, 500}, where longer sequences suffer more from the discrepancy between the sequence lengths SPANDROP produces and the original sequence length; and 2) testing the model trained on n = 300 on test sets of different lengths; if our hypothesis about distribution drift were correct, we should see the performance of SPANDROP models peaking around n′ = n(1 − p), while the performance of Beta-SPANDROP is less affected by sequence length. As can be seen from Figures 2(c) and 2(d), our experimental results support this hypothesis well. Specifically, in Figure 2(c), while the performance of both SPANDROP variants deteriorates as n grows and the task becomes more challenging and underspecified, SPANDROP deteriorates at a faster rate even when we remove the effect of supporting fact noise. On the other hand, we can clearly see in Figure 2(d) that SPANDROP performance peaks around sequences of length 270 (= n(1 − p) = 300 × (1 − 0.1)) before rapidly deteriorating, while Beta-SPANDROP is unaffected until the test sequence length exceeds that of all examples seen during training. 1 Note that the noise in our experiments is lower than what would be predicted by theory, because in practice the initial sequence S might already contain parts of “cat” before it is inserted. This creates redundant sets of supporting facts for this task and reduces supporting fact noise, especially when n is large. Mitigating position bias.
Besides SPANDROP, replacement-based techniques like masking can also be applied to introduce counterfactual examples into sequence model training, where elements in the sequence are replaced by a special symbol that is not used at test time. We implement SPANMASK in the same way as SPANDROP except spans are replaced rather than removed when the sampled “drop mask” i is 0. We first inspect whether SPANMASK benefits from the same beta-Bernoulli distribution we use in SPANDROP. As can be seen in Figure 2(e), the gain from switching to a betaBernoulli distribution provides negligible benefit to SPANMASK, which does not alter the sequence length of the input to begin with. We also see that SPANMASK results in significantly higher error than both SPANDROP and Beta-SPANDROP in this setting. We further experiment with introducing position bias into the training data (but not the test data) to test whether these method help the model generalize to an unseen setting. Specifically, instead of selecting the position for the characters “cat” uniformly at random, we train the model with a “fixed position” dataset where they always occur at indices (10, 110, 210), and a “first 100” dataset where they are uniformly distributed among the first 100 letters. As can be seen in Figure 2(f), both the baseline and SPANMASK models overfit to the position bias in the “fixed” setting, while SPANDROP techniques significantly reduce zeroshot generalization error. In the “first 100” setting, Beta-SPANDROP consistently outperforms its Bernoulli counterpart and SPANMASK at improving the performance of the baseline model as well, indicating that SPANDROP variants are effective at reducing the position bias of the model. 4 EXPERIMENTS ON NATURAL LANGUAGE DATA To examine the efficacy of the proposed SPANDROP techniques on realistic data, we conduct experiments on four natural language processing datasets that represent the tasks of single- and multi-hop extractive question answering, multiple-choice question answering, and relation extraction. We focus on showing the effect of SPANDROP instead of pursuing the state of the art in these experiments. Datasets. We use four natural language processing datasets: SQuAD 1.1 (Rajpurkar et al., 2016), where models answer questions on a paragraph of text from Wikipedia; MultiRC (Khashabi et al., 2018), which is a multi-choice reading comprehension task in which questions can only be answered by taking into account information from multiple sentences; HotpotQA (Yang et al., 2018), which requires models to perform multi-hop reasoning over multiple Wikipedia pages to answer questions; and DocRED (Yao et al., 2019), which is a document-level data set for relation extraction. For the SQuAD dataset, we define spans as collections of one or more consecutive tokens to show that SPANDROP can be applied to different granularities. For the rest three datasets, we define spans to be sentences since supporting facts are provided at sentence level. For all of these tasks, we report standard exact match (EM) and F1 metrics where applicable, for which higher scores are better. We refer the reader to the appendix for details about the statistics and metrics of these datasets. Model. We build our models for these tasks using ELECTRA (Clark et al., 2019), since it is shown to perform well across a range of NLP tasks recently. We introduce randomly initialized taskspecific parameters designed for each task following prior work on each dataset, and finetune these models on each dataset to report results. 
We refer the reader to the appendix for training details and hyperparameter settings. Main results. We first present the performance of our implemented models and their combination with SPANDROP variants on the four natural language processing tasks. We also include results from representative prior work on each dataset for reference (detailed in the appendix), and summarize the results in Table 1. We observe that: 1) our implemented models achieve competitive and sometimes significantly better performance (in the cases of HotpotQA, SQuAD, and DocRED) compared to published results, especially considering that we do not tailor our models to each task too much; 2) SPANDROP improves the performance over these models even when the training set is large and that the model is already performing well; 3) Models trained with Beta-SPANDROP consistently perform better or equally well with their SPANDROP counterparts across all datasets, demonstrating that our observations on the synthetic datasets generalize well to real-world ones. We note that the performance gains on real-world data is less significant, which likely results from the fact spans in the synthetic task are independent from each other, which is not the case in natural language data. We further evaluate the performance of our trained models on the MultiRC testing data, and obtain results of EM/F1: 41.1/79.8, 39.9/78.5 and 39.1/78.2 for models with Beta-SPANDROP, SPANDROP, and without SPANDROP, respectively. This indicates that both Beta-SPANDROP and SPANDROP improve the model generalization ability, and Beta-SPANDROP is better than SPANDROP, improving EM/F1 with 2.0/1.6 absolute over the baseline. Next, to better understand whether the properties of SPANDROP and Beta-SPANDROP we observe on the synthetic data generalize to real-world data, we further perform a set of analysis experiments on SQuAD. Specifically, we are interested in studying the effect of the amount of training data, the span drop ratio p, and the choice of span size on performance. Effect of low data. To understand SPANDROP’s regularizing effect when training data is scarce, we study the model’s generalization performance when training on only 0.1% of the training data (around 100 examples) to using the entire training set (around 88k examples). As can be seen in Figure 3 (left), both SPANDROP and Beta-SPANDROP significantly improve model performance when the amount of training data is extremely low. As the amount of training data increases, this gap slowly closes but remains consistently positive. The final gap when 100% of the training data is used is still sufficient to separate top-2 performing systems on this dataset. Impact of drop ratio. We compare SPANDROP and Beta-SPANDROP by controlling how likely each span is dropped on average (drop ratio p). Recall from our experiments on FINDCATS that larger p will result in distribution drift from the original training set for SPANDROP but not BetaSPANDROP, thus the performance of the former deteriorates as p increases while the latter is virtually not affected. As can be seen in Figure 3 (middle), our observation on real-world data is consistent with this theoretical prediction, and indicate that Beta-SPANDROP is a better technique for data augmentation should one want to increase sequence diversity by setting p to a larger value. Impact of span size. We train the model with SPANDROP on SQuAD with varying span sizes of {1, 2, 4, 8, 16, 32, 64} tokens per span to understand the effect of this hyperparameter. 
We observe in Figure 3 (right) that as span size grows, the generalization performance of the model first holds roughly constant, then slowly deteriorates as span size grows too large. This suggests that the main contributors to generalization performance might have been the total number of spans in the entire sequence, which reduces with larger spans. This results in fewer potential augmented sequences for counterfactual learning, therefore lowering regularization strength. This observation is consistent with that on our synthetic data in our preliminary experiments, where we see that controlling for other factors, larger span sizes yield deteriorated generalization performance (data not shown due to space limit). This also suggests that while SPANDROP works with arbitrary span sizes, the optimal choice of spans for different tasks warrants further investigation, which we leave to future work. 5 RELATED WORK Long Sequence Inference. Many applications require the prediction/inference over long sequences, such as multi-hop reading comprehension (Yang et al., 2018; Welbl et al., 2018), long document summarization (Huang et al., 2021), document-level information extraction (Yao et al., 2019) in natural language processing, long sequence time-series prediction (Zhou et al., 2021a), promoter region and chromatin-profile prediction in DNA sequence (Oubounyt et al., 2019; Zhou & Troyanskaya, 2015) in Genomics etc, where not all elements in the long sequence contribute equally to the desired output. Aside from approaches we have discussed that attempt to approximate all pair-wise interactions between elements in a sequence, more recent work has also investigated compressing long sequences into shorter ones to distill the information therein for prediction or representation learning (Rae et al., 2020; Goyal et al., 2020; Kim & Cho, 2021). Sequence Data Augmentation. Data augmentation is an effective common technique for underspecified tasks like long sequence inference. Feng et al. (2021) propose to group common data augmentation techniques in natural language processing into three categories: 1) rule-based methods (Zhang et al., 2015; Wei & Zou, 2019; Şahin & Steedman, 2018), which apply a set of predefined operations over the raw input, such as removing, adding, shuffling and replacement; 2) example mixup-based methods (Guo et al., 2019; Guo, 2020; Chen et al., 2020; Jindal et al., 2020), which, inspired from Mixup in computer vision (Zhang et al., 2018), perform interpolation between continuous features like word embeddings and sentence embeddings; 3) model-based methods (Xie et al., 2020; Sennrich et al., 2016), which use trained models to generate new examples (e.g., back translation Xie et al., 2020). Most of existing rule-based data augmentation methods operate at the token/word level (Feng et al., 2021), such as word shuffle/replacement/addition (Wei & Zou, 2019). Shuffle-based techniques are less applicable when order information is crucial in the raw data (Lan et al., 2019, e.g., in natural language). Moreover, these operations might not be trivial in implementation over larger spans (e.g., at the phrase or sentence level). For example, while replacing tokens require selecting candidates from a fixed vocabulary which can be provided by well estimated language models (Clark et al., 2019), replacing phrases or sentences is significantly more challenging since the “vocabulary” is unbounded and marginal probability difficult to estimate. 
In contrast, our proposed SPANDROP supports data augmentation at multiple granularities, as the spans in SPANDROP can be of any length, and it preserves sequence order since the drop operation does not change the relative order of the original input. 6 CONCLUSION In this paper, we presented SPANDROP, a simple and effective method for learning from long sequences, which ablates parts of the sequence at random to generate counterfactual data to distill the sparse supervision signal that is predictive of the desired output. We show via theoretical analysis and carefully designed synthetic datasets that SPANDROP and its variant based on the beta-Bernoulli distribution help models achieve competitive performance with a fraction of the data by introducing diverse augmented training examples, and generalize better to previously unseen data. Our experiments on four real-world NLP datasets demonstrate that besides these benefits, SPANDROP can further improve upon powerful pretrained Transformer models even when data is abundant.
1. What is the focus and contribution of the paper on SPANDROP? 2. What are the strengths of the proposed approach, particularly its simplicity and performance? 3. What are the weaknesses of the paper regarding technical details and comparisons with other works? 4. How does the reviewer assess the novelty and significance of the proposed method in addressing learning problems for long sequences? 5. What are the suggestions for improving and enhancing the experiments in the paper?
Summary Of The Paper Review
Summary Of The Paper This work proposes SPANDROP, a simple variant of dropout, working on the spans of long sequences. SPANDROP randomly ablates parts of a sequence at a time and asks the model to perform the same task to emulate counterfactual learning and achieve input attribution. The method is tested on both toy tasks and four NLP tasks. Review Strengths: The proposed method is simple and easy to understand. The method outperforms baseline methods. Weaknesses Several technical details are not discussed. How to partition a long sequence into spans? For text sequences? For time series? Will the final results be sensitive to the partition of spans? If each word in NLP sequences is a span, the proposed method is the same as the word-level dropout baseline studied in previous works, e.g., [1]. What's the tech novelty of this work? While the authors focus on the "learning problems for long sequences where not all input elements contribute equally to the desired output", the uniqueness of this kind of problems is not clear to me. In most (if not all) real-world sequence classification and prediction problems, not all input elements contribute equally to the desired output; otherwise, the problems will become much easier. Experiments need to be improved and enhanced, e.g., As reviewed in the second paragraph, there are many Transformer variants proposed for long sequences, but none of them is compared. Although these approaches aim to approximate the original pairwise interaction with lower cost and are often interested in still capturing the interactions between every pair of input elements (e.g., the long sequence benchmark proposed by Tay et al., 2020), they can still be directly applied to long sequences where not all input elements contribute equally to the desired output. The four NLP datasets used in experiments are not representative tasks for long sequences. I suggest to test on the public benchmark datasets, e.g., [2]. [1] Soft Contextual Data Augmentation for Neural Machine Translation, ACL 2019. [2] Long range arena: A benchmark for efficient transformers. In International Conference on Learning Representations, 2020.
ICLR
Title SpanDrop: Simple and Effective Counterfactual Learning for Long Sequences Abstract Distilling supervision signal from a long sequence to make predictions is a challenging task in machine learning, especially when not all elements in the input sequence contribute equally to the desired output. In this paper, we propose SPANDROP, a simple and effective data augmentation technique that helps models identify the true supervision signal in a long sequence with very few examples. By directly manipulating the input sequence, SPANDROP randomly ablates parts of the sequence at a time and ask the model to perform the same task to emulate counterfactual learning and achieve input attribution. Based on theoretical analysis of its properties, we also propose a variant of SPANDROP based on the beta-Bernoulli distribution, which yields diverse augmented sequences while providing a learning objective that is more consistent with the original dataset. We demonstrate the effectiveness of SPANDROP on a set of carefully designed toy tasks, as well as various natural language processing tasks that require reasoning over long sequences to arrive at the correct answer, and show that it helps models improve performance both when data is scarce and abundant. 1 INTRODUCTION Building effective machine learning systems for long sequences is a challenging and important task, which helps us better understand underlying patterns in naturally occurring sequential data like long texts (Radford et al., 2019), protein sequences (Jumper et al., 2021), financial time series (Bao et al., 2017), etc. Recently, there is growing interest in studying neural network models that can capture long-range correlations in sequential data with high computational, memory, and statistical efficiency, especially widely adopted Transformer models (Vaswani et al., 2017). Previous work approach long-sequence learning in Transformers largely by introducing computational approaches to replace the attention mechanism with more efficient counterparts. These approaches include limiting the input range over which the attention mechanism is applied (Kitaev et al., 2019) to limiting sequence-level attention to only a handful of positions (Beltagy et al., 2020; Zaheer et al., 2020). Other researchers make use of techniques akin to the kernel trick to eliminate the need to compute or instantiate the costly attention matrix (Peng et al., 2020; Katharopoulos et al., 2020; Choromanski et al., 2020). Essentially, these approaches aim to approximate the original pairwise interaction with lower cost, and are often interested in still capturing the interactions between every pair of input elements (e.g., the long sequence benchmark proposed by Tay et al., 2020). In this paper, we instead investigate learning problems for long sequences where not all input elements contribute equally to the desired output. Natural examples that take this form include sentiment classification for long customer review documents (where a few salient sentiment words contribute the most), question answering from a large document (where each question typically requires a small number of supporting sentences to answer), key phrase detection in audio processing (where a small number of recorded frames actually determine the prediction), as well as detecting a specific object from a complex scene (where, similarly, a small amount of pixels determine the outcome), to name a few. 
In these problems, it is usually counterproductive to try and make direct use of the entire input if the contributing portion is small or sparse, which results in a problem of underspecification (i.e., the data does not sufficiently define the goal for statistical models). One approach to address this problem is annotating the segments or neighborhoods that directly contribute to the outcome in the entire input. This could take the form of a subset of sentences that answer a question or describe the relation between entities in a paragraph (Yang et al., 2018; Yao et al., 2019), which function as explainable evidence that supplements the answer. When such annotation is not feasible, researchers and practitioners often need to resort to either collecting more input-output pairs or designing problem-specific data augmentation techniques to make up for the data gap. For real-valued data, this often translates to random transformations (e.g., shifting or flipping an image); for symbolic data like natural language, techniques like masking or substitution are more commonly used (e.g., randomly swapping words with a special mask token or other words). While these approaches have proven effective in some tasks, each has limitations that prevents it from being well-suited for the underspecification scenario. For instance, while global feature transformations enhance groupinvariance in learned representations, they do not directly help with better locating the underlying true stimulus. On the other hand, while replacement techniques like masking and substitution help ablate parts of the input, they are susceptible to the position bias of where the true stimulus might occur in the input. Furthermore, while substitution techniques can help create challenging contrastive examples, it is significantly more difficult to design them for complex symbolic sequences (e.g., replacing a phrase naturally in a sentence). To address these challenges, we propose SPANDROP, a simple and effective technique that helps models distill sparse supervision signal from long sequences when the problem is underspecified. Similar to replacement-based techniques such as masking and substitution, SPANDROP directly ablates parts of the input at random to construct counterfactual examples that preserve the original supervision signal with high probability. Instead of preserving the original sequence positions, however, SPANDROP directly removes ablated elements from the input to mitigate any bias that is related to the absolute positions of elements (rather than the relative positions between them) in the input. Upon closer examination of its theoretical and empirical properties, we further propose a more effective variant of SPANDROP based on the Beta-Bernoulli distribution that enhances the consistency of the augmented objective function with the original one. We demonstrate via carefully designed toy experiments that SPANDROP not only helps models achieve up to 20⇥ sample-efficiency in low-data settings, but also further reduces overfitting even when training data is abundant. We find that it is very effective at mitigating position bias compared to replacement-based counterfactual approaches, and enhances out-of-distribution generalization effectively. 
We further experiment on four natural language processing tasks that require models to answer questions or extract entity relations from long texts, and demonstrate that SPANDROP can improve the performance of already competitive neural models without any change in model architecture. 2 METHOD In this section, we first formulate the problem of sequence inference, where the model takes sequential data as input to make predictions. Then, we introduce SPANDROP, a simple and effective data augmentation technique for long sequence inference, and analyze its theoretical properties. 2.1 PROBLEM DEFINITION Sequence Inference. We consider a task where a model takes a sequence S as input and predicts the output y. We assume that S consists of n disjoint but contiguous spans, each representing a part of the sequence in order: S = (s1, . . . , sn). One example of sequence inference is sentiment classification from a paragraph of text, where S is the paragraph and y the desired sentiment label. Spans could be words, phrases, sentences, or a mixture of these in the paragraph. Another example is time series prediction, where S is the historical data and y is the value at the next time step. Supporting facts. Given an input-output pair (S, y) for sequence prediction, we assume that y is truly determined by only a subset of spans in S. More formally, we assume that there is a subset of spans Ssup ⊂ {s1, s2, . . . , sn} such that y is independent of si if si ∉ Ssup. In sentiment classification, Ssup could consist of important sentiment words or conjunctions (like “good”, “bad”, “but”); in time series prediction, it could reflect the most recent time steps as well as those a few cycles away if the series is periodic. For simplicity, we will denote the size of this set m = |Ssup|, and restrict our attention to tasks where m ≪ n, such as those described in the previous section. 2.2 SPANDROP In a long sequence inference task with sparse supporting facts (m ≪ n), most of the spans in the input sequence will not contribute to the prediction of y, but they will introduce spurious correlations in a low-data scenario. SPANDROP generates new data instances (S̃, y) by ablating these spans at random, while preserving the supporting facts with high probability so that the model is still trained to make the correct prediction y. This is akin to counterfactually determining whether each span truly determines the outcome y by asking what the prediction would have been without it. Definition 1 (SPANDROP). Formally, given a sequence S that consists of spans (s1, s2, . . . , sn), SPANDROP generates a new sequence S̃ as follows: $\delta_i \overset{\text{i.i.d.}}{\sim} \mathrm{Bernoulli}(1-p), \quad \tilde{S} = (s_i)_{i:\,\delta_i = 1},$ (1) where p is the hyperparameter that determines the probability of dropping a span (δi = 1 indicates that span si is kept). Note that SPANDROP does not require introducing substitute spans or artificial symbols when ablating spans from the input sequence. It makes the most of the natural sequence as it occurs in the original training data, and preserves the relative order between spans that are not dropped, which is often helpful in understanding sequential data (e.g., time series or text). It is also not difficult to establish that the resulting sequence S̃ can preserve all of the m supporting facts with high probability regardless of how large n is. Remark 1.
The new sequence length n′ = |S̃| and the number of preserved supporting facts m′ = |S̃ ∩ Ssup| follow binomial distributions with parameters (n, 1 − p) and (m, 1 − p), respectively: $P(n' \mid n, p) = \binom{n}{n'}(1-p)^{n'}p^{n-n'}, \quad P(m' \mid m, p) = \binom{m}{m'}(1-p)^{m'}p^{m-m'}.$ (2) Therefore, the proportion of sequences where all supporting facts are retained (i.e., m′ = m) is (1 − p)^m, which is independent of n. This means that as long as the total number of supporting facts in the sequence is bounded, then regardless of the sequence length, we can always choose p carefully such that we end up with many valid new examples with bounded noise introduced to supporting facts. Note that our analysis so far relies only on the assumption that m is known or can be estimated, and thus it can be applied to tasks where the precise set of supporting facts Ssup is unknown. More formally, the amount of new examples can be characterized by the size of the typical set of S̃, i.e., the set of sequences that the randomly ablated sequence will fall into with high probability. The size of the typical set for SPANDROP is approximately 2^{nH(p)}, where H(p) is the binary entropy of a Bernoulli random variable with probability p. Intuitively, these results indicate that the total number of counterfactual examples generated by SPANDROP scales exponentially in n, but the level of supporting fact noise can be bounded as long as m is small. However, this formulation of SPANDROP does have a notable drawback that could potentially hinder its efficacy. Because the new sequence length n′ follows a binomial distribution, its mean is n(1 − p) and its variance is np(1 − p). For sufficiently large n, most of the resulting S̃ will have lengths that concentrate around the mean with a width of O(√n), which creates an artificial and permanent distribution drift from the original length (see Figure 1(a)). Furthermore, if we know the identity of Ssup and keep these spans during training, this length reduction will bias the training set towards examples in which spans in Ssup are easier to locate. In the next subsection, we will introduce a variant of SPANDROP based on the beta-Bernoulli distribution that alleviates this issue. 2.3 BETA-SPANDROP To address the problem of distribution drift with SPANDROP, we introduce a variant that is based on the beta-Bernoulli distribution. The main idea is that instead of dropping each span in a sequence independently with a fixed probability p, we first sample a sequence-level probability π at which spans are dropped from a beta distribution, then use this probability to perform SPANDROP. Definition 2 (Beta-SPANDROP). Let α = γ and β = γ · (1 − p)/p, where γ > 0 is a scaling hyperparameter. Beta-SPANDROP generates S̃ over S as: $\pi \sim \mathrm{B}(\alpha, \beta), \quad \delta_i \overset{\text{i.i.d.}}{\sim} \mathrm{Bernoulli}(1-\pi), \quad \tilde{S} = (s_i)_{i:\,\delta_i = 1},$ (3) where B(α, β) is the beta distribution with parameters α and β. It can be easily demonstrated that in Beta-SPANDROP, the probability that each span is dropped is still controlled by p, the same as in SPANDROP, since the probability of keeping span si is E[δi | p] = E[E[δi | π] | p] = E[1 − π | p] = 1 − α/(α + β) = 1 − p. In fact, we can show that as γ → ∞, Beta-SPANDROP degenerates into SPANDROP, since the beta distribution would assign all probability mass to π = p. Despite the simplicity of its implementation, Beta-SPANDROP is significantly less likely to introduce unwanted data distribution drift, while still being capable of generating diverse counterfactual examples to regularize the training of sequence inference models. This is due to the following properties: Remark 2.
The new sequence length n′ = |S̃| and the number of preserved supporting facts m′ = |S̃ ∩ Ssup| follow beta-binomial distributions with parameters (n, β, α) and (m, β, α), respectively: $P(n' \mid n, \alpha, \beta) = \frac{\Gamma(n+1)}{\Gamma(n'+1)\,\Gamma(n-n'+1)} \cdot \frac{\Gamma(n'+\beta)\,\Gamma(n-n'+\alpha)}{\Gamma(n+\alpha+\beta)} \cdot \frac{\Gamma(\alpha+\beta)}{\Gamma(\alpha)\,\Gamma(\beta)},$ (4) $P(m' \mid m, \alpha, \beta) = \frac{\Gamma(m+1)}{\Gamma(m'+1)\,\Gamma(m-m'+1)} \cdot \frac{\Gamma(m'+\beta)\,\Gamma(m-m'+\alpha)}{\Gamma(m+\alpha+\beta)} \cdot \frac{\Gamma(\alpha+\beta)}{\Gamma(\alpha)\,\Gamma(\beta)},$ (5) where $\Gamma(z) = \int_0^\infty x^{z-1}e^{-x}\,dx$ is the gamma function. As a result, we can show that Beta-SPANDROP preserves the entire original sequence with the following probability: $P(n' = n \mid n, \alpha, \beta) = \frac{\Gamma(n+\beta)\,\Gamma(\alpha+\beta)}{\Gamma(n+\alpha+\beta)\,\Gamma(\beta)}.$ (6) When γ = 1, this expression simply reduces to β/(n + β); when γ ≠ 1, this quantity tends to O(n^{−γ}) as n grows sufficiently large. Comparing this to the O((1 − p)^n) rate from SPANDROP, we can see that when n is large, Beta-SPANDROP recovers more of the original distribution represented by (S̃, y) compared to SPANDROP. In fact, as evidenced by Figure 1(a), the counterfactual sequences generated by Beta-SPANDROP are also more spread out in their length distribution besides covering the original length n with significantly higher probability. A similar analysis can be performed by substituting n and n′ with m and m′, where we can conclude that as m grows, Beta-SPANDROP is much better at generating counterfactual sequences that preserve the entire supporting fact set Ssup. This is shown in Figure 1(b), where the proportion of “noise-free” examples (i.e., m′ = m) decays exponentially for SPANDROP (γ = ∞) while remaining much higher when γ is sufficiently small. For instance, when p = 0.1, γ = 1, and m = 10, the proportion of noise-free examples for SPANDROP is just 34.9%, while that for Beta-SPANDROP is 47.4%. As we have seen, Beta-SPANDROP is significantly better than its Bernoulli counterpart at assigning probability mass to the original data as well as to generated sequences that contain the entire set of supporting facts. A natural question is, does this come at the cost of diverse counterfactual examples? To answer this question we study the entropy of the distribution that S̃ follows by varying γ and n, and normalize it by n to study the size of the typical set of this distribution. As can be seen in Figure 1(c), as long as γ is large enough, the average entropy per span H̄ degrades very little from the theoretical maximum, which is H(p), attained when γ = ∞. Therefore, to balance between introducing noise in the supporting facts and generating diverse examples, we set γ = 1 in our experiments. Using the beta-Bernoulli distribution in dropout. The beta-Bernoulli distribution has been studied in prior work seeking replacements for the (Bernoulli) dropout mechanism (Srivastava et al., 2014). Liu et al. (2019a) set α = β for the beta distribution in their formulation, which limits the dropout rate to always be 0.5. Lee et al. (2018) fix β = 1 and vary α to control the sparsity of the result of dropout, which is similar to Beta-SPANDROP when β = 1. However, we note that these approaches (as with dropout) are focused more on adding noise to internal representations of neural networks to introduce regularization, while SPANDROP operates directly on the input to ablate different components therein, and is thus orthogonal (and potentially complementary) to these approaches. Further, SPANDROP has the benefit of not having to make any assumptions about the model or any changes to it during training, which makes it much more widely applicable.
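To make the two sampling schemes concrete, the following is a minimal sketch of how SPANDROP and Beta-SPANDROP could be implemented as input-level augmentation over pre-segmented spans. This is our own illustration, not the authors' released code; the function names, the use of NumPy, and the sentence-level example are assumptions.

```python
import numpy as np

def span_drop(spans, p, rng=None):
    """SPANDROP (Definition 1): independently keep each span with probability 1 - p.

    `spans` is any list of pre-segmented units (tokens, phrases, sentences, ...);
    the relative order of the kept spans is preserved and no placeholder is inserted.
    """
    rng = rng or np.random.default_rng()
    keep = rng.random(len(spans)) >= p          # delta_i ~ Bernoulli(1 - p)
    return [s for s, k in zip(spans, keep) if k]

def beta_span_drop(spans, p, gamma=1.0, rng=None):
    """Beta-SPANDROP (Definition 2): sample a sequence-level drop rate pi ~ Beta(alpha, beta)
    with alpha = gamma and beta = gamma * (1 - p) / p, then drop each span with probability pi.
    """
    rng = rng or np.random.default_rng()
    alpha, beta = gamma, gamma * (1.0 - p) / p
    pi = rng.beta(alpha, beta)                  # sequence-level drop probability
    keep = rng.random(len(spans)) >= pi         # delta_i ~ Bernoulli(1 - pi)
    return [s for s, k in zip(spans, keep) if k]

# Illustrative use with sentence-level spans (as in the HotpotQA/MultiRC/DocRED setup).
sentences = ["Sentence 1.", "Sentence 2.", "Sentence 3.", "Sentence 4.", "Sentence 5."]
augmented = beta_span_drop(sentences, p=0.1)
```

At training time, the surviving spans would simply be concatenated back into the model input for that example; as with standard augmentation, no dropping would be applied at evaluation time.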
3 FINDCATS: DISTILLING SUPERVISION FROM LONG SEQUENCES In this section, we design a synthetic task of finding the animal name “cat” in a character sequence to a) demonstrate the effectiveness of SPANDROP and Beta-SPANDROP in improving performance over a series of problems with different settings, b) analyze the various factors that may affect the efficacy of these approaches, and c) compare them to other counterfactual augmentation techniques like masking on mitigating position bias. 3.1 EXPERIMENTAL SETUP FINDCATS. To understand the effectiveness of SPANDROP and Beta-SPANDROP in an experimental setting, we designed a synthetic task called FINDCATS where the model is trained to discern whether, given the animal name “cat”, a character string contains it as a subsequence (i.e., contains the characters in “cat” in order, for instance, “abcdafgbijktma”) or not (e.g., “abcdefhtijklmn”). This allows us to easily control the total sequence length n and the supporting fact set size m, as well as easily estimate the supporting fact noise that each SPANDROP variant might introduce. To generate the synthetic training data of FINDCATS, we first generate a sequence consisting of lowercase letters (a to z) that does not contain “cat” as a subsequence. For half of these sequences, we label the tuple (cat, S) with a negative class to indicate that S does not contain “cat” as a subsequence; for the other half, we choose arbitrary (but not necessarily contiguous) positions in S to replace the letters with the letters in “cat” from left to right to generate positive examples. In all of our experiments, we evaluate model performance on a held-out set of 10,000 examples to observe classification error. We set the sequence length to n = 300, where each letter is a separate span, and choose positions for the letters in the animal name “cat” uniformly at random in the sequence unless otherwise mentioned. Model. We employ a three-layer Transformer model (Vaswani et al., 2017) with position embeddings (Devlin et al., 2019) as the sequence encoder, which is implemented with HuggingFace Transformers (Wolf et al., 2019). For each example (“cat”, S, y), we feed “[CLS] cat [SEP] S [SEP]” to the sequence encoder and then construct a binary classifier over the output representation of “[CLS]” to predict y. To investigate the effectiveness of SPANDROP, we simply apply SPANDROP to S before feeding the resulting sequence into the Transformer classifier. 3.2 RESULTS AND ANALYSIS In each experiment, we compare SPANDROP and Beta-SPANDROP at the same drop ratio p. We further use rejection sampling to remove examples that do not preserve the desired supporting facts in order to understand the effect of supporting fact noise. Data efficiency. We begin by analyzing the contribution of SPANDROP and Beta-SPANDROP to improving the sample efficiency of the baseline model. To achieve this goal, we vary the size of the training set from 10 to 50,000 and observe the prediction error on the held-out set. We observe from the results in Figure 2(a) that: 1) Both SPANDROP and Beta-SPANDROP significantly improve data efficiency in low-data settings. For instance, when trained on only 200 training examples, SPANDROP variants can achieve the generalization performance of the baseline model trained on 5× to even 20× the data. 2) Removing supporting fact noise typically improves data efficiency further by about 2x.
This indicates it is helpful not to drop spans in Ssup during training when possible, so that the model is always trained with true counterfactual examples rather than sometimes noisy ones. 3) Beta-SPANDROP consistently improves upon the baseline model even when data is abundant. This is likely due to the difficulty of the task when n = 300 and m = 3. Similar to many real-world tasks, the task remains underspecified even when the generalization error is already very low thanks to the large amount of training data available. 4) SPANDROP introduces a training objective that is inconsistent with the original training set, which leads to performance deterioration when there is sufficient training data; this is consistent with our theoretical observation. [Figure 2: (a) Data efficiency, (b) Noise in supporting facts, (c) Varying sequence length. Legend: Baseline, SPANDROP, SPANDROP (noise-free), Beta-SPANDROP, Beta-SPANDROP (noise-free).] Effect of supporting fact noise and sequence length. Since SPANDROP introduces noise in the supporting facts (albeit with a low probability), it is natural to ask if such noise is negatively correlated with model performance. We study this by varying the drop ratio p over {0.05, 0.1, 0.15, 0.2, 0.3, 0.4, 0.5} on fixed training sets of size 1,000, and observe the resulting model performance and supporting fact error. As can be seen in Figure 2(b), supporting fact noise increases rapidly as p grows.¹ However, we note that although the performance of SPANDROP deteriorates as p increases, that of Beta-SPANDROP stays relatively stable. Inspecting these results more closely, we find that even the performance of the noise-free variants follows a similar trend, although it should not be affected by supporting fact noise. Recalling the observations from our data efficiency experiments, we next turn to the hypothesis that this discrepancy is mainly caused by the inconsistent length distribution SPANDROP introduces. To test this hypothesis, we conduct two separate sets of experiments: 1) training and testing the model on varying sequence lengths {10, 20, 30, 50, 100, 200, 300, 500}, where longer sequences suffer more from the discrepancy between the sequence lengths produced by SPANDROP and the original sequence length; and 2) testing the model trained on n = 300 on test sets of different lengths; if our hypothesis about distribution drift were correct, we should see SPANDROP models’ performance peaking around n′ = n(1 − p), while the performance of Beta-SPANDROP should be less affected by sequence length. As can be seen from Figures 2(c) and 2(d), our experimental results support this hypothesis well. Specifically, in Figure 2(c), while the performance of both SPANDROP variants deteriorates as n grows and the task becomes more challenging and underspecified, SPANDROP deteriorates at a faster rate even when we remove the effect of supporting fact noise. On the other hand, we can clearly see in Figure 2(d) that SPANDROP performance peaks around sequences of length 270 (= n(1 − p) = 300 × (1 − 0.1)) before rapidly deteriorating, while Beta-SPANDROP is unaffected until the test sequence length exceeds that of all examples seen during training. ¹Note that the noise in our experiments is lower than what would be predicted by theory, because in practice the initial sequence S might already contain parts of “cat” before it is inserted. This creates redundant sets of supporting facts for this task and reduces supporting fact noise, especially when n is large. Mitigating position bias.
Besides SPANDROP, replacement-based techniques like masking can also be applied to introduce counterfactual examples into sequence model training, where elements in the sequence are replaced by a special symbol that is not used at test time. We implement SPANMASK in the same way as SPANDROP except spans are replaced rather than removed when the sampled “drop mask” i is 0. We first inspect whether SPANMASK benefits from the same beta-Bernoulli distribution we use in SPANDROP. As can be seen in Figure 2(e), the gain from switching to a betaBernoulli distribution provides negligible benefit to SPANMASK, which does not alter the sequence length of the input to begin with. We also see that SPANMASK results in significantly higher error than both SPANDROP and Beta-SPANDROP in this setting. We further experiment with introducing position bias into the training data (but not the test data) to test whether these method help the model generalize to an unseen setting. Specifically, instead of selecting the position for the characters “cat” uniformly at random, we train the model with a “fixed position” dataset where they always occur at indices (10, 110, 210), and a “first 100” dataset where they are uniformly distributed among the first 100 letters. As can be seen in Figure 2(f), both the baseline and SPANMASK models overfit to the position bias in the “fixed” setting, while SPANDROP techniques significantly reduce zeroshot generalization error. In the “first 100” setting, Beta-SPANDROP consistently outperforms its Bernoulli counterpart and SPANMASK at improving the performance of the baseline model as well, indicating that SPANDROP variants are effective at reducing the position bias of the model. 4 EXPERIMENTS ON NATURAL LANGUAGE DATA To examine the efficacy of the proposed SPANDROP techniques on realistic data, we conduct experiments on four natural language processing datasets that represent the tasks of single- and multi-hop extractive question answering, multiple-choice question answering, and relation extraction. We focus on showing the effect of SPANDROP instead of pursuing the state of the art in these experiments. Datasets. We use four natural language processing datasets: SQuAD 1.1 (Rajpurkar et al., 2016), where models answer questions on a paragraph of text from Wikipedia; MultiRC (Khashabi et al., 2018), which is a multi-choice reading comprehension task in which questions can only be answered by taking into account information from multiple sentences; HotpotQA (Yang et al., 2018), which requires models to perform multi-hop reasoning over multiple Wikipedia pages to answer questions; and DocRED (Yao et al., 2019), which is a document-level data set for relation extraction. For the SQuAD dataset, we define spans as collections of one or more consecutive tokens to show that SPANDROP can be applied to different granularities. For the rest three datasets, we define spans to be sentences since supporting facts are provided at sentence level. For all of these tasks, we report standard exact match (EM) and F1 metrics where applicable, for which higher scores are better. We refer the reader to the appendix for details about the statistics and metrics of these datasets. Model. We build our models for these tasks using ELECTRA (Clark et al., 2019), since it is shown to perform well across a range of NLP tasks recently. We introduce randomly initialized taskspecific parameters designed for each task following prior work on each dataset, and finetune these models on each dataset to report results. 
We refer the reader to the appendix for training details and hyperparameter settings. Main results. We first present the performance of our implemented models and their combination with SPANDROP variants on the four natural language processing tasks. We also include results from representative prior work on each dataset for reference (detailed in the appendix), and summarize the results in Table 1. We observe that: 1) our implemented models achieve competitive and sometimes significantly better performance (in the cases of HotpotQA, SQuAD, and DocRED) compared to published results, especially considering that we do not tailor our models to each task too much; 2) SPANDROP improves the performance over these models even when the training set is large and that the model is already performing well; 3) Models trained with Beta-SPANDROP consistently perform better or equally well with their SPANDROP counterparts across all datasets, demonstrating that our observations on the synthetic datasets generalize well to real-world ones. We note that the performance gains on real-world data is less significant, which likely results from the fact spans in the synthetic task are independent from each other, which is not the case in natural language data. We further evaluate the performance of our trained models on the MultiRC testing data, and obtain results of EM/F1: 41.1/79.8, 39.9/78.5 and 39.1/78.2 for models with Beta-SPANDROP, SPANDROP, and without SPANDROP, respectively. This indicates that both Beta-SPANDROP and SPANDROP improve the model generalization ability, and Beta-SPANDROP is better than SPANDROP, improving EM/F1 with 2.0/1.6 absolute over the baseline. Next, to better understand whether the properties of SPANDROP and Beta-SPANDROP we observe on the synthetic data generalize to real-world data, we further perform a set of analysis experiments on SQuAD. Specifically, we are interested in studying the effect of the amount of training data, the span drop ratio p, and the choice of span size on performance. Effect of low data. To understand SPANDROP’s regularizing effect when training data is scarce, we study the model’s generalization performance when training on only 0.1% of the training data (around 100 examples) to using the entire training set (around 88k examples). As can be seen in Figure 3 (left), both SPANDROP and Beta-SPANDROP significantly improve model performance when the amount of training data is extremely low. As the amount of training data increases, this gap slowly closes but remains consistently positive. The final gap when 100% of the training data is used is still sufficient to separate top-2 performing systems on this dataset. Impact of drop ratio. We compare SPANDROP and Beta-SPANDROP by controlling how likely each span is dropped on average (drop ratio p). Recall from our experiments on FINDCATS that larger p will result in distribution drift from the original training set for SPANDROP but not BetaSPANDROP, thus the performance of the former deteriorates as p increases while the latter is virtually not affected. As can be seen in Figure 3 (middle), our observation on real-world data is consistent with this theoretical prediction, and indicate that Beta-SPANDROP is a better technique for data augmentation should one want to increase sequence diversity by setting p to a larger value. Impact of span size. We train the model with SPANDROP on SQuAD with varying span sizes of {1, 2, 4, 8, 16, 32, 64} tokens per span to understand the effect of this hyperparameter. 
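Concretely, a span size of k here means the tokenized context is chunked into groups of k consecutive tokens, each of which is then kept or dropped as a unit by SPANDROP or Beta-SPANDROP. The snippet below is our own illustrative sketch of such a grouping; the paper's exact implementation may differ.

```python
def group_into_spans(tokens, span_size):
    """Group consecutive tokens into fixed-size spans (the last span may be shorter)."""
    return [tokens[i:i + span_size] for i in range(0, len(tokens), span_size)]

tokens = "the quick brown fox jumps over the lazy dog".split()
spans = group_into_spans(tokens, span_size=4)
# spans == [['the', 'quick', 'brown', 'fox'], ['jumps', 'over', 'the', 'lazy'], ['dog']]
# Each span is then treated as a single unit s_i when sampling the drop mask.
```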
We observe in Figure 3 (right) that as span size grows, the generalization performance of the model first holds roughly constant, then slowly deteriorates as span size grows too large. This suggests that the main contributors to generalization performance might have been the total number of spans in the entire sequence, which reduces with larger spans. This results in fewer potential augmented sequences for counterfactual learning, therefore lowering regularization strength. This observation is consistent with that on our synthetic data in our preliminary experiments, where we see that controlling for other factors, larger span sizes yield deteriorated generalization performance (data not shown due to space limit). This also suggests that while SPANDROP works with arbitrary span sizes, the optimal choice of spans for different tasks warrants further investigation, which we leave to future work. 5 RELATED WORK Long Sequence Inference. Many applications require the prediction/inference over long sequences, such as multi-hop reading comprehension (Yang et al., 2018; Welbl et al., 2018), long document summarization (Huang et al., 2021), document-level information extraction (Yao et al., 2019) in natural language processing, long sequence time-series prediction (Zhou et al., 2021a), promoter region and chromatin-profile prediction in DNA sequence (Oubounyt et al., 2019; Zhou & Troyanskaya, 2015) in Genomics etc, where not all elements in the long sequence contribute equally to the desired output. Aside from approaches we have discussed that attempt to approximate all pair-wise interactions between elements in a sequence, more recent work has also investigated compressing long sequences into shorter ones to distill the information therein for prediction or representation learning (Rae et al., 2020; Goyal et al., 2020; Kim & Cho, 2021). Sequence Data Augmentation. Data augmentation is an effective common technique for underspecified tasks like long sequence inference. Feng et al. (2021) propose to group common data augmentation techniques in natural language processing into three categories: 1) rule-based methods (Zhang et al., 2015; Wei & Zou, 2019; Şahin & Steedman, 2018), which apply a set of predefined operations over the raw input, such as removing, adding, shuffling and replacement; 2) example mixup-based methods (Guo et al., 2019; Guo, 2020; Chen et al., 2020; Jindal et al., 2020), which, inspired from Mixup in computer vision (Zhang et al., 2018), perform interpolation between continuous features like word embeddings and sentence embeddings; 3) model-based methods (Xie et al., 2020; Sennrich et al., 2016), which use trained models to generate new examples (e.g., back translation Xie et al., 2020). Most of existing rule-based data augmentation methods operate at the token/word level (Feng et al., 2021), such as word shuffle/replacement/addition (Wei & Zou, 2019). Shuffle-based techniques are less applicable when order information is crucial in the raw data (Lan et al., 2019, e.g., in natural language). Moreover, these operations might not be trivial in implementation over larger spans (e.g., at the phrase or sentence level). For example, while replacing tokens require selecting candidates from a fixed vocabulary which can be provided by well estimated language models (Clark et al., 2019), replacing phrases or sentences is significantly more challenging since the “vocabulary” is unbounded and marginal probability difficult to estimate. 
In contrast, our proposed SPANDROP supports data augmentation at multiple granularities, since the spans in SPANDROP can be of any length, and it preserves sequence order because the drop operation does not change the relative order of the original input. 6 CONCLUSION In this paper, we presented SPANDROP, a simple and effective method for learning from long sequences, which ablates parts of the sequence at random to generate counterfactual data that distills the sparse supervision signal predictive of the desired output. We show via theoretical analysis and carefully designed synthetic datasets that SPANDROP and its variant based on the beta-Bernoulli distribution help models achieve competitive performance with a fraction of the data by introducing diverse augmented training examples, and generalize better to previously unseen data. Our experiments on four real-world NLP datasets demonstrate that, besides these benefits, SPANDROP can further improve upon powerful pretrained Transformer models even when data is abundant.
1. What is the main contribution of the paper? 2. What are the strengths of the proposed approach, particularly its mathematical foundation? 3. Do you have any concerns or comparisons with other methods, such as word dropout? 4. How do you assess the effectiveness of the method on real datasets?
Summary Of The Paper Review
Summary Of The Paper The paper focuses on distilling supervision signal from long sequences. They focus on cases where the input is a long sequence of length n, but the target prediction is determined by a small subset of size m of sequence fragments, where m << n. The authors propose augmenting data by randomly dropping spans from the input sequence. They first propose SpanDrop which removes each span with a probability p. This process might result in shorter sequences, resulting in shift of training data distributions. As a fix, they also propose Beta-SpanDrop where the spans are dropped using probabilities sampled from beta bernoulli distribution. Beta SpanDrop preserves the length of the original utterance with higher probability, while generating similar variety of augmented utterances. Review Strengths: 1. Simple procedure to improve accuracy for long sequence input problems. 2. Mathematically sound. The authors prove the claims regarding augmented sequence lengths. Weaknesses: The procedure is fairly simple, and similar ideas have been used for regularizing models. I find the solution similar to word dropout, and would be interesting to see comparison to that. The gains on real datasets are not high.
ICLR
Title SpanDrop: Simple and Effective Counterfactual Learning for Long Sequences Abstract Distilling supervision signal from a long sequence to make predictions is a challenging task in machine learning, especially when not all elements in the input sequence contribute equally to the desired output. In this paper, we propose SPANDROP, a simple and effective data augmentation technique that helps models identify the true supervision signal in a long sequence with very few examples. By directly manipulating the input sequence, SPANDROP randomly ablates parts of the sequence at a time and ask the model to perform the same task to emulate counterfactual learning and achieve input attribution. Based on theoretical analysis of its properties, we also propose a variant of SPANDROP based on the beta-Bernoulli distribution, which yields diverse augmented sequences while providing a learning objective that is more consistent with the original dataset. We demonstrate the effectiveness of SPANDROP on a set of carefully designed toy tasks, as well as various natural language processing tasks that require reasoning over long sequences to arrive at the correct answer, and show that it helps models improve performance both when data is scarce and abundant. 1 INTRODUCTION Building effective machine learning systems for long sequences is a challenging and important task, which helps us better understand underlying patterns in naturally occurring sequential data like long texts (Radford et al., 2019), protein sequences (Jumper et al., 2021), financial time series (Bao et al., 2017), etc. Recently, there is growing interest in studying neural network models that can capture long-range correlations in sequential data with high computational, memory, and statistical efficiency, especially widely adopted Transformer models (Vaswani et al., 2017). Previous work approach long-sequence learning in Transformers largely by introducing computational approaches to replace the attention mechanism with more efficient counterparts. These approaches include limiting the input range over which the attention mechanism is applied (Kitaev et al., 2019) to limiting sequence-level attention to only a handful of positions (Beltagy et al., 2020; Zaheer et al., 2020). Other researchers make use of techniques akin to the kernel trick to eliminate the need to compute or instantiate the costly attention matrix (Peng et al., 2020; Katharopoulos et al., 2020; Choromanski et al., 2020). Essentially, these approaches aim to approximate the original pairwise interaction with lower cost, and are often interested in still capturing the interactions between every pair of input elements (e.g., the long sequence benchmark proposed by Tay et al., 2020). In this paper, we instead investigate learning problems for long sequences where not all input elements contribute equally to the desired output. Natural examples that take this form include sentiment classification for long customer review documents (where a few salient sentiment words contribute the most), question answering from a large document (where each question typically requires a small number of supporting sentences to answer), key phrase detection in audio processing (where a small number of recorded frames actually determine the prediction), as well as detecting a specific object from a complex scene (where, similarly, a small amount of pixels determine the outcome), to name a few. 
In these problems, it is usually counterproductive to try and make direct use of the entire input if the contributing portion is small or sparse, which results in a problem of underspecification (i.e., the data does not sufficiently define the goal for statistical models). One approach to address this problem is annotating the segments or neighborhoods that directly contribute to the outcome in the entire input. This could take the form of a subset of sentences that answer a question or describe the relation between entities in a paragraph (Yang et al., 2018; Yao et al., 2019), which function as explainable evidence that supplements the answer. When such annotation is not feasible, researchers and practitioners often need to resort to either collecting more input-output pairs or designing problem-specific data augmentation techniques to make up for the data gap. For real-valued data, this often translates to random transformations (e.g., shifting or flipping an image); for symbolic data like natural language, techniques like masking or substitution are more commonly used (e.g., randomly swapping words with a special mask token or other words). While these approaches have proven effective in some tasks, each has limitations that prevents it from being well-suited for the underspecification scenario. For instance, while global feature transformations enhance groupinvariance in learned representations, they do not directly help with better locating the underlying true stimulus. On the other hand, while replacement techniques like masking and substitution help ablate parts of the input, they are susceptible to the position bias of where the true stimulus might occur in the input. Furthermore, while substitution techniques can help create challenging contrastive examples, it is significantly more difficult to design them for complex symbolic sequences (e.g., replacing a phrase naturally in a sentence). To address these challenges, we propose SPANDROP, a simple and effective technique that helps models distill sparse supervision signal from long sequences when the problem is underspecified. Similar to replacement-based techniques such as masking and substitution, SPANDROP directly ablates parts of the input at random to construct counterfactual examples that preserve the original supervision signal with high probability. Instead of preserving the original sequence positions, however, SPANDROP directly removes ablated elements from the input to mitigate any bias that is related to the absolute positions of elements (rather than the relative positions between them) in the input. Upon closer examination of its theoretical and empirical properties, we further propose a more effective variant of SPANDROP based on the Beta-Bernoulli distribution that enhances the consistency of the augmented objective function with the original one. We demonstrate via carefully designed toy experiments that SPANDROP not only helps models achieve up to 20⇥ sample-efficiency in low-data settings, but also further reduces overfitting even when training data is abundant. We find that it is very effective at mitigating position bias compared to replacement-based counterfactual approaches, and enhances out-of-distribution generalization effectively. 
We further experiment on four natural language processing tasks that require models to answer questions or extract entity relations from long texts, and demonstrate that SPANDROP can improve the performance of already competitive neural models without any change in model architecture. 2 METHOD In this section, we first formulate the problem of sequence inference, where the model takes sequential data as input to make predictions. Then, we introduce SPANDROP, a simple and effective data augmentation technique for long sequence inference, and analyze its theoretical properties. 2.1 PROBLEM DEFINITION Sequence Inference. We consider a task where a model takes a sequence S as input and predicts the output y. We assume that S consists of n disjoint but contiguous spans, each representing a part of the sequence in order: S = (s1, . . . , sn). One example of sequence inference is sentiment classification from a paragraph of text, where S is the paragraph and y the desired sentiment label. Spans could be words, phrases, sentences, or a mixture of these in the paragraph. Another example is time series prediction, where S is the historical data and y is the value at the next time step. Supporting facts. Given an input-output pair (S, y) for sequence prediction, we assume that y is truly determined by only a subset of spans in S. More formally, we assume that there is a subset of spans Ssup ⊂ {s1, s2, . . . , sn} such that y is independent of si if si ∉ Ssup. In sentiment classification, Ssup could consist of important sentiment words or conjunctions (like “good”, “bad”, “but”); in time series prediction, it could reflect the most recent time steps as well as those a few cycles away if the series is periodic. For simplicity, we will denote the size of this set m = |Ssup|, and restrict our attention to tasks where m ≪ n, such as those described in the previous section. 2.2 SPANDROP In a long sequence inference task with sparse supporting facts (m ≪ n), most of the spans in the input sequence will not contribute to the prediction of y, but they will introduce spurious correlations in a low-data scenario. SPANDROP generates new data instances (S̃, y) by ablating these spans at random, while preserving the supporting facts with high probability so that the model is still trained to make the correct prediction y. This is akin to counterfactually determining whether each span truly determines the outcome y by asking what the prediction would have been without it. Definition 1 (SPANDROP). Formally, given a sequence S that consists of spans (s1, s2, . . . , sn), SPANDROP generates a new sequence S̃ as follows: $\delta_i \overset{\text{i.i.d.}}{\sim} \mathrm{Bernoulli}(1-p), \quad \tilde{S} = (s_i)_{i:\,\delta_i = 1},$ (1) where p is the hyperparameter that determines the probability of dropping a span (δi = 1 indicates that span si is kept). Note that SPANDROP does not require introducing substitute spans or artificial symbols when ablating spans from the input sequence. It makes the most of the natural sequence as it occurs in the original training data, and preserves the relative order between spans that are not dropped, which is often helpful in understanding sequential data (e.g., time series or text). It is also not difficult to establish that the resulting sequence S̃ can preserve all of the m supporting facts with high probability regardless of how large n is. Remark 1.
The new sequence length n′ = |S̃| and the number of preserved supporting facts m′ = |S̃ ∩ Ssup| follow binomial distributions with parameters (n, 1 − p) and (m, 1 − p), respectively: $P(n' \mid n, p) = \binom{n}{n'}(1-p)^{n'}p^{n-n'}, \quad P(m' \mid m, p) = \binom{m}{m'}(1-p)^{m'}p^{m-m'}.$ (2) Therefore, the proportion of sequences where all supporting facts are retained (i.e., m′ = m) is (1 − p)^m, which is independent of n. This means that as long as the total number of supporting facts in the sequence is bounded, then regardless of the sequence length, we can always choose p carefully such that we end up with many valid new examples with bounded noise introduced to supporting facts. Note that our analysis so far relies only on the assumption that m is known or can be estimated, and thus it can be applied to tasks where the precise set of supporting facts Ssup is unknown. More formally, the amount of new examples can be characterized by the size of the typical set of S̃, i.e., the set of sequences that the randomly ablated sequence will fall into with high probability. The size of the typical set for SPANDROP is approximately 2^{nH(p)}, where H(p) is the binary entropy of a Bernoulli random variable with probability p. Intuitively, these results indicate that the total number of counterfactual examples generated by SPANDROP scales exponentially in n, but the level of supporting fact noise can be bounded as long as m is small. However, this formulation of SPANDROP does have a notable drawback that could potentially hinder its efficacy. Because the new sequence length n′ follows a binomial distribution, its mean is n(1 − p) and its variance is np(1 − p). For sufficiently large n, most of the resulting S̃ will have lengths that concentrate around the mean with a width of O(√n), which creates an artificial and permanent distribution drift from the original length (see Figure 1(a)). Furthermore, if we know the identity of Ssup and keep these spans during training, this length reduction will bias the training set towards examples in which spans in Ssup are easier to locate. In the next subsection, we will introduce a variant of SPANDROP based on the beta-Bernoulli distribution that alleviates this issue. 2.3 BETA-SPANDROP To address the problem of distribution drift with SPANDROP, we introduce a variant that is based on the beta-Bernoulli distribution. The main idea is that instead of dropping each span in a sequence independently with a fixed probability p, we first sample a sequence-level probability π at which spans are dropped from a beta distribution, then use this probability to perform SPANDROP. Definition 2 (Beta-SPANDROP). Let α = γ and β = γ · (1 − p)/p, where γ > 0 is a scaling hyperparameter. Beta-SPANDROP generates S̃ over S as: $\pi \sim \mathrm{B}(\alpha, \beta), \quad \delta_i \overset{\text{i.i.d.}}{\sim} \mathrm{Bernoulli}(1-\pi), \quad \tilde{S} = (s_i)_{i:\,\delta_i = 1},$ (3) where B(α, β) is the beta distribution with parameters α and β. It can be easily demonstrated that in Beta-SPANDROP, the probability that each span is dropped is still controlled by p, the same as in SPANDROP, since the probability of keeping span si is E[δi | p] = E[E[δi | π] | p] = E[1 − π | p] = 1 − α/(α + β) = 1 − p. In fact, we can show that as γ → ∞, Beta-SPANDROP degenerates into SPANDROP, since the beta distribution would assign all probability mass to π = p. Despite the simplicity of its implementation, Beta-SPANDROP is significantly less likely to introduce unwanted data distribution drift, while still being capable of generating diverse counterfactual examples to regularize the training of sequence inference models. This is due to the following properties: Remark 2.
The new sequence length n′ = |S̃| and the number of preserved supporting facts m′ = |S̃ ∩ Ssup| follow beta-binomial distributions with parameters (n, β, α) and (m, β, α), respectively: $P(n' \mid n, \alpha, \beta) = \frac{\Gamma(n+1)}{\Gamma(n'+1)\,\Gamma(n-n'+1)} \cdot \frac{\Gamma(n'+\beta)\,\Gamma(n-n'+\alpha)}{\Gamma(n+\alpha+\beta)} \cdot \frac{\Gamma(\alpha+\beta)}{\Gamma(\alpha)\,\Gamma(\beta)},$ (4) $P(m' \mid m, \alpha, \beta) = \frac{\Gamma(m+1)}{\Gamma(m'+1)\,\Gamma(m-m'+1)} \cdot \frac{\Gamma(m'+\beta)\,\Gamma(m-m'+\alpha)}{\Gamma(m+\alpha+\beta)} \cdot \frac{\Gamma(\alpha+\beta)}{\Gamma(\alpha)\,\Gamma(\beta)},$ (5) where $\Gamma(z) = \int_0^\infty x^{z-1}e^{-x}\,dx$ is the gamma function. As a result, we can show that Beta-SPANDROP preserves the entire original sequence with the following probability: $P(n' = n \mid n, \alpha, \beta) = \frac{\Gamma(n+\beta)\,\Gamma(\alpha+\beta)}{\Gamma(n+\alpha+\beta)\,\Gamma(\beta)}.$ (6) When γ = 1, this expression simply reduces to β/(n + β); when γ ≠ 1, this quantity tends to O(n^{−γ}) as n grows sufficiently large. Comparing this to the O((1 − p)^n) rate from SPANDROP, we can see that when n is large, Beta-SPANDROP recovers more of the original distribution represented by (S̃, y) compared to SPANDROP. In fact, as evidenced by Figure 1(a), the counterfactual sequences generated by Beta-SPANDROP are also more spread out in their length distribution besides covering the original length n with significantly higher probability. A similar analysis can be performed by substituting n and n′ with m and m′, where we can conclude that as m grows, Beta-SPANDROP is much better at generating counterfactual sequences that preserve the entire supporting fact set Ssup. This is shown in Figure 1(b), where the proportion of “noise-free” examples (i.e., m′ = m) decays exponentially for SPANDROP (γ = ∞) while remaining much higher when γ is sufficiently small. For instance, when p = 0.1, γ = 1, and m = 10, the proportion of noise-free examples for SPANDROP is just 34.9%, while that for Beta-SPANDROP is 47.4%. As we have seen, Beta-SPANDROP is significantly better than its Bernoulli counterpart at assigning probability mass to the original data as well as to generated sequences that contain the entire set of supporting facts. A natural question is, does this come at the cost of diverse counterfactual examples? To answer this question we study the entropy of the distribution that S̃ follows by varying γ and n, and normalize it by n to study the size of the typical set of this distribution. As can be seen in Figure 1(c), as long as γ is large enough, the average entropy per span H̄ degrades very little from the theoretical maximum, which is H(p), attained when γ = ∞. Therefore, to balance between introducing noise in the supporting facts and generating diverse examples, we set γ = 1 in our experiments. Using the beta-Bernoulli distribution in dropout. The beta-Bernoulli distribution has been studied in prior work seeking replacements for the (Bernoulli) dropout mechanism (Srivastava et al., 2014). Liu et al. (2019a) set α = β for the beta distribution in their formulation, which limits the dropout rate to always be 0.5. Lee et al. (2018) fix β = 1 and vary α to control the sparsity of the result of dropout, which is similar to Beta-SPANDROP when β = 1. However, we note that these approaches (as with dropout) are focused more on adding noise to internal representations of neural networks to introduce regularization, while SPANDROP operates directly on the input to ablate different components therein, and is thus orthogonal (and potentially complementary) to these approaches. Further, SPANDROP has the benefit of not having to make any assumptions about the model or any changes to it during training, which makes it much more widely applicable.
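As a quick numerical sanity check on Remarks 1 and 2, the retention probabilities can be evaluated directly. The snippet below is our own illustration (not from the paper); it reproduces the 34.9% vs. 47.4% comparison quoted above for p = 0.1, γ = 1, and m = 10 using log-gamma functions from the standard library.

```python
from math import lgamma, exp

def p_keep_all_spandrop(m, p):
    """P(m' = m) under SPANDROP: every supporting fact survives with probability (1 - p)^m."""
    return (1.0 - p) ** m

def p_keep_all_beta_spandrop(m, p, gamma=1.0):
    """P(m' = m) under Beta-SPANDROP, from Eq. (6) with n replaced by m:
    Gamma(m + beta) * Gamma(alpha + beta) / (Gamma(m + alpha + beta) * Gamma(beta))."""
    alpha, beta = gamma, gamma * (1.0 - p) / p
    return exp(lgamma(m + beta) + lgamma(alpha + beta)
               - lgamma(m + alpha + beta) - lgamma(beta))

print(p_keep_all_spandrop(10, 0.1))        # ~0.349
print(p_keep_all_beta_spandrop(10, 0.1))   # ~0.474, i.e. beta / (m + beta) = 9 / 19 when gamma = 1
```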
3 FINDCATS: DISTILLING SUPERVISION FROM LONG-SEQUENCES In this section, we design a synthetic task of finding the animal name “cat” in a character sequence to a) demonstrate the effectiveness of SPANDROP and Beta-SPANDROP in promoting the performance over a series of problems with different settings, b) analyze the various factors that may affect the efficacy of these approaches, and c) compare it to other counterfactual augmentation techniques like masking on mitigating position bias. 3.1 EXPERIMENTAL SETUP FINDCATS. To understand the effectiveness of SPANDROP and Beta-SPANDROP in an experimental setting, we designed a synthetic task called FINDCATS where the model is trained to discern that given an animal name “cat”, whether a character string contains it as a subsequence (i.e., contains characters in “cat” in order, for instance, “abcdafgbijktma”) or not (e.g., “abcdefhtijklmn”). This allows us to easily control the total sequence length n, the supporting facts size m, as well as easily estimate the supporting fact noise that each SPANDROP variant might introduce. To generate the synthetic training data of FINDCATS, we first generate a sequence consisting of lowercase letters (a to z) that does not contain “cat” as a subsequence. For half of these sequences, we label the tuple (cat,S) with a negative class to indicate that S does not contain “cat” as a subsequence; for the other half, we choose arbitrary (but not necessarily contiguous) positions in S to replace the letters with letters in “cat” from left to right to generate positive examples. In all of our experiments, we evaluate model performance on a held-out set of 10,000 examples to observe classification error. We set sequence length to n = 300 where each letter is a separate span, and chose positions for the letters in the animal name “cat” uniformly at random in the sequence unless otherwise mentioned. Model. We employ three-layer Transformer model (Vaswani et al., 2017) with position embeddings (Devlin et al., 2019) as the sequence encoder, which is implemented with HuggingFace Transformers (Wolf et al., 2019). For each example (“cat”,S, y), we feed “[CLS] cat [SEP] S [SEP]” to the sequence encoder and then construct binary classifier over the output representation of “[CLS]” to predict y. To investigate the effectiveness of SPANDROP, we simply apply SPANDROP to S first before feeding the resulting sequence into the Transformer classifier. 3.2 RESULTS AND ANALYSIS In each experiment, we compare SPANDROP and Beta-SPANDROP at the same drop ratio p. And we further use rejection sampling to remove examples that do not preserve the desired supporting facts to understand the effect of supporting fact noise. Data efficiency. We begin by analyzing the contribution of SPANDROP and Beta-SPANDROP to improving the sample efficiency of the baseline model. To achieve this goal, we vary the size of the training set from 10 to 50,000 and observe the prediction error on the held-out set. We observe from the results in Figure 2(a) that: 1) Both SPANDROP and Beta-SPANDROP significantly improve data efficiency in low-data settings. For instance, when trained on only 200 training examples, SPANDROP variants can achieve the generalization performance of the baseline model trained on 5x to even 20x data. 2) Removing supporting fact noise typically improves data efficiency further by about 2x. 
This indicates it is helpful not to drop spans in Ssup during training when possible, so that the model is always trained with true counterfactual examples rather than occasionally noisy ones. 3) Beta-SPANDROP consistently improves upon the baseline model even when data is abundant. This is likely due to the difficulty of the task when n = 300 and m = 3. Similar to many real-world tasks, the task remains underspecified even when the generalization error is already very low thanks to the large amount of training data available. 4) SPANDROP introduces a training objective that is inconsistent with the original training set, which leads to performance deterioration when there is sufficient training data; this is consistent with our theoretical observation.

[Figure 2 (panels referenced here): (a) data efficiency, (b) noise in supporting facts, (c) varying sequence length; legend: Baseline, SPANDROP, SPANDROP (noise-free), Beta-SPANDROP, Beta-SPANDROP (noise-free).]

Effect of supporting fact noise and sequence length. Since SPANDROP introduces noise in the supporting facts (albeit with a low probability), it is natural to ask whether such noise is negatively correlated with model performance. We study this by varying the drop ratio p from {0.05, 0.1, 0.15, 0.2, 0.3, 0.4, 0.5} on fixed training sets of size 1,000, and observe the resulting model performance and supporting fact error. As can be seen in Figure 2(b), supporting fact noise increases rapidly as p grows (see Footnote 1). However, we note that although the performance of SPANDROP deteriorates as p increases, that of Beta-SPANDROP stays relatively stable. Inspecting these results more closely, we find that even the performance of the noise-free variants follows a similar trend, although it should not be affected by supporting fact noise. Recalling the observations from our data efficiency experiments, we next turn to the hypothesis that this discrepancy is mainly caused by the inconsistent length distribution SPANDROP introduces. To test this hypothesis, we conduct two separate sets of experiments: 1) training and testing the model on varying sequence lengths {10, 20, 30, 50, 100, 200, 300, 500}, where longer sequences suffer more from the discrepancy between SPANDROP-induced sequence lengths and the original sequence length; and 2) testing the model trained on n = 300 on test sets of different lengths. If our hypothesis about distribution drift were correct, we should see the SPANDROP models' performance peaking around n′ = n(1 − p), while the performance of Beta-SPANDROP should be less affected by sequence length. As can be seen from Figures 2(c) and 2(d), our experimental results support this hypothesis well. Specifically, in Figure 2(c), while the performance of both SPANDROP variants deteriorates as n grows and the task becomes more challenging and underspecified, SPANDROP deteriorates at a faster rate even when we remove the effect of supporting fact noise. On the other hand, we can clearly see in Figure 2(d) that SPANDROP's performance peaks around sequences of length 270 (= n(1 − p) = 300 × (1 − 0.1)) before rapidly deteriorating, while Beta-SPANDROP is unaffected until the test sequence length exceeds that of all examples seen during training.

Footnote 1: Note that the noise in our experiments is lower than what theory would predict, because in practice the initial sequence S might already contain parts of “cat” before it is inserted. This creates redundant sets of supporting facts for this task and reduces supporting fact noise, especially when n is large.
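To make the FINDCATS construction from Section 3.1 (and the caveat in Footnote 1) concrete, here is a minimal sketch of one way such examples could be generated. It is not the authors' data-generation code; in particular, the procedure for producing negatives (blocking any letter that would complete the subsequence) is our assumption, since the paper does not specify how negatives are sampled.

```python
import random
import string

ANIMAL = "cat"

def contains_subsequence(s, target):
    # True if all characters of `target` appear in `s` in order (not necessarily contiguously).
    it = iter(s)
    return all(ch in it for ch in target)

def make_negative(n, rng):
    # Generate a random lowercase string of length n that never completes "cat" as a
    # subsequence: track how much of the pattern has been matched so far and redraw any
    # letter that would complete it. Partial patterns like "ca" may still appear, which is
    # why inserted positives can end up with redundant supporting facts (Footnote 1).
    s, matched = [], 0
    for _ in range(n):
        ch = rng.choice(string.ascii_lowercase)
        while matched == len(ANIMAL) - 1 and ch == ANIMAL[-1]:
            ch = rng.choice(string.ascii_lowercase)
        if matched < len(ANIMAL) - 1 and ch == ANIMAL[matched]:
            matched += 1
        s.append(ch)
    return "".join(s)

def make_example(n, positive, rng):
    s = list(make_negative(n, rng))
    if positive:
        # Overwrite arbitrary (not necessarily contiguous) positions with "c", "a", "t" in order.
        pos = sorted(rng.sample(range(n), len(ANIMAL)))
        for p, ch in zip(pos, ANIMAL):
            s[p] = ch
    return "".join(s), int(positive)

rng = random.Random(0)
seq, label = make_example(n=300, positive=True, rng=rng)
assert contains_subsequence(seq, ANIMAL) == bool(label)
```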
Mitigating position bias. Besides SPANDROP, replacement-based techniques like masking can also be applied to introduce counterfactual examples into sequence model training, where elements in the sequence are replaced by a special symbol that is not used at test time. We implement SPANMASK in the same way as SPANDROP, except that spans are replaced rather than removed when the sampled “drop mask” δi is 0. We first inspect whether SPANMASK benefits from the same beta-Bernoulli distribution we use in SPANDROP. As can be seen in Figure 2(e), switching to a beta-Bernoulli distribution provides negligible benefit to SPANMASK, which does not alter the sequence length of the input to begin with. We also see that SPANMASK results in significantly higher error than both SPANDROP and Beta-SPANDROP in this setting. We further experiment with introducing position bias into the training data (but not the test data) to test whether these methods help the model generalize to an unseen setting. Specifically, instead of selecting the positions for the characters “cat” uniformly at random, we train the model with a “fixed position” dataset where they always occur at indices (10, 110, 210), and a “first 100” dataset where they are uniformly distributed among the first 100 letters. As can be seen in Figure 2(f), both the baseline and SPANMASK models overfit to the position bias in the “fixed” setting, while SPANDROP techniques significantly reduce zero-shot generalization error. In the “first 100” setting, Beta-SPANDROP consistently outperforms its Bernoulli counterpart and SPANMASK at improving the performance of the baseline model as well, indicating that SPANDROP variants are effective at reducing the position bias of the model.

4 EXPERIMENTS ON NATURAL LANGUAGE DATA

To examine the efficacy of the proposed SPANDROP techniques on realistic data, we conduct experiments on four natural language processing datasets that represent the tasks of single- and multi-hop extractive question answering, multiple-choice question answering, and relation extraction. We focus on showing the effect of SPANDROP instead of pursuing the state of the art in these experiments. Datasets. We use four natural language processing datasets: SQuAD 1.1 (Rajpurkar et al., 2016), where models answer questions on a paragraph of text from Wikipedia; MultiRC (Khashabi et al., 2018), which is a multiple-choice reading comprehension task in which questions can only be answered by taking into account information from multiple sentences; HotpotQA (Yang et al., 2018), which requires models to perform multi-hop reasoning over multiple Wikipedia pages to answer questions; and DocRED (Yao et al., 2019), which is a document-level dataset for relation extraction. For the SQuAD dataset, we define spans as collections of one or more consecutive tokens to show that SPANDROP can be applied at different granularities. For the other three datasets, we define spans to be sentences, since supporting facts are provided at the sentence level. For all of these tasks, we report standard exact match (EM) and F1 metrics where applicable, for which higher scores are better. We refer the reader to the appendix for details about the statistics and metrics of these datasets. Model. We build our models for these tasks using ELECTRA (Clark et al., 2019), since it has recently been shown to perform well across a range of NLP tasks. We introduce randomly initialized task-specific parameters designed for each task following prior work on each dataset, and finetune these models on each dataset to report results.
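As an illustration of how span-level dropping could be wrapped around a training example at the sentence granularity used for MultiRC, HotpotQA, and DocRED, here is a hedged sketch. The function name, the rejection-sampling loop for the noise-free variant, the parameterization of the beta distribution, and the way question and context are joined are our assumptions rather than the authors' pipeline; in practice the task model's tokenizer would encode the (question, context) pair itself.

```python
import numpy as np

def augment_example(question, sentences, support_idx, p=0.1, alpha=1.0,
                    noise_free=False, rng=None, max_tries=20):
    """Apply sentence-level Beta-SpanDrop to one (question, context) training example."""
    rng = rng or np.random.default_rng()
    beta = alpha * (1.0 - p) / p                 # assumed parameterization: E[drop prob] = p
    kept_idx = list(range(len(sentences)))
    for _ in range(max_tries):
        pi = rng.beta(alpha, beta)               # shared drop probability for this example
        keep = rng.random(len(sentences)) >= pi  # per-sentence keep decisions
        kept_idx = [i for i, k in enumerate(keep) if k]
        # "Noise-free" variant: rejection-sample until all supporting sentences survive.
        if not noise_free or set(support_idx) <= set(kept_idx):
            break
    context = " ".join(sentences[i] for i in kept_idx)
    # The explicit separator below is only for illustration.
    return question + " [SEP] " + context

print(augment_example(
    "Who wrote the book?",
    ["Sentence one.", "The author is Jane Doe.", "Irrelevant filler."],
    support_idx=[1], p=0.2, noise_free=True, rng=np.random.default_rng(0)))
```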
We refer the reader to the appendix for training details and hyperparameter settings. Main results. We first present the performance of our implemented models and their combination with SPANDROP variants on the four natural language processing tasks. We also include results from representative prior work on each dataset for reference (detailed in the appendix), and summarize the results in Table 1. We observe that: 1) our implemented models achieve competitive and sometimes significantly better performance (in the cases of HotpotQA, SQuAD, and DocRED) compared to published results, especially considering that we do not tailor our models to each task too much; 2) SPANDROP improves the performance over these models even when the training set is large and that the model is already performing well; 3) Models trained with Beta-SPANDROP consistently perform better or equally well with their SPANDROP counterparts across all datasets, demonstrating that our observations on the synthetic datasets generalize well to real-world ones. We note that the performance gains on real-world data is less significant, which likely results from the fact spans in the synthetic task are independent from each other, which is not the case in natural language data. We further evaluate the performance of our trained models on the MultiRC testing data, and obtain results of EM/F1: 41.1/79.8, 39.9/78.5 and 39.1/78.2 for models with Beta-SPANDROP, SPANDROP, and without SPANDROP, respectively. This indicates that both Beta-SPANDROP and SPANDROP improve the model generalization ability, and Beta-SPANDROP is better than SPANDROP, improving EM/F1 with 2.0/1.6 absolute over the baseline. Next, to better understand whether the properties of SPANDROP and Beta-SPANDROP we observe on the synthetic data generalize to real-world data, we further perform a set of analysis experiments on SQuAD. Specifically, we are interested in studying the effect of the amount of training data, the span drop ratio p, and the choice of span size on performance. Effect of low data. To understand SPANDROP’s regularizing effect when training data is scarce, we study the model’s generalization performance when training on only 0.1% of the training data (around 100 examples) to using the entire training set (around 88k examples). As can be seen in Figure 3 (left), both SPANDROP and Beta-SPANDROP significantly improve model performance when the amount of training data is extremely low. As the amount of training data increases, this gap slowly closes but remains consistently positive. The final gap when 100% of the training data is used is still sufficient to separate top-2 performing systems on this dataset. Impact of drop ratio. We compare SPANDROP and Beta-SPANDROP by controlling how likely each span is dropped on average (drop ratio p). Recall from our experiments on FINDCATS that larger p will result in distribution drift from the original training set for SPANDROP but not BetaSPANDROP, thus the performance of the former deteriorates as p increases while the latter is virtually not affected. As can be seen in Figure 3 (middle), our observation on real-world data is consistent with this theoretical prediction, and indicate that Beta-SPANDROP is a better technique for data augmentation should one want to increase sequence diversity by setting p to a larger value. Impact of span size. We train the model with SPANDROP on SQuAD with varying span sizes of {1, 2, 4, 8, 16, 32, 64} tokens per span to understand the effect of this hyperparameter. 
We observe in Figure 3 (right) that as span size grows, the generalization performance of the model first holds roughly constant, then slowly deteriorates as span size grows too large. This suggests that the main contributors to generalization performance might have been the total number of spans in the entire sequence, which reduces with larger spans. This results in fewer potential augmented sequences for counterfactual learning, therefore lowering regularization strength. This observation is consistent with that on our synthetic data in our preliminary experiments, where we see that controlling for other factors, larger span sizes yield deteriorated generalization performance (data not shown due to space limit). This also suggests that while SPANDROP works with arbitrary span sizes, the optimal choice of spans for different tasks warrants further investigation, which we leave to future work. 5 RELATED WORK Long Sequence Inference. Many applications require the prediction/inference over long sequences, such as multi-hop reading comprehension (Yang et al., 2018; Welbl et al., 2018), long document summarization (Huang et al., 2021), document-level information extraction (Yao et al., 2019) in natural language processing, long sequence time-series prediction (Zhou et al., 2021a), promoter region and chromatin-profile prediction in DNA sequence (Oubounyt et al., 2019; Zhou & Troyanskaya, 2015) in Genomics etc, where not all elements in the long sequence contribute equally to the desired output. Aside from approaches we have discussed that attempt to approximate all pair-wise interactions between elements in a sequence, more recent work has also investigated compressing long sequences into shorter ones to distill the information therein for prediction or representation learning (Rae et al., 2020; Goyal et al., 2020; Kim & Cho, 2021). Sequence Data Augmentation. Data augmentation is an effective common technique for underspecified tasks like long sequence inference. Feng et al. (2021) propose to group common data augmentation techniques in natural language processing into three categories: 1) rule-based methods (Zhang et al., 2015; Wei & Zou, 2019; Şahin & Steedman, 2018), which apply a set of predefined operations over the raw input, such as removing, adding, shuffling and replacement; 2) example mixup-based methods (Guo et al., 2019; Guo, 2020; Chen et al., 2020; Jindal et al., 2020), which, inspired from Mixup in computer vision (Zhang et al., 2018), perform interpolation between continuous features like word embeddings and sentence embeddings; 3) model-based methods (Xie et al., 2020; Sennrich et al., 2016), which use trained models to generate new examples (e.g., back translation Xie et al., 2020). Most of existing rule-based data augmentation methods operate at the token/word level (Feng et al., 2021), such as word shuffle/replacement/addition (Wei & Zou, 2019). Shuffle-based techniques are less applicable when order information is crucial in the raw data (Lan et al., 2019, e.g., in natural language). Moreover, these operations might not be trivial in implementation over larger spans (e.g., at the phrase or sentence level). For example, while replacing tokens require selecting candidates from a fixed vocabulary which can be provided by well estimated language models (Clark et al., 2019), replacing phrases or sentences is significantly more challenging since the “vocabulary” is unbounded and marginal probability difficult to estimate. 
In contrast, our proposed SPANDROP supports data augmentation at multiple granularities, as the spans in SPANDROP can be of any length, and it is able to preserve sequence order, since the drop operation does not change the relative order of the original input.

6 CONCLUSION

In this paper, we presented SPANDROP, a simple and effective method for learning from long sequences, which ablates parts of the sequence at random to generate counterfactual data to distill the sparse supervision signal that is predictive of the desired output. We show via theoretical analysis and carefully designed synthetic datasets that SPANDROP and its variant based on the beta-Bernoulli distribution help models achieve competitive performance with a fraction of the data by introducing diverse augmented training examples, and generalize better to previously unseen data. Our experiments on four real-world NLP datasets demonstrate that besides these benefits, SPANDROP can further improve upon powerful pretrained Transformer models even when data is abundant.
1. What is the focus and contribution of the paper regarding learning problems for long sequences? 2. What are the strengths of the proposed approach, particularly in terms of its simplicity and effectiveness? 3. What are the weaknesses of the paper, especially regarding its novelty and comparisons with other works? 4. Do you have any concerns about the methodology or assumptions made in the paper? 5. How does the reviewer assess the clarity and quality of the paper's content?
Summary Of The Paper Review
Summary Of The Paper In this paper, we instead investigate learning problems for long sequences where not all input elements contribute equally to the desired output. SCANDROP is a simple algorithm to randomly drop segments in a sequence. The authors first establish that when the number of contributing segments is sparse, the algorithm will preserve them with a relatively large probability. Then, Beta-SCANDROP is proposed to preserve the original sequence length with higher probability. In experiments, consistent improvement is shown. Review Strength: Effective data augmentation is a relevant topic. The paper is well written. Weakness: Limited novelty. My major concern is the lack of comparison of other baselines. In section5, the author discuss a bunch of related works on data augmentation, however, in natural language experiments these are not compared. I know that SCANDROP is easier to implement, but that does not mean you don't need to compare with other algorithms. It's not surprising that this data augmentation is effective, especially for the catfinding task. While the improvement over electra-base is consistent, it's not large. Below are detailed comments: Sec1. I don't quite understand why when in data the contributing portion is small or sparse, then the problem is "underspecified". It's still specified by the small contributing portion I think? Are you suggesting the contributing part is so small, that for example, a classifier can not reach a decision? What not span-replace? If you replace a span with a random token, it gives counterfactual sequence with the same length. Minor comments: Sec1. "the data does not sufficiently define the goal for statistical models", I don't understand what global is here.
ICLR
Title Global-Local Bayesian Transformer for Semantic Correspondence Abstract Cost aggregation is the key to finding semantic correspondence between a pair of similar images. Transformer-based cost aggregators have recently shown strong performance in obtaining high-quality correlation maps due to their capability of capturing long-range dependencies between matching points. However, such models are data-hungry and prone to over-fitting when training data is not sufficiently large. Besides, they easily incur incorrect matches when finding correspondences in the local semantic context. To address these issues, we propose a Global-Local Bayesian Transformer (GLBT) for cost aggregation. Specifically, GLBT introduces one global Bayesian self-attention module, whose weights are sampled from a learnable Bayesian posterior distribution, to mitigate over-fitting while modeling the long-range interaction from correlation maps. Furthermore, to model the short-range interaction between candidate matches, GLBT introduces another local Bayesian self-attention module, which factorizes both correlation maps and Bayesian attention weights into pairs of patches and conducts a matrix multiplication on individuals rather than a direct dot-product. Two self-attention modules are joined together to model the long-range and short-range interactions from correlation maps. Ultimately, GLBT is hierarchically aggregated for the refinement of correlation maps before feeding it to the flow estimator. We conduct extensive experiments to show the superiority of our proposed network to the state-of-the-art methods on datasets, including SPair-71k, PF-PASCAL, and PF-WILLOW. 1 INTRODUCTION Establishing dense semantic correspondences between images is a fundamental problem facilitating many vision tasks, including semantic segmentation (Min et al., 2021; Xie et al., 2021), 3D reconstruction (Kokkinos & Kokkinos, 2021a;b; Li et al., 2020b), and optical flow estimation(Yang & Ramanan, 2019). In contrast to the classical pixel-wise correspondence problems (Kim et al., 2003) that require images to be geometrically normalized and aligned, semantic correspondence considers unconstrained image pairs, posing additional challenges from large intra-class variations in appearance and geometry. Recent methods (Bristow et al., 2015; Cho et al., 2021; Zhao et al., 2021) for semantic correspondence generally follow the classical matching pipeline, including feature extraction, cost aggregation, and flow estimation. Some works (Rublee et al., 2011; Tola et al., 2010) attempted to find the semantic similarity between images by focusing on the feature extraction stage. These methods disregard the pixel-wise relationship between correlation features, resulting in sub-optimal performance. To overcome this issue, several methods (Jeon et al., 2020; Rocco et al., 2017; Truong et al., 2020b; Hong & Kim, 2021) introduced a regression network at the flow estimation stage to infer dense correspondences from correlation maps. However, such approaches rely on high-quality initial matching scores. Thereby, the latest methods (Min & Cho, 2021; Min et al., 2019a; Li et al., 2020a; Rocco et al., 2020; Min et al., 2020; Rocco et al., 2018b) have focused on designing an efficient cost aggregation module to improve the quality of correlation maps before feeding them into the flow estimation, proving the importance of cost aggregation networks. 
The core of the cost aggregation stage is to produce reliable correlation maps via the refinement of matching scores. Some models (Min & Cho, 2021; Rocco et al., 2018b) refined the local consistent matches from the initial correlation maps with high-dimensional 4D or 6D convolutions. However, such models lack the ability to achieve long-range context aggregation due to the inherently limited receptive fields. To tackle this problem, CATs (Cho et al., 2021) leveraged the vision transformer for cost aggregation to effectively refine the ambiguous matching scores in consideration of the global consensus. Nonetheless, it overlooks the spatial structure of the correlation map, leading to sub-optimal results. To further boost the performance, VAT (Hong et al., 2022) proposed a 4D Convolutional Swin Transformer as a cost aggregator to preserve the spatial structure of correlation maps, while providing an efficient self-attention to model long-range interaction between candidate matches. However, the existing Transformer-based cost aggregators (Hong et al., 2022; Casey et al., 2021; Cho et al., 2021) are infeasible to model the short-range pixel-to-pixel interaction, resulting in redundant noisy matches when dealing with the local semantic matches. In addition, since transformer architecture is prone to over-fitting, these transformer-based aggregators are data-hungry (Hassani et al., 2021), i.e., requiring enormous amounts of training data to obtain a good performance. To address these limitations, we propose a Global-Local Bayesian Transformer (GLBT) cost aggregator for semantic correspondence. Inspired by BayesNN (Blundell et al., 2015), which applied a variational inference on the weights of a neural network to prevent over-fitting, our proposed GLBT introduces the Global-Local Bayesian Self-Attention (GLB-SA) into the transformer aggregator for capturing the long-range and short-range match-to-match interaction from correlation maps simultaneously. Compared to the raw self-attention in the transformer (Cho et al., 2021; Vaswani et al., 2017), which suffers from a data-hungry issue due to the operation of dense matrix-vector multiplication, GLBT leverages the sparse matrix factorization (Dao et al., 2019) on the self-attention operation to avoid over-fitting via a reduction in its learnable parameters. The proposed GLBT module is then leveraged to hierarchically aggregate the multi-level matching correspondences on the different semantic contexts, achieving the refinement of correlation maps. Consequently, the refined correlation maps are applied in the decoder to infer the semantic correspondences from image pairs. We validate the effectiveness of our GLBT method on public benchmark datasets (Ham et al., 2016; 2017; Min et al., 2019b). Extensive experimental results demonstrate that our proposed method for semantic correspondence outperforms the previous state-of-the-art methods on several benchmarks. We also provide a detailed ablation analysis to verify the main components in GLBT. 2 RELATED WORK Semantic Correspondence. Finding semantic correspondences between image pairs poses additional challenges to intra-class appearance and shape variations among different instances from the same object or scene category. 
To address these challenges, approaches to semantic correspondence can be roughly categorized into hand-crafted feature-based methods (Bay et al., 2006; Dalal & Triggs, 2005; Ham et al., 2016; Liu et al., 2011; LoweDavid, 2004; Rublee et al., 2011; Tola et al., 2010) and learnable feature-based methods (Choy et al., 2016; Kim et al., 2018; 2017; Lee et al., 2019; Li et al., 2020a; Rocco et al., 2017; Seo et al., 2018). Hand-crafted techniques leverage the low-level feature descriptors, such as SIFT (LoweDavid, 2004), HOG (Taniai et al., 2016), and DAISY (Tola et al., 2010), to measure dense correspondences, lacking the capture of high-level semantics. To tackle this problem, most learnable techniques focus on building dense correspondences on highlevel semantic features of deep convolutional neural networks, such as NC-Net (Rocco et al., 2018b), ANC-Net (Li et al., 2020a), and GOCor (Truong et al., 2020a). However, solely relying on the deep learnable features limits the performance of semantic correspondences due to the direct output of the similarity scores from the correlation maps. To address this issue, (Rocco et al., 2017) proposed a regression network to estimate the parameters from the matching features, coping with incorrect matches from the initial learnable features at the flow estimation stage. Their success encourages many variant methods, e.g., GSF (Jeon et al., 2020) and GLU-Net (Truong et al., 2020b), to directly regress semantic correspondences from the feature matches. Cost Aggregation. To alleviate the requirement of high-quality initial matching scores, HPF (Min et al., 2020) introduced the RHM (Min et al., 2019a) cost aggregator into the learnable feature methods for geometric consistency enhancement. Later, numerous CNN-based feature-learnable variants (Min & Cho, 2021; Rocco et al., 2018b) utilized 4D or 6D convolution-based geometric matching algorithms to refine the local consistency of the initial correlation maps. Nonetheless, CNN-based aggregation networks fail to model global matches due to the limited receptive fields of convolutions. Transformer-based aggregators (Cho et al., 2021; Sun et al., 2021), which leveraged the self-attention mechanism (Vaswani et al., 2017) to capture the global match-to-match interaction from the initial correlation map, can solve this problem. However, such a self-attention is prone to introduce redundant noisy matches when modeling the short-range interaction in a small region, because it does not consider local contexts. Besides, Transformer-based deep networks starve huge amounts of training data to avoid over-confident decisions. Bayesian Neural Network. Applying Bayesian approaches (Shridhar et al., 2019; Fan et al., 2021; Zhang et al., 2021) to neural networks is an alternative to mitigating the over-fitting issue by offering uncertainty estimates so that Bayesian Neural Networks (BayesNNs) can easily learn from small datasets and are robust to over-fitting. In the past years, several methods, such as Variational Inference (Blundell et al., 2015; Graves, 2011), Laplace Approximation (MacKay, 1992), and MC Dropout (Gal & Ghahramani, 2015; 2016), have been widely applied to estimate the parameter uncertainty, which is propagated for predictions. Instead of selecting a single point estimate, BayesNNs use the Bayes rule to average results over parameter values and thus have a strong reasoning ability. 3 PRELIMINARY Let Is ∈ RHs×Ws×3 and It ∈ RHt×Wt×3 denote a pair of source and target images, respectively. 
The goal of dense semantic correspondence is to find the optimal f∗ that generates a correspondence flow containing the offsets between corresponding keypoints in the two images, i.e., Kpred = f∗(Is, It), where the correspondence flow Kpred = {(Δx_i^s, Δy_i^s)}_{i=1}^{Hs×Ws} contains the predicted offsets for all pixels in the source image. Following previous works, we consider learning f∗ in the supervised setting. More specifically, we are given a dataset D = {(I_j^s, I_j^t, K_j^gt)}_{j=1}^{M} containing M image pairs and the associated ground-truth correspondence flows. Due to sparse annotations, the ground-truth flow K_j^gt = {(Δx_i^s, Δy_i^s)}_{i=1}^{Hs×Ws} is only non-zero at a subset of locations. We aim to learn an approximation f_M by minimizing the distance between the predicted and the ground-truth correspondence flows:

f_M = argmin_f (1/M) Σ_{j=1}^{M} || Φ(f(I_j^s, I_j^t)) − K_j^gt ||,

where Φ is a masking function that sets the offsets at locations without ground-truth offsets to zero. The pipeline to design the function f involves several basic steps, including feature extraction, cost aggregation, and flow estimation. Specifically, dense feature maps D^s ∈ R^{Hs×Ws×C} and D^t ∈ R^{Ht×Wt×C} are extracted from the image pair Is and It, respectively. Directly matching D^s and D^t by similarity without introducing any prior often yields ambiguous matches due to locally repetitive patterns. To address this issue, cost aggregation techniques are employed to refine matches from initial correlation maps. The correspondence flow is, consequently, inferred from the refined matching scores. Our approach follows this common framework for semantic correspondence. As shown in Figure 1(a), we follow previous works (Min et al., 2021; 2020; Truong et al., 2020b) and construct the correlation maps using cosine similarity: C(D^s, D^t) ∈ R^{Hs×Ws×Ht×Wt}, with entries C(D^s_x, D^t_y) = (D^s_{x,:} · D^t_{y,:}) / (||D^s_{x,:}|| · ||D^t_{y,:}||). The result is a 4D tensor representing the initial matching scores between an image pair. To capture rich semantic information, 4D convolutions (Min & Cho, 2021; Rocco et al., 2018b) are employed to extract multi-scale features at different levels of a backbone network. However, such dense feature points are weak at identifying global semantic alignment due to the limited receptive fields of convolutions. To address this issue, a cost aggregator is introduced to refine the correlation maps before feeding them to flow estimation. Current cost aggregators (Cho et al., 2021; Hong et al., 2022) leverage the transformer to refine correlation maps due to its global receptive field. However, such methods discard the ability to capture the short-range interaction between candidate matches, leading to extra noisy matches when matching semantic correspondences in a small region. Besides, the transformer is data-hungry and requires large amounts of training data to avoid over-fitting.

4 GLOBAL-LOCAL BAYESIAN TRANSFORMER

Given the limitations of existing methods, we propose a Global-Local Bayesian Transformer (GLBT) to refine the correlation maps by considering the local and global interactions between candidate matches simultaneously. As visualized in Figure 1(b), GLBT stacks a group of Global-Local Bayesian Self-Attention (GLB-SA) modules, layer normalization, and multilayer perceptrons to refine the final correlation matches:

C′ = GLBT(X), (1)

where X ∈ R^{L×C} is the correlation map unfolded from the result of Conv4D(C(D^s, D^t)), L = Hs × Ws × Ht × Wt, and C denotes the number of channels.
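As a reference for the correlation-map construction described above, here is a minimal sketch (our own, not the authors' code) that computes the 4D cosine-similarity tensor from a pair of dense feature maps; the tensor shapes follow the notation in the text, and the feature dimensions in the usage line are arbitrary.

```python
import torch
import torch.nn.functional as F

def correlation_map(feat_s, feat_t):
    """feat_s: (C, Hs, Ws), feat_t: (C, Ht, Wt) -> (Hs, Ws, Ht, Wt) cosine similarities."""
    C, Hs, Ws = feat_s.shape
    _, Ht, Wt = feat_t.shape
    s = F.normalize(feat_s.reshape(C, Hs * Ws), dim=0)  # unit-norm descriptor per source position
    t = F.normalize(feat_t.reshape(C, Ht * Wt), dim=0)  # unit-norm descriptor per target position
    corr = s.t() @ t                                     # (Hs*Ws, Ht*Wt) pairwise dot products
    return corr.reshape(Hs, Ws, Ht, Wt)

corr = correlation_map(torch.randn(256, 16, 16), torch.randn(256, 16, 16))
print(corr.shape)  # torch.Size([16, 16, 16, 16])
```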
Self-attention, which obtains the key, value, and query from the initial correlation map X, is the core of GLBT. Instead of using the standard self-attention mechanism (Vaswani et al., 2017), we introduce one Global Bayesian Self-Attention (GB-SA) in Figure 2(a) to model the global match-to-match interaction under large semantic displacements, and another Local Bayesian Self-Attention (LB-SA) in Figure 2(b) to model the local match-to-match interaction under small semantic displacements. Both are then joined together to reason about the final correlation maps at the same time.

4.1 GLOBAL BAYESIAN SELF-ATTENTION

The classical self-attention (Dosovitskiy et al., 2021) performs a dot-product over all pixels, which makes it prone to being data-hungry (Liu et al., 2021; Yuan et al., 2021). BayesCNNs (Shridhar et al., 2019) average models sampled from the posterior distribution over convolution kernels, and have the potential to reduce the need for large training sets and to be robust to over-fitting. Inspired by this, we introduce a Global Bayesian Self-Attention (GB-SA), which directly performs matrix multiplication between the input and the Bayesian weight to learn the global interaction between candidate matches from the correlation maps. Let θ denote the network parameters used in the computation of the correlation map X = h_θ(Is, It), and W ∈ R^d denote the parameters in a Bayesian self-attention module. Our Bayesian model considers W as a random variable, and our goal is to infer the posterior distribution p(W|D) and learn the parameters θ simultaneously. The whole proposed network can be viewed as the following probabilistic model:

p_θ(K^gt | Is, It, W) = N(K^gt | Φ(G(h_θ(Is, It), W)), σ_0²), (2)

where G stands for the function of the GB-SA module, h_θ is the network computing the correlation map, and σ_0 is the standard deviation of the Gaussian distribution. To avoid slow convergence and undesirable local minima (Blundell et al., 2015), we use a zero-mean mixture of Gaussians for the prior distribution p(W):

p(W) = ∏_{i=1}^{d} [ π N(W_i | 0, σ_1²) + (1 − π) N(W_i | 0, σ_2²) ], (3)

where π is the mixing coefficient and σ_1 and σ_2 correspond to the standard deviations of the two Gaussian components, respectively. To infer the Bayesian posterior distribution p(W|D) over the weights in self-attention, we follow the variational inference procedure (Shridhar et al., 2019) and estimate an approximate variational posterior q_ϕ(W) by minimizing the Kullback-Leibler (KL) divergence (Kullback & Leibler, 1951):

θ̂, ϕ̂ = argmin_{θ,ϕ} KL[q_ϕ(W) || p_θ(W|D)] = argmin_{θ,ϕ} { KL[q_ϕ(W) || p(W)] − E_{q_ϕ(W)}[log p_θ(D|W)] }, (4)

where the likelihood p_θ(D|W) = ∏_{j=1}^{M} p_θ(K_j^gt | I_j^s, I_j^t, W). In addition, we use a Gaussian distribution for the variational posterior, so the parameter ϕ = (µ, σ), where µ is the mean vector and σ is the standard deviation vector. To realize the GB-SA operation shown in Figure 2(a), we sample the attention weights W_g^bayes from the learnable Bayesian posterior distribution q_ϕ(W|D) and combine them directly with the correlation map X via a general matrix multiplication:

W_GB = G(X, W_g^bayes) = X ∗ W_g^bayes. (5)

Recall that in the self-attention mechanism (Vaswani et al., 2017), the attention weight is a non-negative matrix whose rows sum to one. Therefore, we apply a softmax function to obtain the attention weight W_g = softmax(W_GB), and the resulting weight W_g is then used to refine the initial matching features.
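To make Equations (2)-(5) more concrete, the following is a minimal sketch, under our own assumptions about tensor shapes and module structure (it is not the authors' implementation), of how a Gaussian variational posterior over the global attention weights could be sampled with the reparameterization trick and applied to the unfolded correlation map X. During training, the KL term from Equation (4) between q_ϕ(W) and the prior would be added to the task loss; that part is omitted here.

```python
import torch
import torch.nn as nn

class GlobalBayesianSelfAttention(nn.Module):
    def __init__(self, channels, num_tokens):
        super().__init__()
        # Variational posterior parameters phi = (mu, sigma) for the attention weight W.
        # The (channels x num_tokens) shape is an assumption made for this sketch.
        self.mu = nn.Parameter(torch.zeros(channels, num_tokens))
        self.log_sigma = nn.Parameter(torch.full((channels, num_tokens), -3.0))

    def forward(self, x):                           # x: (L, C) unfolded correlation map
        eps = torch.randn_like(self.mu)
        w = self.mu + self.log_sigma.exp() * eps    # one Monte-Carlo sample W ~ q_phi(W)
        w_gb = x @ w                                # Eq. (5): (L, C) @ (C, L) -> (L, L)
        attn = torch.softmax(w_gb, dim=-1)          # rows sum to one, as in standard attention
        return attn @ x                             # refined correlation features, (L, C)

layer = GlobalBayesianSelfAttention(channels=128, num_tokens=64)
out = layer(torch.randn(64, 128))
print(out.shape)  # torch.Size([64, 128])
```

Because a fresh weight sample is drawn on every forward pass, the attention acts like a learned, data-dependent regularizer rather than a single point estimate, which is the over-fitting argument made above.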
Such a GB-SA, which acts as a regularization on the weights of the network, can learn global matches from small amounts of data and is robust to over-fitting.

4.2 LOCAL BAYESIAN SELF-ATTENTION

Besides, the traditional self-attention (Dosovitskiy et al., 2021) is prone to introducing extra noisy matches when capturing the short-range interaction from the correlation maps in a small region. Inspired by the sliding windows used in convolutions, we introduce another Local Bayesian Self-Attention (LB-SA), which conducts the dot-product of the input and the sparse Bayesian weight according to a matrix factorization, to reason about the short-range matches from the semantic context. To achieve the LB-SA, we leverage the butterfly matrix (Dao et al., 2019) to generate a boolean matrix B and sample the attention weight W_l^bayes from the learnable Bayesian posterior distribution q_ϕ(W|D), which is inferred by rules similar to Equation 4. As shown in Figure 2(b), we employ the boolean matrix B to sparsify the Bayesian attention weight W_l^bayes:

A = B ⊙ W_l^bayes, (6)

where ⊙ is the Hadamard product of B and W_l^bayes, and the resulting A is a sparse Bayesian weight. To capture local correspondences within a limited receptive field, we leverage the matrix factorization technique to divide both the input X and the sparse Bayesian matrix A into n pairs of patches X_ij and A_ij with a window size S × S, where 1 ≤ i ≤ n, 1 ≤ j ≤ n, and n = Hs/S = Ws/S = Ht/S = Wt/S. Afterwards, each pair of sub-matrices is multiplied separately to generate the final Bayesian attention weight W_LB. Let L denote the function of the LB-SA module; then

W_LB = L(X, A) = X ∗ A = [ X_11 X_12 ; X_21 X_22 ] ∗ [ A_11 A_12 ; A_21 A_22 ] = [ X_11 ∗ A_11  X_12 ∗ A_12 ; X_21 ∗ A_21  X_22 ∗ A_22 ]. (7)

Compared to direct matrix multiplication, such a process has a strong capability of modeling local patterns while reducing the computational complexity. To efficiently model the long-range and short-range interactions between candidate matches from correlation maps, the resulting local attention weight W_LB is integrated with the global attention weight W_GB to obtain the final global-local attention weight W_gl = softmax(W_GB + W_LB) in our proposed GLBT. Consequently, GLBT is hierarchically aggregated as a cost aggregator to refine the initial correlation maps before feeding them into the decoder for flow estimation.

5 EXPERIMENTS

5.1 EXPERIMENTAL SETTINGS AND IMPLEMENTATION DETAILS Datasets. We conduct comprehensive experiments on three widely used benchmark datasets for semantic correspondence, including SPair-71k (Min et al., 2019b), PF-PASCAL (Ham et al., 2017) and PF-WILLOW (Ham et al., 2016). The SPair-71k dataset contains 70,958 image pairs with diverse variations in viewpoint and scale, split into 53,340 pairs for training, 5,384 pairs for validation and 12,234 pairs for testing. The PF-PASCAL dataset contains 1,351 image pairs from 20 categories, augmented to 2,940 training pairs, 308 validation pairs and 299 testing pairs. The PF-WILLOW dataset contains 900 image pairs from 4 categories, used for testing. Evaluation Metric. The percentage of correct keypoints (PCK) is the standard evaluation metric for category-level matching.
Given pairs of predicted keypoints K^pred and ground-truth keypoints K^gt, PCK computes the ratio of correctly predicted keypoints as

PCK = (1/N) Σ_{i=1}^{N} 1[ ||K_i^pred − K_i^gt|| ≤ α · max(H, W) ],

where H and W denote the height and width of either the entire image or the object bounding box, and α is a threshold that controls how much deviation between the predicted keypoint and the ground truth is tolerated. Implementation Details. We follow the recent method (Min et al., 2019a) to extract features from the best sub-layers of ResNet101 (He et al., 2016) pre-trained on the ImageNet (Deng et al., 2009) dataset. During training, the batch size is set to 8 for all experiments, and AdamW (Kingma & Ba, 2015) with a weight decay of 0.05 is adopted for optimization. The data augmentation techniques introduced in (Cho et al., 2021) are also used in our method. The learning rate for backbone features is set to 1e-6. The learning rate for the cost aggregation layers is initialized to 1e-5 and gradually decreased during training. We train the model for 300 epochs. All experiments are implemented with PyTorch (Paszke et al., 2019), and our method takes 38.6 ms of inference time on V100 GPUs.

5.2 BENCHMARK RESULTS AND ANALYSIS

To provide a fair comparison of our proposed GLBT with other state-of-the-art methods, including CNNGeo (Rocco et al., 2017), A2Net (Seo et al., 2018), NC-Net (Rocco et al., 2018b), WeakAlign (Rocco et al., 2018a), HPF (Min et al., 2019a), SCOT (Liu et al., 2020), DHPF (Min et al., 2020), CHM (Min & Cho, 2021), CATs (Cho et al., 2021), MMNet (Zhao et al., 2021), and VAT (Hong et al., 2022), we use the same backbone ResNet101 (He et al., 2016) to extract the features from a pair of images. All results are measured under the same PCK evaluation indicators on the benchmark datasets. Table 1 and Table 2 report the quantitative comparison of the proposed GLBT with the previous state-of-the-art methods on SPair-71k (Min et al., 2019b), PF-PASCAL (Ham et al., 2017), and PF-WILLOW (Ham et al., 2016), respectively. In Table 1, we find that the transformer-based cost aggregators outperform the others by a wide margin, due to self-attention's capability of capturing long-range matches. Compared to the previous best transformer-based method VAT, the overall performance of our GLBT surpasses it by 2.0% @ αbbox = 0.1, 1.0% @ αimg = 0.1 and 1.1% @ αimg = 0.1, on SPair-71k, PF-PASCAL, and PF-WILLOW, respectively. Moreover, we also compare the per-class results on SPair-71k in Table 2. GLBT achieves the best performance in most categories, such as aeroplane, bike and boat, because it integrates both global and local self-attention to learn the long-range and short-range matches between images when refining the matching scores. Figure 3 provides a visual comparison of results obtained from GLBT and the recent state-of-the-art methods, namely VAT (Hong et al., 2022), MMNet (Zhao et al., 2021) and CATs (Cho et al., 2021). The visual examples demonstrate that GLBT can match points between a pair of images more accurately than other methods. The results also show that GLBT has smaller offsets than the others for the correspondences between image pairs, further validating the effectiveness of our proposed method.

5.3 ABLATION STUDY AND ANALYSIS

In this section, we provide an ablation analysis to investigate the importance of the cost aggregation stage within the entire pipeline. We also show the details of our proposed GLBT, including one Global Bayesian Self-Attention (GB-SA) and another Local Bayesian Self-Attention (LB-SA).
For a fair comparison, we conduct all ablation study experiments with the same backbone ResNet101 (He et al., 2016) and each experiment is trained from scratch under the same settings. Overall Pipeline. Table 3 explores the impact of three modules, including feature extraction, cost aggregation, and flow estimation, for semantic correspondence. To validate that cost aggregation plays an essential role in the whole pipeline, we conduct ablation studies based on the different combinations of these modules. The results shown in Table 3 report the performance of involved models on SPair-71k (Min et al., 2019b), PF-PASCAL (Ham et al., 2017), and PF-WILLOW (Ham et al., 2016) in terms of PCK evaluation indicators with different thresholds. The results summarize that cost aggregation network contributes the most improvements to the final performance. Effect on GLB Self-Attention. As visible in Table 4, we explore the effectiveness of the Global and Local Bayesian Self-Attention (GLB-SA) for transformer-based cost aggregation network, on the SPair-71k (Min et al., 2019b), PF-PASCAL (Ham et al., 2017), and PF-WILLOW (Ham et al., 2016) benchmark datasets in terms of PCK @ α = 0.1. The baseline method adopts the Global Self-Attention (G-SA) Vaswani et al. (2017) based transformer to model the long-range matches between images for the refinement of correlation maps at the cost aggregation stage. To process the local semantic matches, we leverage matrix factorization (Ocker & Buice, 2021; Shah et al., 2015) to implement the Local Self-Attention (L-SA). Table 4 shows that the result of L-SA outper- forms the G-SA. Besides, the Global-Local Self-Attention (GL-SA), which is a combination of G-SA and L-SA, has a better performance than both G-SA and L-SA. To further investigate the effects of Bayesian self-attention for the transformer cost aggregator, we conduct the extra ablation experiments, including G-SA vs. the Global Bayesian Self-Attention (GB-SA), L-SA vs. the Local Bayesian Self-Attention (LB-SA), and the GL-SA vs. GLB-SA, respectively. Compared to the results shown in Table 4, we find that the application of the Bayesian inference to self-attention in the transformer outperforms the non-Bayesian self-attention, because such a Bayesian self-attention mechanism acts like a regularization. Among them, our proposed GLB-SA achieves the best performance on the refinement of the correlation, further validating its effect on finding semantic correspondence. Effect on Over-fitting for GLBT. To verify that the proposed Bayesian self-attention for the GLBT model can alleviate over-fitting, Figure 4 compares the loss curves obtained by the GLT and the GLBT. As shown in Figure 4(a), the loss curve of GLT fluctuates up and down in 50-150 epochs. The fluctuations are caused by the large intra-class variations in appearance and geometry for unconstrained image pairs. Figure 4(b) reports the loss curve of the GLBT, which is much more smooth than the GLT. We find that such a Bayesian self-attention can be regarded as a regularization mechanism to prevent the transformer-based model from over-fitting, when refining the correlation maps of challenging image pairs. Memory and Run-time. Table 5 compares the memory and run-time of DHPF (Min et al., 2020), CHM (Min & Cho, 2021), CATs (Cho et al., 2021), MMNet (Zhao et al., 2021), VAT (Hong et al., 2022) and GLBT. For a fair comparison, all methods employ the backbone ResNet101 for feature extraction, and the results are obtained using the same machine. 
Compared to other methods, GLBT and VAT leverage transformer-based cost aggregators, operate at a higher resolution and use more memory than the others, and surpass the other methods by a large margin. We also find that our proposed method outperforms the previous state-of-the-art method VAT in terms of PCK @ α = 0.1, while reducing memory and run-time by 0.4 GB and 18.7 ms, respectively.

6 CONCLUSION

In this paper, we have proposed a global-local Bayesian Transformer-based cost aggregation network, dubbed GLBT, for semantic correspondence. It integrates global and local Bayesian self-attention to infer long- and short-range relationships between candidate correlation matches based on Bayes' rule, capturing both global and local match-to-match interactions at the same time. We have demonstrated that our proposed method outperforms the existing state of the art by a large margin on public benchmark datasets. Moreover, we have also conducted extensive ablation studies to validate the effect of our proposed global-local Bayesian self-attention as applied to the Transformer-based cost aggregator. We hope that our findings can inspire further research in other domains.
1. What is the main contribution of the paper regarding dense semantic matching? 2. What are the strengths and weaknesses of the proposed approach, particularly in terms of its ability to prevent overfitting in transformers? 3. Do you have any concerns about the clarity and logical coherence of the paper's introduction and methodology? 4. How does the reviewer assess the novelty and reproducibility of the paper's content? 5. Are there any specific questions or areas of confusion regarding the paper's experiments or technical details?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper This paper solves the problem of dense semantic matching with ResNet 101 backbone and transformer-based 4-D correlation aggregation. The paper stated that their Bayesian self-attention module can mitigate over-fitting while modeling the long-range interaction from correlation maps. For the short-range interaction, this paper adopts local Bayesian self-attention. Experiments show that the proposed method gets impressive results over state-of-the-art methods. However, the logic of this paper is not clear. The author said that the Bayesian attention can alleviate the over-fitting problem of transformers, but no evidence is shown to support this. Besides, the method is not introduced clearly and I get even more confused when reading the codes. Strengths And Weaknesses *Strength The idea of using Bayesian attention to prevent over-fitting in transformers is interesting and reasonable. The authors designed two kinds of attention based on the Bayesian scheme. Experiments show that the proposed method gets an impressive improvement over state-of-the-art algorithms. *Weakness The introduction of this paper is not logically clear. The authors stated that the proposed GLBT can be used to prevent the over-fitting of attention in transformer models. But why Bayesian attention will work is not explained. Moreover, the sentence "Inspired by BayesNN (Blundell et al., 2015), which applied a variational inference ..., our proposed GLBT introduces ... for capturing the long-range and short-range match-to-match interaction " makes me confused. Is there any connection between BayesNN and global-local self-attention? Moreover, it's not the attention part but the feed-forward net part in a transformer block that has too many parameters. This paper doesn't take the FFN part into consideration which makes the motivation of this paper become problematic. The methods of this paper are not introduced clearly. I can't find exact words that can explain the cost aggregation part and the flow estimation part clearly. Figure 1 is not well drafted. It's hard for readers to understand what has been done in GLBT. How GLBT works is not described in detail. I read the codes and find that the implementation is just adding some Bayesian noise on the attention matrices of transformer blocks. It seems that it was the noise that brings the performance gain. Clarity, Quality, Novelty And Reproducibility Overall, I think this paper is just an incremental work on VAT (ECCV 2022). The motivation of this paper is not explained logically in the introduction part. The method section is not given with a clear overall framework and technical details. I read the codes and think they can't match the methods described.
ICLR
Title Global-Local Bayesian Transformer for Semantic Correspondence Abstract Cost aggregation is the key to finding semantic correspondence between a pair of similar images. Transformer-based cost aggregators have recently shown strong performance in obtaining high-quality correlation maps due to their capability of capturing long-range dependencies between matching points. However, such models are data-hungry and prone to over-fitting when training data is not sufficiently large. Besides, they easily incur incorrect matches when finding correspondences in the local semantic context. To address these issues, we propose a Global-Local Bayesian Transformer (GLBT) for cost aggregation. Specifically, GLBT introduces one global Bayesian self-attention module, whose weights are sampled from a learnable Bayesian posterior distribution, to mitigate over-fitting while modeling the long-range interaction from correlation maps. Furthermore, to model the short-range interaction between candidate matches, GLBT introduces another local Bayesian self-attention module, which factorizes both correlation maps and Bayesian attention weights into pairs of patches and conducts a matrix multiplication on individuals rather than a direct dot-product. Two self-attention modules are joined together to model the long-range and short-range interactions from correlation maps. Ultimately, GLBT is hierarchically aggregated for the refinement of correlation maps before feeding it to the flow estimator. We conduct extensive experiments to show the superiority of our proposed network to the state-of-the-art methods on datasets, including SPair-71k, PF-PASCAL, and PF-WILLOW. 1 INTRODUCTION Establishing dense semantic correspondences between images is a fundamental problem facilitating many vision tasks, including semantic segmentation (Min et al., 2021; Xie et al., 2021), 3D reconstruction (Kokkinos & Kokkinos, 2021a;b; Li et al., 2020b), and optical flow estimation(Yang & Ramanan, 2019). In contrast to the classical pixel-wise correspondence problems (Kim et al., 2003) that require images to be geometrically normalized and aligned, semantic correspondence considers unconstrained image pairs, posing additional challenges from large intra-class variations in appearance and geometry. Recent methods (Bristow et al., 2015; Cho et al., 2021; Zhao et al., 2021) for semantic correspondence generally follow the classical matching pipeline, including feature extraction, cost aggregation, and flow estimation. Some works (Rublee et al., 2011; Tola et al., 2010) attempted to find the semantic similarity between images by focusing on the feature extraction stage. These methods disregard the pixel-wise relationship between correlation features, resulting in sub-optimal performance. To overcome this issue, several methods (Jeon et al., 2020; Rocco et al., 2017; Truong et al., 2020b; Hong & Kim, 2021) introduced a regression network at the flow estimation stage to infer dense correspondences from correlation maps. However, such approaches rely on high-quality initial matching scores. Thereby, the latest methods (Min & Cho, 2021; Min et al., 2019a; Li et al., 2020a; Rocco et al., 2020; Min et al., 2020; Rocco et al., 2018b) have focused on designing an efficient cost aggregation module to improve the quality of correlation maps before feeding them into the flow estimation, proving the importance of cost aggregation networks. 
The core of the cost aggregation stage is to produce reliable correlation maps via the refinement of matching scores. Some models (Min & Cho, 2021; Rocco et al., 2018b) refined the local consistent matches from the initial correlation maps with high-dimensional 4D or 6D convolutions. However, such models lack the ability to achieve long-range context aggregation due to the inherently limited receptive fields. To tackle this problem, CATs (Cho et al., 2021) leveraged the vision transformer for cost aggregation to effectively refine the ambiguous matching scores in consideration of the global consensus. Nonetheless, it overlooks the spatial structure of the correlation map, leading to sub-optimal results. To further boost the performance, VAT (Hong et al., 2022) proposed a 4D Convolutional Swin Transformer as a cost aggregator to preserve the spatial structure of correlation maps, while providing an efficient self-attention to model long-range interaction between candidate matches. However, the existing Transformer-based cost aggregators (Hong et al., 2022; Casey et al., 2021; Cho et al., 2021) are infeasible to model the short-range pixel-to-pixel interaction, resulting in redundant noisy matches when dealing with the local semantic matches. In addition, since transformer architecture is prone to over-fitting, these transformer-based aggregators are data-hungry (Hassani et al., 2021), i.e., requiring enormous amounts of training data to obtain a good performance. To address these limitations, we propose a Global-Local Bayesian Transformer (GLBT) cost aggregator for semantic correspondence. Inspired by BayesNN (Blundell et al., 2015), which applied a variational inference on the weights of a neural network to prevent over-fitting, our proposed GLBT introduces the Global-Local Bayesian Self-Attention (GLB-SA) into the transformer aggregator for capturing the long-range and short-range match-to-match interaction from correlation maps simultaneously. Compared to the raw self-attention in the transformer (Cho et al., 2021; Vaswani et al., 2017), which suffers from a data-hungry issue due to the operation of dense matrix-vector multiplication, GLBT leverages the sparse matrix factorization (Dao et al., 2019) on the self-attention operation to avoid over-fitting via a reduction in its learnable parameters. The proposed GLBT module is then leveraged to hierarchically aggregate the multi-level matching correspondences on the different semantic contexts, achieving the refinement of correlation maps. Consequently, the refined correlation maps are applied in the decoder to infer the semantic correspondences from image pairs. We validate the effectiveness of our GLBT method on public benchmark datasets (Ham et al., 2016; 2017; Min et al., 2019b). Extensive experimental results demonstrate that our proposed method for semantic correspondence outperforms the previous state-of-the-art methods on several benchmarks. We also provide a detailed ablation analysis to verify the main components in GLBT. 2 RELATED WORK Semantic Correspondence. Finding semantic correspondences between image pairs poses additional challenges to intra-class appearance and shape variations among different instances from the same object or scene category. 
To address these challenges, approaches to semantic correspondence can be roughly categorized into hand-crafted feature-based methods (Bay et al., 2006; Dalal & Triggs, 2005; Ham et al., 2016; Liu et al., 2011; LoweDavid, 2004; Rublee et al., 2011; Tola et al., 2010) and learnable feature-based methods (Choy et al., 2016; Kim et al., 2018; 2017; Lee et al., 2019; Li et al., 2020a; Rocco et al., 2017; Seo et al., 2018). Hand-crafted techniques leverage the low-level feature descriptors, such as SIFT (LoweDavid, 2004), HOG (Taniai et al., 2016), and DAISY (Tola et al., 2010), to measure dense correspondences, lacking the capture of high-level semantics. To tackle this problem, most learnable techniques focus on building dense correspondences on highlevel semantic features of deep convolutional neural networks, such as NC-Net (Rocco et al., 2018b), ANC-Net (Li et al., 2020a), and GOCor (Truong et al., 2020a). However, solely relying on the deep learnable features limits the performance of semantic correspondences due to the direct output of the similarity scores from the correlation maps. To address this issue, (Rocco et al., 2017) proposed a regression network to estimate the parameters from the matching features, coping with incorrect matches from the initial learnable features at the flow estimation stage. Their success encourages many variant methods, e.g., GSF (Jeon et al., 2020) and GLU-Net (Truong et al., 2020b), to directly regress semantic correspondences from the feature matches. Cost Aggregation. To alleviate the requirement of high-quality initial matching scores, HPF (Min et al., 2020) introduced the RHM (Min et al., 2019a) cost aggregator into the learnable feature methods for geometric consistency enhancement. Later, numerous CNN-based feature-learnable variants (Min & Cho, 2021; Rocco et al., 2018b) utilized 4D or 6D convolution-based geometric matching algorithms to refine the local consistency of the initial correlation maps. Nonetheless, CNN-based aggregation networks fail to model global matches due to the limited receptive fields of convolutions. Transformer-based aggregators (Cho et al., 2021; Sun et al., 2021), which leveraged the self-attention mechanism (Vaswani et al., 2017) to capture the global match-to-match interaction from the initial correlation map, can solve this problem. However, such a self-attention is prone to introduce redundant noisy matches when modeling the short-range interaction in a small region, because it does not consider local contexts. Besides, Transformer-based deep networks starve huge amounts of training data to avoid over-confident decisions. Bayesian Neural Network. Applying Bayesian approaches (Shridhar et al., 2019; Fan et al., 2021; Zhang et al., 2021) to neural networks is an alternative to mitigating the over-fitting issue by offering uncertainty estimates so that Bayesian Neural Networks (BayesNNs) can easily learn from small datasets and are robust to over-fitting. In the past years, several methods, such as Variational Inference (Blundell et al., 2015; Graves, 2011), Laplace Approximation (MacKay, 1992), and MC Dropout (Gal & Ghahramani, 2015; 2016), have been widely applied to estimate the parameter uncertainty, which is propagated for predictions. Instead of selecting a single point estimate, BayesNNs use the Bayes rule to average results over parameter values and thus have a strong reasoning ability. 3 PRELIMINARY Let Is ∈ RHs×Ws×3 and It ∈ RHt×Wt×3 denote a pair of source and target images, respectively. 
The goal of dense semantic correspondence is to find the optimal $f^*$ that generates a correspondence flow containing the offsets between corresponding keypoints in the two images, i.e., $K^{pred} = f^*(I^s, I^t)$, where the correspondence flow $K^{pred} = \{(\Delta x^s_i, \Delta y^s_i)\}_{i=1}^{H_s \times W_s}$ contains the predicted offsets for all pixels in the source image. Following previous works, we consider learning $f^*$ in the supervised setting. More specifically, we are given a dataset $\mathcal{D} = \{(I^s_j, I^t_j, K^{gt}_j)\}_{j=1}^{M}$ containing $M$ image pairs and the associated ground-truth correspondence flows. Due to sparse annotations, the ground-truth flow $K^{gt}_j = \{(\Delta x^s_i, \Delta y^s_i)\}_{i=1}^{H_s \times W_s}$ is only non-zero at a subset of locations. We aim to learn an approximation $f_M$ by minimizing the distance between the predicted and the ground-truth correspondence flows:
$$f_M = \arg\min_f \frac{1}{M} \sum_{j=1}^{M} \big\|\Phi\big(f(I^s_j, I^t_j)\big) - K^{gt}_j\big\|,$$
where $\Phi$ is a masking function that sets the offsets at locations without ground-truth annotations to zero. The pipeline to design the function $f$ involves several basic steps, including feature extraction, cost aggregation, and flow estimation. Specifically, dense feature maps $D^s \in \mathbb{R}^{H_s \times W_s \times C}$ and $D^t \in \mathbb{R}^{H_t \times W_t \times C}$ are extracted from the image pair $I^s$ and $I^t$, respectively. Directly matching similarity between $D^s$ and $D^t$ without introducing any prior often suffers from ambiguous matches due to limited local repetitive patterns. To address this issue, cost aggregation techniques are employed to refine the matches from the initial correlation maps. The correspondence flow is, consequently, inferred from the refined matching scores. Our approach follows this common framework for semantic correspondence. As shown in Figure 1(a), we follow the previous works (Min et al., 2021; 2020; Truong et al., 2020b) and construct the correlation map using cosine similarity, $\mathcal{C}(D^s, D^t) \in \mathbb{R}^{H_s \times W_s \times H_t \times W_t}$ with entries $\mathcal{C}(x, y) = \frac{D^s_{x,:} \cdot D^t_{y,:}}{\|D^s_{x,:}\| \, \|D^t_{y,:}\|}$. The result is a 4D tensor representing the initial matching scores between an image pair. To capture rich semantic information, 4D convolutions (Min & Cho, 2021; Rocco et al., 2018b) are employed to extract multi-scale features at different levels of a backbone network. However, such dense feature points are weak at identifying global semantic alignment due to the limited receptive fields of convolutions. To address this issue, a cost aggregator is introduced to refine the correlation maps before feeding them to flow estimation. The current cost aggregators (Cho et al., 2021; Hong et al., 2022) leverage the transformer to refine correlation maps thanks to its global receptive field. However, such methods discard the ability to capture the short-range interaction between candidate matches, leading to extra noisy matches when matching semantic correspondences in a small region. Besides, they are data-hungry and require large amounts of training data to avoid over-fitting. 4 GLOBAL-LOCAL BAYESIAN TRANSFORMER Given the limitations of existing methods, we propose a Global-Local Bayesian Transformer (GLBT) to refine the correlation maps by considering the local and global interactions between candidate matches simultaneously. As visualized in Figure 1(b), GLBT stacks a group of Global-Local Bayesian Self-Attention (GLB-SA) modules, layer normalization, and multilayer perceptrons to refine the final correlation matches:
$$C' = \mathrm{GLBT}(X), \qquad (1)$$
where $X \in \mathbb{R}^{L \times C}$ is the correlation map unfolded from the result of $\mathrm{Conv4D}(\mathcal{C}(D^s, D^t))$, $L = H_s \times W_s \times H_t \times W_t$, and $C$ denotes the channels.
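As a concrete companion to the cosine-similarity construction above, the following is a minimal sketch of the 4D correlation tensor computation (PyTorch; the function name and toy sizes are ours, and this is not the authors' implementation):

```python
import torch
import torch.nn.functional as F

def correlation_map(feat_s, feat_t):
    """Cosine-similarity correlation between source and target feature maps.

    feat_s: (C, Hs, Ws) source features; feat_t: (C, Ht, Wt) target features.
    Returns the 4D tensor of initial matching scores, shape (Hs, Ws, Ht, Wt).
    """
    C, Hs, Ws = feat_s.shape
    _, Ht, Wt = feat_t.shape
    s = F.normalize(feat_s.reshape(C, Hs * Ws), dim=0)   # L2-normalize each descriptor
    t = F.normalize(feat_t.reshape(C, Ht * Wt), dim=0)
    corr = s.t() @ t                                      # (Hs*Ws, Ht*Wt) cosine similarities
    return corr.reshape(Hs, Ws, Ht, Wt)

# Toy usage: 64-dimensional features on 16 x 16 grids.
corr = correlation_map(torch.randn(64, 16, 16), torch.randn(64, 16, 16))
print(corr.shape)  # torch.Size([16, 16, 16, 16])
```

The unfolded map $X$ fed to GLBT would then be obtained by applying 4D convolutions to this tensor and reshaping the result to $L \times C$.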
Self-attention, which obtains the key, value and query from the initial correlation map $X$, is the core of GLBT. Instead of using the standard self-attention mechanism (Vaswani et al., 2017), we introduce one Global Bayesian Self-Attention (GB-SA) in Figure 2(a) to model the global match-to-match interaction under large semantic displacements, and another Local Bayesian Self-Attention (LB-SA) in Figure 2(b) to model the local match-to-match interaction under small semantic displacements. Both are then joined together to reason about the final correlation maps at the same time. 4.1 GLOBAL BAYESIAN SELF-ATTENTION The classical self-attention (Dosovitskiy et al., 2021) performs a dot-product over all pixels and is prone to being data-hungry (Liu et al., 2021; Yuan et al., 2021). BayesCNNs (Shridhar et al., 2019) average models sampled from the posterior distribution of convolution kernels, which reduces the need for large datasets and is robust to over-fitting. Inspired by this, we introduce a Global Bayesian Self-Attention (GB-SA), which directly performs matrix multiplication between the input and the Bayesian weight to learn the global interaction between candidate matches from the correlation maps. Let $\theta$ denote the network parameters used in the computation of the correlation map $X = h_\theta(I^s, I^t)$, and let $W \in \mathbb{R}^d$ denote the parameters in a Bayesian self-attention module. Our Bayesian model considers $W$ as a random variable, and our goal is to infer the posterior distribution $p(W \mid \mathcal{D})$ and learn the parameters $\theta$ simultaneously. The whole proposed network can be viewed as the following probabilistic model:
$$p_\theta(K^{gt} \mid I^s, I^t, W) = \mathcal{N}\big(K^{gt} \,\big|\, \Phi(G(h_\theta(I^s, I^t), W)), \, \sigma_0^2\big), \qquad (2)$$
where $G$ stands for the function computed by the GB-SA module, $h_\theta$ is the network computing the correlation map, and $\sigma_0$ is the standard deviation of the Gaussian distribution. To avoid slow convergence and poor local minima (Blundell et al., 2015), we use a zero-mean mixture of Gaussians as the prior distribution $p(W)$:
$$p(W) = \prod_{i=1}^{d} \big[\pi \, \mathcal{N}(W_i \mid 0, \sigma_1^2) + (1 - \pi) \, \mathcal{N}(W_i \mid 0, \sigma_2^2)\big], \qquad (3)$$
where $\pi$ is the mixing coefficient, and $\sigma_1$ and $\sigma_2$ are the standard deviations of the two Gaussian components, respectively. To infer the Bayesian posterior distribution $p(W \mid \mathcal{D})$ on the weights of self-attention, we follow the variational inference procedure (Shridhar et al., 2019) and estimate an approximate variational posterior $q_\phi(W)$ by minimizing the Kullback-Leibler (KL) divergence (Kullback & Leibler, 1951):
$$\hat{\theta}, \hat{\phi} = \arg\min_{\theta, \phi} \, \mathrm{KL}\big[q_\phi(W) \,\|\, p_\theta(W \mid \mathcal{D})\big] = \arg\min_{\theta, \phi} \, \mathrm{KL}\big[q_\phi(W) \,\|\, p(W)\big] - \mathbb{E}_{q_\phi(W)}\big[\log p_\theta(\mathcal{D} \mid W)\big], \qquad (4)$$
where the likelihood $p_\theta(\mathcal{D} \mid W) = \prod_{j=1}^{M} p_\theta(K^{gt}_j \mid I^s_j, I^t_j, W)$. In addition, we use a Gaussian distribution for the variational posterior, so the parameter $\phi = (\mu, \sigma)$, where $\mu$ is the mean vector and $\sigma$ is the standard deviation vector. To perform the GB-SA operation shown in Figure 2(a), we sample the attention weights $W^{bayes}_g$ from the learnable variational posterior $q_\phi(W)$ and multiply them directly with the correlation map $X$ via a general matrix multiplication:
$$W_{GB} = G(X, W^{bayes}_g) = X * W^{bayes}_g. \qquad (5)$$
Recall that in the self-attention mechanism (Vaswani et al., 2017), the attention weight is a matrix with non-negative entries whose rows sum to one. Therefore, we apply a softmax function to obtain the attention weight $W_g = \mathrm{softmax}(W_{GB})$, and the resulting weight $W_g$ is then used to refine the initial matching features.
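To make the mechanics of Eqs. (2)-(5) concrete, the following is a minimal sketch of a global Bayesian self-attention layer as we read it: the weight has a learnable Gaussian variational posterior sampled by reparameterization, the sample is multiplied against the unfolded correlation map and passed through a softmax, and a Monte-Carlo estimate of the KL term in Eq. (4) against the scale-mixture prior of Eq. (3) is exposed so it can be added to the training loss. The class name, the square C x C weight shape, and the softplus parameterization of the standard deviation are our assumptions, following Blundell et al. (2015) rather than details given in the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GlobalBayesianSelfAttention(nn.Module):
    """Sketch of GB-SA: sample W ~ q(W) = N(mu, sigma^2), form W_GB = X @ W, softmax."""

    def __init__(self, dim, prior_sigma1=1.0, prior_sigma2=0.1, prior_pi=0.5):
        super().__init__()
        self.mu = nn.Parameter(torch.zeros(dim, dim))          # variational mean
        self.rho = nn.Parameter(torch.full((dim, dim), -4.0))  # sigma = softplus(rho)
        self.register_buffer("prior_sigma", torch.tensor([prior_sigma1, prior_sigma2]))
        self.prior_pi = prior_pi

    def forward(self, x):
        # x: (L, C) unfolded correlation map; the (C, C) weight shape is an assumption.
        sigma = F.softplus(self.rho)
        w = self.mu + sigma * torch.randn_like(sigma)          # reparameterized sample
        self.kl = self._kl_monte_carlo(w, sigma)               # add self.kl to the loss
        w_gb = x @ w                                           # Eq. (5): X * W_bayes
        return F.softmax(w_gb, dim=-1)                         # W_g, rows sum to one

    def _kl_monte_carlo(self, w, sigma):
        # Single-sample estimate of KL[q(W) || p(W)] with the scale-mixture prior of Eq. (3).
        log_q = torch.distributions.Normal(self.mu, sigma).log_prob(w).sum()
        comps = torch.stack([torch.distributions.Normal(0.0, s).log_prob(w)
                             for s in self.prior_sigma])
        mix = torch.tensor([self.prior_pi, 1.0 - self.prior_pi]).view(2, 1, 1).to(w)
        log_p = torch.logsumexp(mix.log() + comps, dim=0).sum()
        return log_q - log_p

# Toy usage: a 64-channel correlation map with 256 candidate matches.
layer = GlobalBayesianSelfAttention(dim=64)
w_g = layer(torch.randn(256, 64))
print(w_g.shape, layer.kl.item())  # torch.Size([256, 64]) and a scalar KL estimate
```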
Such a GB-SA, which acts as a regularization on the weights of the network, can learn the global matches from small data and is robust to over-fitting. 4.2 LOCAL BAYESIAN SELF-ATTENTION Besides, the traditional self-attention (Dosovitskiy et al., 2021) is prone to introducing extra noisy matches when capturing the short-range interaction from the correlation maps in a small region. Inspired by the sliding windows used in convolutions, we introduce another Local Bayesian Self-Attention (LB-SA), which computes the product of the input and a sparse Bayesian weight according to a matrix factorization, to reason about the short-range matches from the semantic context. To achieve the LB-SA, we leverage the butterfly matrix (Dao et al., 2019) to generate a boolean matrix $B$ and sample the attention weight $W^{bayes}_l$ from the learnable variational posterior $q_\phi(W)$, which is inferred by the same rules as Equation 4. As shown in Figure 2(b), we employ the boolean matrix $B$ to sparsify the Bayesian attention weight $W^{bayes}_l$:
$$A = B \odot W^{bayes}_l, \qquad (6)$$
where $\odot$ is the Hadamard product of $B$ and $W^{bayes}_l$, and the resulting $A$ is a sparse Bayesian weight. To capture the local correspondences within a limited receptive field, we leverage the matrix factorization technique to divide both the input $X$ and the sparse Bayesian matrix $A$ into $n \times n$ pairs of patches $X_{ij}$ and $A_{ij}$ with a window size $S \times S$, where $1 \le i \le n$, $1 \le j \le n$ and $n = H_s/S = W_s/S = H_t/S = W_t/S$. Afterwards, each pair of sub-matrices is multiplied separately to generate the final Bayesian attention weight $W_{LB}$. Let $\mathcal{L}$ denote the function in the LB-SA module; we have:
$$W_{LB} = \mathcal{L}(X, A) = X * A = \begin{bmatrix} X_{11} & X_{12} \\ X_{21} & X_{22} \end{bmatrix} * \begin{bmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{bmatrix} = \begin{bmatrix} X_{11} * A_{11} & X_{12} * A_{12} \\ X_{21} * A_{21} & X_{22} * A_{22} \end{bmatrix}. \qquad (7)$$
Compared to direct matrix multiplication, this process has a strong capability of modeling local patterns while reducing the computational complexity. To efficiently model the long-range and short-range interactions between candidate matches from correlation maps, the resulting local attention weight $W_{LB}$ is integrated with the global attention weight $W_{GB}$ to obtain the final global-local attention weight $W_{gl} = \mathrm{softmax}(W_{GB} + W_{LB})$ in our proposed GLBT; a code sketch of this block-factorized attention is given below, after the dataset descriptions. Consequently, the GLBT is hierarchically aggregated as a cost aggregator to refine the initial correlation maps before feeding them into the decoder for flow estimation. 5 EXPERIMENTS 5.1 EXPERIMENTAL SETTINGS AND IMPLEMENTATION DETAILS Datasets. We conduct comprehensive experiments on three widely-used benchmark datasets for semantic correspondence: SPair-71k (Min et al., 2019b), PF-PASCAL (Ham et al., 2017) and PF-WILLOW (Ham et al., 2016). The SPair-71k dataset contains 70,958 image pairs with diverse variations in viewpoint and scale, split into 53,340 pairs for training, 5,384 pairs for validation and 12,234 pairs for testing. The PF-PASCAL dataset contains 1,351 image pairs from 20 categories, augmented to 2,940 training pairs, 308 validation pairs and 299 testing pairs. The PF-WILLOW dataset contains 900 image pairs from 4 categories, used for testing.
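As referenced in Section 4.2, the following is a minimal sketch of the block-factorized local Bayesian self-attention: it samples the local Bayesian weight, sparsifies it with a boolean mask (Eq. 6), and applies the block-wise product of Eq. (7). A random mask stands in for the butterfly pattern of Dao et al. (2019), and the square toy shapes, function names, and window size are our assumptions rather than the paper's implementation.

```python
import torch
import torch.nn.functional as F

def blockwise_product(X, A, S):
    """Eq. (7): partition X and A into S x S blocks and multiply matching blocks."""
    L = X.shape[0]
    out = torch.zeros_like(X)
    for i in range(0, L, S):
        for j in range(0, L, S):
            out[i:i + S, j:j + S] = X[i:i + S, j:j + S] @ A[i:i + S, j:j + S]
    return out

def local_bayesian_attention(X, w_mu, w_rho, B, S):
    """LB-SA sketch: A = B * W_bayes (Eq. 6), then W_LB = X * A block by block (Eq. 7)."""
    sigma = F.softplus(w_rho)
    w_bayes = w_mu + sigma * torch.randn_like(sigma)    # sample from the variational posterior
    A = B.to(w_bayes.dtype) * w_bayes                   # sparsify the Bayesian weight
    return blockwise_product(X, A, S)

# Toy usage on a 16 x 16 correlation slice with 4 x 4 windows.
L, S = 16, 4
X = torch.randn(L, L)
B = torch.rand(L, L) < 0.25                             # stand-in for the butterfly mask
W_LB = local_bayesian_attention(X, torch.zeros(L, L), torch.full((L, L), -4.0), B, S)
# The final global-local weight of Section 4.2 would then be softmax(W_GB + W_LB).
print(W_LB.shape)  # torch.Size([16, 16])
```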
Evaluation Metric. The percentage of correct keypoints (PCK) is the standard evaluation metric for category-level matching. Given a pair of predicted keypoints $K^{pred}$ and ground-truth keypoints $K^{gt}$, PCK computes the ratio of correctly predicted keypoints by
$$\mathrm{PCK} = \frac{1}{N} \sum_{i=1}^{N} \mathbb{1}\big[\|K^{pred}_i - K^{gt}_i\| \le \alpha \cdot \max(H, W)\big],$$
where $H$ and $W$ denote the height and width of the entire image or of an object bounding box, and $\alpha$ is a tolerance threshold on the distance between the predicted keypoint and the ground truth. Implementation Details. We follow the recent method (Min et al., 2019a) to extract features from the best sub-layers of ResNet101 (He et al., 2016) pre-trained on the ImageNet (Deng et al., 2009) dataset. During training, the batch size is set to 8 for all experiments and AdamW (Kingma & Ba, 2015) with a weight decay of 0.05 is adopted for optimization. The data augmentation techniques introduced in (Cho et al., 2021) are also used in our method. The learning rate for the backbone features is set to 1e-6. The learning rate for the cost aggregation layers is initialized to 1e-5 and gradually decreased during training. We train the model for 300 epochs. All experiments are implemented with PyTorch (Paszke et al., 2019), and our method takes 38.6 ms of inference time on V100 GPUs. 5.2 BENCHMARK RESULTS AND ANALYSIS To provide a fair comparison of our proposed GLBT with other state-of-the-art methods, including CNNGeo (Rocco et al., 2017), A2Net (Seo et al., 2018), NC-Net (Rocco et al., 2018b), WeakAlign (Rocco et al., 2018a), HPF (Min et al., 2019a), SCOT (Liu et al., 2020), DHPF (Min et al., 2020), CHM (Min & Cho, 2021), CATs (Cho et al., 2021), MMNet (Zhao et al., 2021), and VAT (Hong et al., 2022), we use the same ResNet101 backbone (He et al., 2016) to extract features from a pair of images. All results are measured under the same PCK evaluation settings on the benchmark datasets. Table 1 and Table 2 report the quantitative comparison of the proposed GLBT with the previous state-of-the-art methods on SPair-71k (Min et al., 2019b), PF-PASCAL (Ham et al., 2017), and PF-WILLOW (Ham et al., 2016), respectively. In Table 1, we find that the transformer-based cost aggregators outperform the others by a wide margin, owing to the ability of self-attention to capture long-range matches. Compared to the previous best transformer-based method, VAT, the overall performance of our GLBT surpasses it by 2.0% @ αbbox = 0.1, 1.0% @ αimg = 0.1 and 1.1% @ αimg = 0.1 on SPair-71k, PF-PASCAL, and PF-WILLOW, respectively. Moreover, we also compare the per-class results on SPair-71k in Table 2. GLBT achieves the best performance in most categories, such as aeroplane, bike and boat, because it integrates both global and local self-attention to learn long-range and short-range matches between images when refining the matching scores. Figure 3 provides a visual comparison of the results obtained from GLBT and the recent state-of-the-art methods, namely VAT (Hong et al., 2022), MMNet (Zhao et al., 2021) and CATs (Cho et al., 2021). The visual examples demonstrate that GLBT matches points between a pair of images more accurately than the other methods. The results also show that GLBT produces smaller offsets than the others for the correspondences between image pairs, further validating the effectiveness of our proposed method. 5.3 ABLATION STUDY AND ANALYSIS In this section, we provide an ablation analysis to investigate the importance of the cost aggregation stage within the entire pipeline. We also examine the components of our proposed GLBT, namely the Global Bayesian Self-Attention (GB-SA) and the Local Bayesian Self-Attention (LB-SA).
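All PCK numbers quoted above and in the ablations that follow are computed with the rule from Section 5.1; as a reference, a minimal sketch of that computation (ours, not the authors' evaluation script) is:

```python
import torch

def pck(kp_pred, kp_gt, H, W, alpha=0.1):
    """Percentage of correct keypoints: a prediction counts as correct when its
    distance to the ground truth is within alpha * max(H, W)."""
    dist = torch.linalg.norm(kp_pred - kp_gt, dim=-1)   # (N,) Euclidean distances
    return (dist <= alpha * max(H, W)).float().mean().item()

# Toy usage with 5 keypoints on a 240 x 320 image (or bounding box, for alpha_bbox).
pred = torch.tensor([[10., 12.], [50., 60.], [100., 90.], [200., 150.], [30., 30.]])
gt = pred + torch.tensor([[2., 1.], [40., 0.], [3., 3.], [1., 1.], [0., 5.]])
print(pck(pred, gt, H=240, W=320, alpha=0.1))  # 0.8: one keypoint is off by more than 32 px
```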
For a fair comparison, we conduct all ablation study experiments with the same ResNet101 backbone (He et al., 2016), and each experiment is trained from scratch under the same settings. Overall Pipeline. Table 3 explores the impact of the three modules, namely feature extraction, cost aggregation, and flow estimation, on semantic correspondence. To validate that cost aggregation plays an essential role in the whole pipeline, we conduct ablation studies based on different combinations of these modules. Table 3 reports the performance of the involved models on SPair-71k (Min et al., 2019b), PF-PASCAL (Ham et al., 2017), and PF-WILLOW (Ham et al., 2016) in terms of PCK at different thresholds. The results show that the cost aggregation network contributes the most to the final performance. Effect of GLB Self-Attention. As shown in Table 4, we explore the effectiveness of the Global-Local Bayesian Self-Attention (GLB-SA) for the transformer-based cost aggregation network on the SPair-71k (Min et al., 2019b), PF-PASCAL (Ham et al., 2017), and PF-WILLOW (Ham et al., 2016) benchmark datasets in terms of PCK @ α = 0.1. The baseline method adopts a transformer with Global Self-Attention (G-SA) (Vaswani et al., 2017) to model the long-range matches between images for the refinement of correlation maps at the cost aggregation stage. To process the local semantic matches, we leverage matrix factorization (Ocker & Buice, 2021; Shah et al., 2015) to implement the Local Self-Attention (L-SA). Table 4 shows that L-SA outperforms G-SA. Besides, the Global-Local Self-Attention (GL-SA), which is a combination of G-SA and L-SA, performs better than both G-SA and L-SA. To further investigate the effect of Bayesian self-attention in the transformer cost aggregator, we conduct additional ablation experiments comparing G-SA with the Global Bayesian Self-Attention (GB-SA), L-SA with the Local Bayesian Self-Attention (LB-SA), and GL-SA with GLB-SA, respectively. From the results in Table 4, we find that applying Bayesian inference to the self-attention in the transformer outperforms non-Bayesian self-attention, because the Bayesian self-attention mechanism acts as a regularizer. Among them, our proposed GLB-SA achieves the best performance on the refinement of the correlation maps, further validating its effectiveness for finding semantic correspondence. Effect of Bayesian Self-Attention on Over-fitting. To verify that the proposed Bayesian self-attention in the GLBT model can alleviate over-fitting, Figure 4 compares the loss curves obtained by the GLT (the non-Bayesian global-local transformer) and the GLBT. As shown in Figure 4(a), the loss curve of the GLT fluctuates up and down between epochs 50 and 150. The fluctuations are caused by the large intra-class variations in appearance and geometry of unconstrained image pairs. Figure 4(b) reports the loss curve of the GLBT, which is much smoother than that of the GLT. We find that the Bayesian self-attention can be regarded as a regularization mechanism that prevents the transformer-based model from over-fitting when refining the correlation maps of challenging image pairs. Memory and Run-time. Table 5 compares the memory and run-time of DHPF (Min et al., 2020), CHM (Min & Cho, 2021), CATs (Cho et al., 2021), MMNet (Zhao et al., 2021), VAT (Hong et al., 2022) and GLBT. For a fair comparison, all methods employ the ResNet101 backbone for feature extraction, and the results are obtained using the same machine.
Compared to the other methods, GLBT and VAT leverage transformer-based cost aggregators, use larger resolution and more memory, and surpass the other methods by a large margin. We also find that, compared to the previous state-of-the-art method VAT, our proposed method achieves higher PCK @ α = 0.1 while reducing the memory and run-time by 0.4 GB and 18.7 ms, respectively. 6 CONCLUSION In this paper, we have proposed a global-local Bayesian Transformer-based cost aggregation network, dubbed GLBT, for semantic correspondence. It integrates global and local Bayesian self-attention to infer the long- and short-range relationships between candidate correlation matches based on Bayes' rule, achieving both global and local match-to-match interaction at the same time. We have demonstrated that our proposed method outperforms the existing state of the art by a large margin on public benchmark datasets. Moreover, we have conducted extensive ablation studies to validate the effect of our proposed global-local Bayesian self-attention applied in the Transformer-based cost aggregator. We hope that our findings can inspire further research in other domains.
1. What is the focus and contribution of the paper on semantic correspondence? 2. What are the strengths of the proposed approach, particularly in terms of preventing overfitting and reducing computational complexity? 3. What are the weaknesses of the paper, especially regarding the lack of experiments to demonstrate its effectiveness? 4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper The paper introduces a novel architecture, GLBT (Global Local Bayesian Transformer), to address the problem of finding semantic correspondences between pairs of similar images. The proposed architecture consists of a global Bayesian self-attention module, to prevent over-fitting while modelling long-range interactions from correlation maps, and a local self-attention module for short-range interactions. The authors propose factorizing both correlation maps and Bayesian attention weights and using matrix multiplication for the components, instead of direct dot products. The proposed method shows impressive results on PF-Pascal, PF-Willow and SPair71k. Strengths And Weaknesses Strengths: method robust to overfitting (using a Bayesian Neural Network) and suitable for small datasets; detailed benchmark and ablation study. reducing computational complexity by matrix factorization. comparison + ablation of traditional self-attention (which introduces noisy matches) vs LB-SA. Weaknesses: (minor) Why is there a (1-\pi) term in Eqn 3? It would be beneficial to add an experiment to show that the GLBT indeed prevents overfitting (e.g. using less training data). no details about the number of stacked GLB-SA layers and the impact of increasing or reducing this number; Clarity, Quality, Novelty And Reproducibility method is clearly explained; sufficient details are included in the experiments section (backbone architecture, model hyperparameters). the proposed method combines prior ideas, resulting in a novel architecture which addresses the limitations of prior models: long-range context aggregation for 4D-convolution-based models, and short-range interactions for transformer-based models, which result in redundant, noisy matches for local correspondences.
ICLR
Title Global-Local Bayesian Transformer for Semantic Correspondence Abstract Cost aggregation is the key to finding semantic correspondence between a pair of similar images. Transformer-based cost aggregators have recently shown strong performance in obtaining high-quality correlation maps due to their capability of capturing long-range dependencies between matching points. However, such models are data-hungry and prone to over-fitting when the training data is not sufficiently large. Besides, they easily incur incorrect matches when finding correspondences in the local semantic context. To address these issues, we propose a Global-Local Bayesian Transformer (GLBT) for cost aggregation. Specifically, GLBT introduces one global Bayesian self-attention module, whose weights are sampled from a learnable Bayesian posterior distribution, to mitigate over-fitting while modeling the long-range interaction from correlation maps. Furthermore, to model the short-range interaction between candidate matches, GLBT introduces another local Bayesian self-attention module, which factorizes both the correlation maps and the Bayesian attention weights into pairs of patches and performs matrix multiplication on the individual pairs rather than a direct dot-product. The two self-attention modules are joined together to model the long-range and short-range interactions from correlation maps. Ultimately, GLBT is hierarchically aggregated to refine the correlation maps before feeding them to the flow estimator. We conduct extensive experiments to show the superiority of our proposed network over the state-of-the-art methods on the SPair-71k, PF-PASCAL, and PF-WILLOW datasets. 1 INTRODUCTION Establishing dense semantic correspondences between images is a fundamental problem facilitating many vision tasks, including semantic segmentation (Min et al., 2021; Xie et al., 2021), 3D reconstruction (Kokkinos & Kokkinos, 2021a;b; Li et al., 2020b), and optical flow estimation (Yang & Ramanan, 2019). In contrast to the classical pixel-wise correspondence problems (Kim et al., 2003) that require images to be geometrically normalized and aligned, semantic correspondence considers unconstrained image pairs, posing additional challenges from large intra-class variations in appearance and geometry. Recent methods (Bristow et al., 2015; Cho et al., 2021; Zhao et al., 2021) for semantic correspondence generally follow the classical matching pipeline, including feature extraction, cost aggregation, and flow estimation. Some works (Rublee et al., 2011; Tola et al., 2010) attempted to find the semantic similarity between images by focusing on the feature extraction stage. These methods disregard the pixel-wise relationship between correlation features, resulting in sub-optimal performance. To overcome this issue, several methods (Jeon et al., 2020; Rocco et al., 2017; Truong et al., 2020b; Hong & Kim, 2021) introduced a regression network at the flow estimation stage to infer dense correspondences from correlation maps. However, such approaches rely on high-quality initial matching scores. Therefore, the latest methods (Min & Cho, 2021; Min et al., 2019a; Li et al., 2020a; Rocco et al., 2020; Min et al., 2020; Rocco et al., 2018b) have focused on designing an efficient cost aggregation module to improve the quality of correlation maps before feeding them into the flow estimator, proving the importance of cost aggregation networks.
1. What is the main contribution of the paper regarding semantic correspondence? 2. What are the strengths and weaknesses of the proposed Bayesian global-local transformer module? 3. Do you have any concerns regarding the presentation and explanation of the paper's content? 4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper? 5. Are there any questions or concerns regarding the experimental setup and comparisons with other works?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper This paper proposes a Bayesian global-local transformer network for semantic correspondence. It adopts the widely used pipeline of feature extraction, cost aggregation, and flow estimation for this task. The contribution lies in a Bayesian global-local transformer module for cost aggregation. It uses Bayesian CNNs for extracting self-attention and relies on sparse matrix factorization to implement local self-attention. Other operations are similar to the standard Transformer layer. Experiments are conducted on three datasets, with results of similar works for comparison. Strengths And Weaknesses +: The motivation of addressing the short-range modeling drawback of Transformers is reasonable when dealing with the semantic correspondence problem. +: The proposed solution of using Bayesian CNNs for global and local self-attention is new to the community. -: While the paper provides an ablation study, it offers little insight into the proposed modules. For instance, (1) the influence of different sparsity levels in LB-SA is unclear; it would be more insightful if the paper discussed and validated different ways of setting the sparsity matrix in Equ. (7); (2) the influence of the standard deviations of the Gaussian distributions in GB-SA is unclear, and whether these parameters are manually set or learned is not stated. -: The presentation could be improved. (1) Not all symbols are explained in Equ. (2)(3). (2) It is unclear why there are two Gaussians in Equ. (3). (3) It is unclear how the obtained Bayesian weights can be regarded as the attention weight; the reviewer cannot extract the underlying principles and assumptions from the paper. (4) Figure 2 (especially Figure 2(a)) is not very informative about the difference between Bayesian SA and standard SA; the reviewer cannot understand from this figure how using Bayesian SA addresses the data-hungry problem. (5) The reported metrics in the tables are not consistent with those in the text: \alpha_bbox for PF-WILLOW in Table 1 while the text says \alpha_img, and \alpha_bbox for SPair-71k in Table 1 while it is \alpha_img in Table 2. (6) The paper mentions "additional challenges" of semantic correspondence at the beginning of Related Work; however, there is no description of the basic challenge, and if we do not know the "basic" one, what does "additional" mean? -: Experimental setups are unclear and can be improved in some aspects. (1) Are all the compared methods implemented and trained by the authors? If so, what is the quality of these re-implementations? Did the authors double-check and compare their re-implementations against the original results? (2) In addition, the paper only says that it uses the same backbone for the other competitive methods; does this imply that the other parts of these works use the released weights with only the backbone features replaced? (3) To support the argument that GLBT alleviates over-fitting, it would be better to conduct experiments with less labeled data. (4) The paper states many times that the current transformer-based methods suffer from noisy and inferior matches when dealing with local semantic correspondences; however, the paper lacks solid evidence to support this key claim. Clarity, Quality, Novelty And Reproducibility Clarity: The presentation could be improved; please refer to the specific comments listed in the weaknesses above. Quality: The paper is not well written; besides the clarity issues, there are also some unsupported arguments.
Novelty: The motivation of incorporating local self-attention into a global transformer to address the problem of semantic correspondence is reasonable. However, another argument could be that, since the feature extraction part already considers local information, do we still need to consider this issue in the following aggregation stage, or would it be better to place this transformer-based short-range modeling in the feature extraction part? The paper does not consider or discuss this possibility, nor analyze its potential in this context. Reproducibility: The implementation section only describes the basic network and the parameters of the whole architecture; the details and setups of the core components, GB-SA and LB-SA, are unclear. Reproducing this work is not easy.
ICLR
Title Global Convergence of Three-layer Neural Networks in the Mean Field Regime Abstract In the mean field regime, neural networks are appropriately scaled so that as the width tends to infinity, the learning dynamics tends to a nonlinear and nontrivial dynamical limit, known as the mean field limit. This lends a way to study large-width neural networks via analyzing the mean field limit. Recent works have successfully applied such analysis to two-layer networks and provided global convergence guarantees. The extension to multilayer ones however has been a highly challenging puzzle, and little is known about the optimization efficiency in the mean field regime when there are more than two layers. In this work, we prove a global convergence result for unregularized feedforward three-layer networks in the mean field regime. We first develop a rigorous framework to establish the mean field limit of three-layer networks under stochastic gradient descent training. To that end, we propose the idea of a neuronal embedding, which comprises a fixed probability space that encapsulates neural networks of arbitrary sizes. The identified mean field limit is then used to prove a global convergence guarantee under suitable regularity and convergence mode assumptions, which, unlike previous works on two-layer networks, does not rely critically on convexity. Underlying the result is a universal approximation property, natural to neural networks, which importantly is shown to hold at any finite training time (not necessarily at convergence) via an algebraic topology argument. 1 INTRODUCTION Interest in the theoretical understanding of the training of neural networks has led to the recent discovery of a new operating regime: the neural network and its learning rates are scaled appropriately, such that as the width tends to infinity, the network admits a limiting learning dynamics in which all parameters evolve nonlinearly with time.1 This is known as the mean field (MF) limit (Mei et al. (2018); Chizat & Bach (2018); Rotskoff & Vanden-Eijnden (2018); Sirignano & Spiliopoulos (2018); Nguyen (2019); Araújo et al. (2019); Sirignano & Spiliopoulos (2019)). The four works Mei et al. (2018); Chizat & Bach (2018); Rotskoff & Vanden-Eijnden (2018); Sirignano & Spiliopoulos (2018) led the first wave of efforts in 2018 and analyzed two-layer neural networks. They established a connection between the network under training and its MF limit. They then used the MF limit to prove that two-layer networks could be trained to find (near) global optima using variants of gradient descent, despite non-convexity (Mei et al. (2018); Chizat & Bach (2018)). The MF limit identified by these works assumes the form of gradient flows in the measure space, which factors out the invariance from the action of a symmetry group on the model. Interestingly, by lifting to the measure space, with a convex loss function (e.g. squared loss), one obtains a limiting optimization problem that is convex (Bengio et al. (2006); Bach (2017)).
∗This paper is a conference submission. We refer to the work Nguyen & Pham (2020) and its companion note Pham & Nguyen (2020) for generalizations as well as other conditions for global convergence in the case of multilayer neural networks.
†Department of Mathematics, Stanford University. This work was done in parts while H. T. Pham was at the University of Cambridge.
‡The Voleon Group. This work was done while P.-M. Nguyen was at Stanford University.
§The author ordering is randomized. 1This is to be contrasted with another major operating regime (the NTK regime) where parameters essentially do not evolve and the model behaves like a kernel method (Jacot et al. (2018); Chizat et al. (2019); Du et al. (2019); Allen-Zhu et al. (2019); Zou et al. (2018); Lee et al. (2019)). Chizat & Bach (2018) utilize convexity, although the mechanisms to attain global convergence in these works are more sophisticated than the usual convex optimization setup in Euclidean spaces. The extension to multilayer networks has enjoyed much less progresses. The works Nguyen (2019); Araújo et al. (2019); Sirignano & Spiliopoulos (2019) argued, heuristically or rigorously, for the existence of a MF limiting behavior under gradient descent training with different assumptions. In fact, it has been argued that the difficulty is not simply technical, but rather conceptual (Nguyen (2019)): for instance, the presence of intermediate layers exhibits multiple symmetry groups with intertwined actions on the model. Convergence to the global optimum of the model under gradientbased optimization has not been established when there are more than two layers. In this work, we prove a global convergence guarantee for feedforward three-layer networks trained with unregularized stochastic gradient descent (SGD) in the MF regime. After an introduction of the three-layer setup and its MF limit in Section 2, our development proceeds in two main steps: Step 1 (Theorem 3 in Section 3): We first develop a rigorous framework that describes the MF limit and establishes its connection with a large-width SGD-trained three-layer network. Here we propose the new idea of a neuronal embedding, which comprises of an appropriate non-evolving probability space that encapsulates neural networks of arbitrary sizes. This probability space is in general abstract and is constructed according to the (not necessarily i.i.d.) initialization scheme of the neural network. This idea addresses directly the intertwined action of multiple symmetry groups, which is the aforementioned conceptual obstacle (Nguyen (2019)), thereby covering setups that cannot be handled by formulations in Araújo et al. (2019); Sirignano & Spiliopoulos (2019) (see also Section 5 for a comparison). Our analysis follows the technique from Sznitman (1991); Mei et al. (2018) and gives a quantitative statement: in particular, the MF limit yields a good approximation of the neural network as long as n−1min log nmax 1 independent of the data dimension, where nmin and nmax are the minimum and maximum of the widths. Step 2 (Theorem 8 in Section 4): We prove that the MF limit, given by our framework, converges to the global optimum under suitable regularity and convergence mode assumptions. Several elements of our proof are inspired by Chizat & Bach (2018); the technique in their work however does not generalize to our three-layer setup. Unlike previous two-layer analyses, we do not exploit convexity; instead we make use of a new element: a universal approximation property. The result turns out to be conceptually new: global convergence can be achieved even when the loss function is non-convex. An important crux of the proof is to show that the universal approximation property holds at any finite training time (but not necessarily at convergence, i.e. at infinite time, since the property may not realistically hold at convergence). 
Together these two results imply a positive statement on the optimization efficiency of SGD-trained unregularized feedforward three-layer networks (Corollary 10). Our results can be extended to the general multilayer case – with new ideas on top and significantly more technical works – or used to obtain new global convergence guarantees in the two-layer case (Nguyen & Pham (2020); Pham & Nguyen (2020)). We choose to keep the current paper concise with the three-layer case being a prototypical setup that conveys several of the basic ideas. Complete proofs are presented in appendices. Notations. K denotes a generic constant that may change from line to line. |·| denotes the absolute value for a scalar and the Euclidean norm for a vector. For an integer n, we let [n] = {1, ..., n}. 2 THREE-LAYER NEURAL NETWORKS AND THE MEAN FIELD LIMIT 2.1 THREE-LAYER NEURAL NETWORK We consider the following three-layer network at time k ∈ N≥0 that takes as input x ∈ Rd: ŷ (x;W (k)) = ϕ3 (H3 (x;W (k))) , (1) H3 (x;W (k)) = 1 n2 n2∑ j2=1 w3 (k, j2)ϕ2 (H2 (x, j2;W (k))) , H2 (x, j2;W (k)) = 1 n1 n1∑ j1=1 w2 (k, j1, j2)ϕ1 (〈w1 (k, j1) , x〉) , in which W (k) = (w1 (k, ·) ,w2 (k, ·, ·) ,w3 (k, ·)) consists of the weights2 w1 (k, j1) ∈ Rd, w2 (k, j1, j2) ∈ R and w3 (k, j2) ∈ R. Here ϕ1 : R → R, ϕ2 : R → R and ϕ3 : R → R are the activation functions, and the network has widths {n1, n2}. We train the network with SGD w.r.t. the loss L : R × R → R≥0. We assume that at each time k, we draw independently a fresh sample z (k) = (x (k) , y (k)) ∈ Rd ×R from a training distribution P . Given an initialization W (0), we update W (k) according to w3 (k + 1, j2) = w3 (k, j2)− ξ3 (k ) Grad3 (z (k) , j2;W (k)) , w2 (k + 1, j1, j2) = w2 (k, j1, j2)− ξ2 (k ) Grad2 (z (k) , j1, j2;W (k)) , w1 (k + 1, j1) = w1 (k, j1)− ξ1 (k ) Grad1 (z (k) , j1;W (k)) , in which j1 = 1, ..., n1, j2 = 1, ..., n2, ∈ R>0 is the learning rate, ξi : R≥0 7→ R≥0 is the learning rate schedule for wi, and for z = (x, y), we define Grad3 (z, j2;W (k)) = ∂2L (y, ŷ (x;W (k)))ϕ′3 (H3 (x;W (k)))ϕ2 (H2 (x, j2;W (k))) , Grad2 (z, j1, j2;W (k)) = ∆ H 2 (z, j2;W (k))ϕ1 (〈w1 (k, j1) , x〉) , Grad1 (z, j1;W (k)) = ( 1 n2 n2∑ j2=1 ∆H2 (z, j2;W (k))w2 (k, j1, j2) ) ϕ′1 (〈w1 (k, j1) , x〉)x, ∆H2 (z, j2;W (k)) = ∂2L (y, ŷ (x;W (k)))ϕ′3 (H3 (x;W (k)))w3 (k, j2)ϕ′2 (H2 (x, j2;W (k))) . We note that this setup follows the same scaling w.r.t. n1 and n2, which is applied to both the forward pass and the learning rates in the backward pass, as Nguyen (2019). 2.2 MEAN FIELD LIMIT The MF limit is a continuous-time infinite-width analog of the neural network under training. To describe it, we first introduce the concept of a neuronal ensemble. Given a product probability space (Ω,F , P ) = (Ω1 × Ω2,F1 ×F1, P1 × P2), we independently sample Ci ∼ Pi, i = 1, 2. In the following, we use ECi to denote the expectation w.r.t. the random variable Ci ∼ Pi and ci to denote an arbitrary point ci ∈ Ωi. The space (Ω,F , P ) is referred to as a neuronal ensemble. Given a neuronal ensemble (Ω,F , P ), the MF limit is described by a time-evolving system with state/parameter W (t), where the time t ∈ R≥0 and W (t) = (w1 (t, ·) , w2 (t, ·, ·) , w3 (t, ·)) with w1 : R≥0 × Ω1 → Rd, w2 : R≥0 × Ω1 × Ω2 → R and w3 : R≥0 × Ω2 → R. It entails the quantities: ŷ (x;W (t)) = ϕ3 (H3 (x;W (t))) , H3 (x;W (t)) = EC2 [w3 (t, C2)ϕ2 (H2 (x,C2;W (t)))] , H2 (x, c2;W (t)) = EC1 [w2 (t, C1, c2)ϕ1 (〈w1 (t, C1) , x〉)] . Here for each t ∈ R≥0, w1 (t, ·) is (Ω1,F1)-measurable, and similar for w2 (t, ·, ·), w3 (t, ·). 
The MF limit evolves according to a continuous-time dynamics, described by a system of ODEs, which we refer to as the MF ODEs. Specifically, given an initialization W (0) = (w1 (0, ·) , w2 (0, ·, ·) , w3 (0, ·)), the dynamics solves: ∂tw3 (t, c2) = −ξ3 (t) ∆3 (c2;W (t)) , ∂tw2 (t, c1, c2) = −ξ2 (t) ∆2 (c1, c2;W (t)) , ∂tw1 (t, c1) = −ξ1 (t) ∆1 (c1;W (t)) . Here c1 ∈ Ω1, c2 ∈ Ω2, EZ denotes the expectation w.r.t. the data Z = (X,Y ) ∼ P , and for z = (x, y), we define ∆3 (c2;W (t)) = EZ [∂2L (Y, ŷ (X;W (t)))ϕ′3 (H3 (X;W (t)))ϕ2 (H2 (X, c2;W (t)))] , 2To absorb first layer’s bias term to w1, we assume the input x to have 1 appended to the last entry. ∆2 (c1, c2;W (t)) = EZ [ ∆H2 (Z, c2;W (t))ϕ1 (〈w1 (t, c1) , X〉) ] , ∆1 (c1;W (t)) = EZ [ EC2 [ ∆H2 (Z,C2;W (t))w2 (t, c1, C2) ] ϕ′1 (〈w1 (t, c1) , X〉)X ] , ∆H2 (z, c2;W (t)) = ∂2L (y, ŷ (x;W (t)))ϕ′3 (H3 (x;W (t)))w3 (t, c2)ϕ′2 (H2 (x, c2;W (t))) . In Appendix B, we show well-posedness of MF ODEs under the following regularity conditions. Assumption 1 (Regularity). We assume that ϕ1 and ϕ2 are K-bounded, ϕ′1, ϕ′2 and ϕ′3 are Kbounded and K-Lipschitz, ϕ′2 and ϕ ′ 3 are non-zero everywhere, ∂2L (·, ·) is K-Lipschitz in the second variable and K-bounded, and |X| ≤ K with probability 1. Furthermore ξ1, ξ2 and ξ3 are K-bounded and K-Lipschitz. Theorem 1. Under Assumption 1, given any neuronal ensemble and an initialization W (0) such that3 ess-sup |w2 (0, C1, C2)| , ess-sup |w3 (0, C2)| ≤ K, there exists a unique solution W to the MF ODEs on t ∈ [0,∞). An example of a suitable setup is ϕ1 = ϕ2 = tanh, ϕ3 is the identity, L is the Huber loss, although a non-convex sufficiently smooth loss function suffices. In fact, all of our developments can be easily modified to treat the squared loss with an additional assumption |Y | ≤ K with probability 1. So far, given an arbitrary neuronal ensemble (Ω,F , P ), for each initialization W (0), we have defined a MF limit W (t). The connection with the neural network’s dynamics W (k) is established in the next section. 3 CONNECTION BETWEEN NEURAL NETWORK AND ITS MEAN FIELD LIMIT 3.1 NEURONAL EMBEDDING AND THE COUPLING PROCEDURE To formalize a connection between the neural network and its MF limit, we consider their initializations. In practical scenarios, to set the initial parameters W (0) of the neural network, one typically randomizes W (0) according to some distributional law ρ. We note that since the neural network is defined w.r.t. a set of finite integers {n1, n2}, so is ρ. We consider a family Init of initialization laws, each of which is indexed by the set {n1, n2}: Init = {ρ : ρ is the initialization law of a neural network of size {n1, n2} , n1, n2 ∈ N>0}. This is helpful when one is to take a limit that sends n1, n2 → ∞, in which case the size of this family |Init| is infinite. More generally we allow |Init| <∞ (for example, Init contains a single law ρ of a network of size {n1, n2} and hence |Init| = 1). We make the following crucial definition. Definition 2. Given a family of initialization laws Init, we call (Ω,F , P, { w0i } i=1,2,3 ) a neuronal embedding of Init if the following holds: 1. (Ω,F , P ) = (Ω1 × Ω2,F1 ×F2, P1 × P2) a product measurable space. As a reminder, we call it a neuronal ensemble. 2. The deterministic functions w01 : Ω1 → Rd, w02 : Ω1 × Ω2 → R and w03 : Ω2 → R are such that, for each index {n1, n2} of Init and the law ρ of this index, if — with an abuse of notations — we independently sample {Ci (ji)}ji∈[ni] ∼ Pi i.i.d. 
for each i = 1, 2, then Law ( w01 (C1 (j1)) , w 0 2 (C1(j1), C2 (j2)) , w 0 3 (C2(j2)) , ji ∈ [ni] , i = 1, 2 ) = ρ. To proceed, given Init and {n1, n2} in its index set, we perform the following coupling procedure: 1. Let (Ω,F , P, { w0i } i=1,2,3 ) be a neuronal embedding of Init. 2. We form the MF limit W (t) (for t ∈ R≥0) associated with the neuronal ensemble (Ω,F , P ) by setting the initialization W (0) to w1 (0, ·) = w01 (·), w2 (0, ·, ·) = w02 (·, ·) and w3 (0, ·) = w03 (·) and running the MF ODEs described in Section 2.2. 3We recall the definition of ess-sup in Appendix A. 3. We independently sample Ci (ji) ∼ Pi for i = 1, 2 and ji = 1, ..., ni. We then form the neural network initialization W (0) with w1 (0, j1) = w01 (C1 (j1)), w2 (0, j1, j2) = w02 (C1 (j1) , C2 (j2)) and w3 (0, j2) = w 0 3 (C2 (j2)) for j1 ∈ [n1], j2 ∈ [n2]. We obtain the network’s trajectory W (k) for k ∈ N≥0 as in Section 2.1, with the data z (k) generated independently of {Ci (ji)}i=1,2 and hence W (0). We can then define a measure of closeness between W (bt/ c) and W (t) for t ∈ [0, T ]: DT (W,W) = sup { |w1 (bt/ c , j1)− w1 (t, C1 (j1))| , |w2 (bt/ c , j1, j2)− w2 (t, C1 (j1) , C2 (j2))| , |w3 (bt/ c , j2)− w3 (t, C2 (j2))| : t ≤ T, j1 ≤ n1, j2 ≤ n2 } . (2) Note that W (t) is a deterministic trajectory independent of {n1, n2}, whereas W (k) is random for all k ∈ N≥0 due to the randomness of {Ci (ji)}i=1,2 and the generation of the training data z (k). Similarly DT (W,W) is a random quantity. The idea of the coupling procedure is closely related to the coupling argument in Sznitman (1991); Mei et al. (2018). Here, instead of playing the role of a proof technique, the coupling serves as a vehicle to establish the connection betweenW and W on the basis of the neuronal embedding. This connection is shown in Theorem 3 below, which gives an upper bound on DT (W,W). We note that the coupling procedure can be carried out to provide a connection between W and W as long as there exists a neuronal embedding for Init. Later in Section 4.1, we show that for a common initialization scheme (in particular, i.i.d. initialization) for Init, there exists a neuronal embedding. Theorem 3 applies to, but is not restricted to, this initialization scheme. 3.2 MAIN RESULT: APPROXIMATION BY THE MF LIMIT Assumption 2 (Initialization of second and third layers). We assume that ess-sup ∣∣w02 (C1, C2)∣∣, ess-sup ∣∣w03 (C2)∣∣ ≤ K, where w02 and w03 are as described in Definition 2. Theorem 3. Given a family Init of initialization laws and a tuple {n1, n2} that is in the index set of Init, perform the coupling procedure as described in Section 3.1. Fix a terminal time T ∈ N≥0. Under Assumptions 1 and 2, for ≤ 1, we have with probability at least 1− 2δ, DT (W,W) ≤ eKT ( 1 √ nmin + √ ) log1/2 ( 3 (T + 1)n2max δ + e ) ≡ errδ,T ( , n1, n2) , in which nmin = min {n1, n2}, nmax = max {n1, n2}, and KT = K ( 1 + TK ) . The theorem gives a connection between W (bt/ c), which is defined upon finite widths n1 and n2, and the MF limit W (t), whose description is independent of n1 and n2. It lends a way to extract properties of the neural network in the large-width regime. Corollary 4. Under the same setting as Theorem 3, consider any test function ψ : R × R → R which is K-Lipschitz in the second variable uniformly in the first variable (an example of ψ is the loss L). For any δ > 0, with probability at least 1− 3δ, sup t≤T |EZ [ψ (Y, ŷ (X;W (bt/ c)))]− EZ [ψ (Y, ŷ (X;W (t)))]| ≤ eKT errδ,T ( , n1, n2) . 
These bounds hold for any n1 and n2, similar to Mei et al. (2018); Araújo et al. (2019), in contrast with non-quantitative results in Chizat & Bach (2018); Sirignano & Spiliopoulos (2019). These bounds suggest that n1 and n2 can be chosen independent of the data dimension d. This agrees with the experiments in Nguyen (2019), which found width ≈ 1000 to be typically sufficient to observe MF behaviors in networks trained with real-life high-dimensional data. We observe that the MF trajectory W (t) is defined as per the choice of the neuronal embedding (Ω,F , P, { w0i } i=1,2,3 ), which may not be unique. On the other hand, the neural network’s trajectory W (k) depends on the randomization of the initial parameters W (0) according to an initialization law from the family Init (as well as the data z (k)) and hence is independent of this choice. Another corollary of Theorem 3 is that given the same family Init, the law of the MF trajectory is insensitive to the choice of the neuronal embedding of Init. Corollary 5. Consider a family Init of initialization laws, indexed by a set of tuples {m1,m2} that contains a sequence of indices {m1 (m) ,m2 (m) : m ∈ N} in which as m → ∞, min {m1 (m) ,m2 (m)}−1 log (max {m1 (m) ,m2 (m)}) → 0. Let W (t) and Ŵ (t) be two MF trajectories associated with two choices of neuronal embeddings of Init, (Ω,F , P, { w0i } i=1,2,3 ) and (Ω̂, F̂ , P̂ , { ŵ0i } i=1,2,3 ). The following statement holds for any T ≥ 0 and any two positive integers n1 and n2: if we independently sample Ci (ji) ∼ Pi and Ĉi (ji) ∼ P̂i for ji ∈ [ni], i = 1, 2, then Law (W (n1, n2, T )) = Law(Ŵ (n1, n2, T )), where we define W (n1, n2, T ) as the below collection w.r.t. W (t), and similarly define Ŵ (n1, n2, T ) w.r.t. Ŵ (t): W (n1, n2, T ) = { w1 (t, C1 (j1)) , w2 (t, C1 (j1) , C2 (j2)) , w3 (t, C2 (j2)) : j1 ∈ [n1] , j2 ∈ [n2] , t ∈ [0, T ] } . The proofs are deferred to Appendix C. 4 CONVERGENCE TO GLOBAL OPTIMA In this section, we prove a global convergence guarantee for three-layer neural networks via the MF limit. We consider a common class of initialization: i.i.d. initialization. 4.1 I.I.D. INITIALIZATION Definition 6. An initialization law ρ for a neural network of size {n1, n2} is called ( ρ1, ρ2, ρ3 ) - i.i.d. initialization (or i.i.d. initialization, for brevity), where ρ1, ρ2 and ρ3 are probability measures over Rd, R and R respectively, if {w1 (0, j1)}j1∈[n1] are generated i.i.d. according to ρ 1, {w2 (0, j1, j2)}j1∈[n1], j2∈[n2] are generated i.i.d. according to ρ 2 and {w3 (0, j2)}j2∈[n2] are generated i.i.d. according to ρ3, and w1, w2 and w3 are independent. Observe that given ( ρ1, ρ2, ρ3 ) , one can build a family Init of i.i.d. initialization laws that contains any index set {n1, n2}. Furthermore i.i.d. initializations are supported by our framework, as stated in the following proposition and proven in Appendix D. Proposition 7. There exists a neuronal embedding ( Ω,F , P, { w0i } i=1,2,3 ) for any family Init of initialization laws, which are ( ρ1, ρ2, ρ3 ) -i.i.d. 4.2 MAIN RESULT: GLOBAL CONVERGENCE To measure the learning quality, we consider the loss averaged over the data Z ∼ P: L (V ) = EZ [L (Y, ŷ (X;V ))] , where V = (v1, v2, v3) is a set of three measurable functions v1 : Ω1 → Rd, v2 : Ω1 × Ω2 → R, v3 : Ω2 → R. Assumption 3. Consider a neuronal embedding ( Ω,F , P, { w0i } i=1,2,3 ) of the ( ρ1, ρ2, ρ3 ) -i.i.d. initialization, and the associated MF limit with initialization W (0) such that w1 (0, ·) = w01 (·), w2 (0, ·, ·) = w02 (·, ·) and w3 (0, ·) = w03 (·). 
Assume: 1. Support: The support of ρ1 is Rd. 2. Convergence mode: There exist limits w̄1, w̄2 and w̄3 such that as t→∞, E [(1 + |w̄3(C2)|) |w̄3(C2)| |w̄2(C1, C2)| |w1(t, C1)− w̄1(C1)|]→ 0, (3) E [(1 + |w̄3(C2)|) |w̄3(C2)| |w2(t, C1, C2)− w̄2(C1, C2)|]→ 0, (4) E [(1 + |w̄3(C2)|) |w3(t, C2)− w̄3(C2)|]→ 0, (5) ess-supEC2 [|∂tw2 (t, C1, C2)|]→ 0. (6) 3. Universal approximation: { ϕ1 (〈u, ·〉) : u ∈ Rd } has dense span in L2 (PX) (the space of square integrable functions w.r.t. PX the distribution of the input X). Assumption 3 is inspired by the work Chizat & Bach (2018) on two-layer networks, with certain differences. Assumptions 3.1 and 3.3 are natural in neural network learning (Cybenko (1989); Chen & Chen (1995)), while we note Chizat & Bach (2018) does not utilize universal approximation. Similar to Chizat & Bach (2018), Assumption 3.2 is technical and does not seem removable. Note that this assumption specifies the mode of convergence and is not an assumption on the limits w̄1, w̄2 and w̄3. Specifically conditions (3)-(5) are similar to the convergence assumption in Chizat & Bach (2018). We differ from Chizat & Bach (2018) fundamentally in the essential supremum condition (6). On one hand, this condition helps avoid the Morse-Sard type condition in Chizat & Bach (2018), which is difficult to verify in general and not simple to generalize to the three-layer case. On the other hand, it turns out to be a natural assumption to make, in light of Remark 9 below. We now state the main result of the section. The proof is in Appendix D. Theorem 8. Consider a neuronal embedding ( Ω,F , P, { w0i } i=1,2,3 ) of ( ρ1, ρ2, ρ3 ) -i.i.d. initialization. Consider the MF limit corresponding to the network (1), such that they are coupled together by the coupling procedure in Section 3.1, under Assumptions 1, 2 and 3. For simplicity, assume ξ1 (·) = ξ2 (·) = 1. Further assume either: • (untrained third layer) ξ3 (·) = 0 and w03 (C2) 6= 0 with a positive probability, or • (trained third layer) ξ3 (·) = 1 and L ( w01, w 0 2, w 0 3 ) < EZ [L (Y, ϕ3 (0))]. Then the following hold: • Case 1 (convex loss): If L is convex in the second variable, then lim t→∞ L (W (t)) = inf V L (V ) = inf ỹ: Rd→R EZ [L (Y, ỹ (X))] . • Case 2 (generic non-negative loss): Suppose that ∂2L (y, ŷ) = 0 implies L (y, ŷ) = 0. If y = y(x) is a function of x, then L (W (t))→ 0 as t→∞. Remarkably here the theorem allows for non-convex losses. A further inspection of the proof shows that no convexity-based property is used in Case 2 (see, for instance, the high-level proof sketch in Section 4.3); in Case 1, the key steps in the proof are the same, and the convexity of the loss function serves as a convenient technical assumption to handle the arbitrary extra randomness of Y conditional on X . We also remark that the same proof of global convergence should extend beyond the specific fully-connected architecture considered here. Similar to previous results on SGD-trained two-layer networks Mei et al. (2018); Chizat & Bach (2018), our current result in the three-layer case is non-quantitative. Remark 9. Interestingly there is a converse relation between global convergence and the essential supremum condition (6): under the same setting, global convergence is unattainable if condition (6) does not hold. A similar observation was made in Wojtowytsch (2020) for two-layer ReLU networks. A precise statement and its proof can be found in Appendix E. 
The following result is straightforward from Theorem 8 and Corollary 4, establishing the optimization efficiency of the neural network with SGD. Corollary 10. Consider the neural network (1). Under the same setting as Theorem 8, in Case 1, lim t→∞ lim n1,n2 lim →0 EZ [L (Y, ŷ (X;W (bt/ c)))] = inf f1,f2,f3 L (f1, f2, f3) = inf ỹ EZ [L (Y, ỹ (X))] in probability, where the limit of the widths is such that min {n1, n2}−1 log (max {n1, n2}) → 0. In Case 2, the same holds with the right-hand side being 0. 4.3 HIGH-LEVEL IDEA OF THE PROOF We give a high-level discussion of the proof. This is meant to provide intuitions and explain the technical crux, so our discussion may simplify and deviate from the actual proof. Our first insight is to look at the second layer’s weight w2. At convergence time t = ∞, we expect to have zero movement and hence, denoting W (∞) = (w̄1, w̄2, w̄3): ∆2 (c1, c2;W (∞)) = EZ [ ∆H2 (Z, c2;W (∞))ϕ1 (〈w̄1 (c1) , X〉) ] = 0, for P -almost every c1, c2. Suppose for the moment that we are allowed to make an additional (strong) assumption on the limit w̄1: supp (w̄1 (C1)) = Rd. It implies that the universal approximation property, described in Assumption 3, holds at t = ∞; more specifically, it implies {ϕ1 (〈w̄1 (c1) , ·〉) : c1 ∈ Ω1} has dense span in L2 (PX). This thus yields EZ [ ∆H2 (Z, c2;W (∞)) ∣∣X = x] = 0, for P-almost every x. Recalling the definition of ∆H2 , one can then easily show that EZ [∂2L (Y, ŷ (X;W (∞)))|X = x] = 0. Global convergence follows immediately; for example, in Case 2 of Theorem 8, this is equivalent to that ∂2L (y (x) , ŷ (x;W (∞))) = 0 and hence L (y (x) , ŷ (x;W (∞))) = 0 for P-almost every x. In short, the gradient flow structure of the dynamics of w2 provides a seamless way to obtain global convergence. Furthermore there is no critical reliance on convexity. However this plan of attack has a potential flaw in the strong assumption that supp (w̄1 (C1)) = Rd, i.e. the universal approximation property holds at convergence time. Indeed there are setups where it is desirable that supp (w̄1 (C1)) 6= Rd (Mei et al. (2018); Chizat (2019)); for instance, it is the case where the neural network is to learn some “sparse and spiky” solution, and hence the weight distribution at convergence time, if successfully trained, cannot have full support. On the other hand, one can entirely expect that if supp (w1 (0, C1)) = Rd initially at t = 0, then supp (w1 (t, C1)) = Rd at any finite t ≥ 0. The crux of our proof is to show the latter without assuming supp (w̄1 (C1)) = Rd. This task is the more major technical step of the proof. To that end, we first show that there exists a mapping (t, u) 7→ M (t, u) that maps from (t, w1 (0, c1)) = (t, u) to w1 (t, c1) via a careful measurability argument. This argument rests on a scheme that exploits the symmetry in the network evolution. Furthermore the map M is shown to be continuous. The desired conclusion then follows from an algebraic topology argument that the map M preserves a homotopic structure through time. 5 DISCUSSION The MF literature is fairly recent. A long line of works (Nitanda & Suzuki (2017); Mei et al. (2018); Chizat & Bach (2018); Rotskoff & Vanden-Eijnden (2018); Sirignano & Spiliopoulos (2018); Wei et al. (2019); Javanmard et al. (2019); Mei et al. (2019); Shevchenko & Mondelli (2019); Wojtowytsch (2020)) have focused mainly on two-layer neural networks, taking an interacting particle system approach to describe the MF limiting dynamics as Wasserstein gradient flows. 
The three works Nguyen (2019); Araújo et al. (2019); Sirignano & Spiliopoulos (2019) independently develop different formulations for the MF limit in multilayer neural networks, under different assumptions. These works take perspectives that are different from ours. In particular, while the central object in Nguyen (2019) is a new abstract representation of each individual neuron, our neuronal embedding idea instead takes a keen view on a whole ensemble of neurons. Likewise our idea is also distant from Araújo et al. (2019); Sirignano & Spiliopoulos (2019): the central objects in Araújo et al. (2019) are paths over the weights across layers; those in Sirignano & Spiliopoulos (2019) are time-dependent functions of the initialization, which are simplified upon i.i.d. initializations. The result of our perspective is a neuronal embedding framework that allows one to describe the MF limit in a clean and rigorous manner. In particular, it avoids extra assumptions made in Araújo et al. (2019); Sirignano & Spiliopoulos (2019): unlike our work, Araújo et al. (2019) assumes untrained first and last layers and requires non-trivial technical tools; Sirignano & Spiliopoulos (2019) takes an unnatural sequential limit n1 → ∞ before n2 → ∞ and proves a non-quantitative result, unlike Theorem 3 which only requires sufficiently large min {n1, n2}. We note that Theorem 3 can be extended to general multilayer networks using the neuronal embedding idea. The advantages of our framework come from the fact that while MF formulations in Araújo et al. (2019); Sirignano & Spiliopoulos (2019) are specific to and exploit i.i.d. initializations, our formulation does not. Remarkably as shown in Araújo et al. (2019), when there are more than three layers and no biases, i.i.d. initializations lead to a certain simplifying effect on the MF limit. On the other hand, our framework supports non-i.i.d. initializations which avoid the simplifying effect, as long as there exist suitable neuronal embeddings (Nguyen & Pham (2020)). Although our global convergence result in Theorem 8 is proven in the context of i.i.d. initializations for three-layer networks, in the general multilayer case, it turns out that the use of a special type of non-i.i.d. initialization allows one to prove a global convergence guarantee (Pham & Nguyen (2020)). In this aspect, our framework follows closely the spirit of the work Nguyen (2019), whose MF formulation is also not specific to i.i.d. initializations. Yet though similar in the spirit, Nguyen (2019) develops a heuristic formalism and does not prove global convergence. Global convergence in the two-layer case with convex losses has enjoyed multiple efforts with a lot of new and interesting results (Mei et al. (2018); Chizat & Bach (2018); Javanmard et al. (2019); Rotskoff et al. (2019); Wei et al. (2019)). Our work is the first to establish a global convergence guarantee for SGD-trained three-layer networks in the MF regime. Our proof sends a new message that the crucial factor is not necessarily convexity, but rather that the whole learning trajectory maintains the universal approximation property of the function class represented by the first layer’s neurons, together with the gradient flow structure of the second layer’s weights. As a remark, our approach can also be applied to prove a similar global convergence guarantee for two-layer networks, removing the convex loss assumption in previous works (Nguyen & Pham (2020)). The recent work Lu et al. 
(2020) on a MF resnet model (a composition of many two-layer MF networks) and a recent update of Sirignano & Spiliopoulos (2019) essentially establish conditions of stationary points to be global optima. They however require strong assumptions on the support of the limit point. As explained in Section 4.3, we analyze the training dynamics without such assumption and in fact allow it to be violated. Our global convergence result is non-quantitative. An important, highly challenging future direction is to develop a quantitative version of global convergence; previous works on two-layer networks Javanmard et al. (2019); Wei et al. (2019); Rotskoff et al. (2019); Chizat (2019) have done so under sophisticated modifications of the architecture and training algorithms. Finally we remark that our insights here can be applied to prove similar global convergence guarantees and derive other sufficient conditions for global convergence of two-layer or multilayer networks (Nguyen & Pham (2020); Pham & Nguyen (2020)). ACKNOWLEDGEMENT H. T. Pham would like to thank Jan Vondrak for many helpful discussions and in particular for the shorter proof of Lemma 19. We would like to thank Andrea Montanari for the succinct description of the difficulty in extending the mean field formulation to the multilayer case, in that there are multiple symmetry group actions in a multilayer network. A NOTATIONAL PRELIMINARIES For a real-valued random variable Z defined on a probability space (Ω,F , P ), we recall ess-supZ = inf {z ∈ R : P (Z > z) = 0} . We also introduce some convenient definitions which we use throughout the appendices. For a set of neural network’s parameter W, we define 9W9T = max { max j1≤n1, j2≤n2 sup t≤T |w2 (bt/ c , j1, j2)| , max j2≤n2 sup t≤T |w3 (bt/ c , j2)| } . Similarly for a set of MF parameters W , we define: 9W9T = max { ess-sup sup t≤T |w2 (t, C1, C2)| , ess-sup sup t≤T |w3 (t, C2)| } . For two sets of neural network’s parameters W′,W′′, we define their distance: ‖W′ −W′′‖T = sup { |w′1 (bt/ c , j1)−w′′1 (bt/ c , j1)| , |w′2 (bt/ c , j1, j2)−w′′2 (bt/ c , j1, j2)| , |w′3 (bt/ c , j2)−w′′3 (bt/ c , j2)| : t ∈ [0, T ] , j1 ∈ [n1] , j2 ∈ [n2] } . Similarly for two sets of MF parameters W ′,W ′′, we define their distance: ‖W ′ −W ′′‖T = ess-sup sup t∈[0,T ] { |w′1 (t, C1)− w′′1 (t, C1)| , |w′2 (t, C1, C2)− w′′2 (t, C1, C2)| , |w′3 (t, C2)− w′′3 (t, C2)| } . B EXISTENCE AND UNIQUENESS OF THE SOLUTION TO MF ODES We first collect some a priori estimates. Lemma 11. Under Assumption 1, consider a solution W to the MF ODEs with initialization W (0) such that 9W90 < ∞. If this solution exists, it satisfies the following a priori bounds, for any T ≥ 0: ess-sup sup t≤T |w3 (t, C2)| ≤ 9W 90 +KT ≡ 9W 90 +K0,3 (T ) , ess-sup sup t≤T |w2 (t, C1, C2)| ≤ 9W 90 +KTK0,3 (T ) ≡ 9W 90 +K0,2 (T ) , and consequently, 9W9T ≤ 1 + max {K0,2 (T ) , K0,3 (T )} . Proof. The bounds can be obtained easily by bounding the respective initializations and update quantities separately. In particular, ess-sup sup t≤T |w3 (t, C2)| ≤ ess-sup |w3 (0, C2)|+ T ess-sup sup t≤T ∣∣∣∣ ∂∂tw3 (t, C2) ∣∣∣∣ ≤ 9W 90 +KT, ess-sup sup t≤T |w2 (t, C1, C2)| ≤ ess-sup |w2 (0, C1, C2)|+ T ess-sup sup t≤T ∣∣∣∣ ∂∂tw2 (t, C1, C2) ∣∣∣∣ ≤ ess-sup |w2 (0, C1, C2)|+KT ess-sup sup t≤T |w3 (t, C2)| ≤ 9W 90 +KTK0,3 (T ) . 
Inspired by the a priori bounds in Lemma 11, given an arbitrary terminal time T and the initialization W (0), let us consider: • for a tuple (a, b) ∈ R2≥0, a space WT (a, b) of W ′ = (W ′ (t))t≤T = (w′1 (t, ·) , w′2 (t, ·, ·) , w′3 (t, ·))t≤T such that ess-sup sup t≤T |w′3 (t, C2)| ≤ b, ess-sup sup t≤T |w′2 (t, C1, C2)| ≤ a, where w′1 : R≥0 × Ω1 → Rd, w′2 : R≥0 × Ω1 × Ω2 7→ R, w′3 : R≥0 × Ω3 7→ R, • for a tuple (a, b) ∈ R2≥0 and W (0), a space W + T (a, b,W (0)) of W ′ ∈ WT (a, b) such that W ′ (0) = W (0) additionally (and hence every W ′ in this space shares the same initialization W (0)). We equip the spaces with the metric ‖W ′ −W ′′‖T . It is easy to see that both spaces are complete. Note that Lemma 11 implies, under Assumption 1 and 9W90 < ∞, we have any MF solution W , if exists, is inWT (9W 90 +K0,2 (T ) ,9W 90 +K0,3 (T )). For the proof of Theorem 1, we work mainly with W+T (9W 90 +K0,2 (T ) ,9W 90 +K0,3 (T ) ,W (0)), although several intermediate lemmas are proven in more generality for other uses. Lemma 12. Under Assumption 1, for T ≥ 0, any W ′,W ′′ ∈ WT (a, b) and almost every z ∼ P: ess-sup sup t≤T ∣∣∆H2 (z, C2;W ′ (t))∣∣ ≤ Ka,b, ess-sup sup t≤T |H2 (x,C2;W ′ (t))−H2 (x,C2;W ′′ (t))| ≤ Ka,b ‖W ′ −W ′′‖T , sup t≤T |H3 (x;W ′ (t))−H3 (x;W ′′ (t))| ≤ Ka,b ‖W ′ −W ′′‖T , sup t≤T |∂2L (y, ŷ (x;W ′ (t)))− ∂2L (y, ŷ (x;W ′′ (t)))| ≤ Ka,b ‖W ′ −W ′′‖T , ess-sup sup t≤T ∣∣∆H2 (z, C2;W ′ (t))−∆H2 (z, C2;W ′′ (t))∣∣ ≤ Ka,b ‖W ′ −W ′′‖T , where Ka,b ≥ 1 is a generic constant that grows polynomially with a and b. Proof. The first bound is easy to see: ess-sup sup t≤T ∣∣∆H2 (z, C2;W ′ (t))∣∣ ≤ ess-sup sup t≤T |w′3 (t, C2)| ≤ b. We prove the second bound, invoking Assumption 1: |H2 (x,C2;W ′ (t))−H2 (x,C2;W ′′ (t))| ≤ K |w′2 (t, C1, C2)| |ϕ1 (〈w′1 (t, C1) , x〉)− ϕ1 (〈w′′1 (t, C1) , x〉)| +K |w′2 (t, C1, C2)− w′′2 (t, C1, C2)| ≤ K (|w′2 (t, C1, C2)|+ 1) ‖W ′ −W ′′‖T , which yields by the fact W ′ ∈ WT (a, b): ess-sup sup t≤T |H2 (x,C2;W ′ (t))−H2 (x,C2;W ′′ (t))| ≤ K (a+ 1) ‖W ′ −W ′′‖T . Consequently, we have: |H3 (x;W ′ (t))−H3 (x;W ′′ (t))| ≤ K |w′3 (t, C2)| |ϕ2 (H2 (x,C2;W ′ (t)))− ϕ2 (H2 (x,C2;W ′′ (t)))| +K |w′3 (t, C2)− w′′3 (t, C2)| ≤ K |w′3 (t, C2)| |H2 (x,C2;W ′ (t))−H2 (x,C2;W ′′ (t))| +K ‖W ′ −W ′′‖T , |∂2L (y, ŷ (x;W ′ (t)))− ∂2L (y, ŷ (x;W ′′ (t)))| ≤ K |ŷ (x;W ′ (t))− ŷ (x;W ′′ (t))| ≤ K |H3 (x;W ′ (t))−H3 (x;W ′′ (t))| , which then yield the third and fourth bounds by the fact W ′,W ′′ ∈ WT (a, b). Using these bounds, we obtain the last bound:∣∣∆H2 (z, C2;W ′ (t))−∆H2 (z, C2;W ′′ (t))∣∣ ≤ K |w′3 (t, C2)| ( |∂2L (y, ŷ (x;W ′ (t)))− ∂2L (y, ŷ (x;W ′′ (t)))| + |H3 (x;W ′ (t))−H3 (x;W ′′ (t))|+ |H2 (x,C2;W ′ (t))−H2 (x,C2;W ′′ (t))| ) +K |w′3 (t, C2)− w′′3 (t, C2)| , from which the last bound follows. To prove Theorem 1, for a given W (0), we define a mapping FW (0) that maps from W ′ = (w′1, w ′ 2, w ′ 3) ∈ WT (a, b) to FW (0) (W ′) = W̄ ′ = (w̄′1, w̄′2, w̄′3), defined by W̄ ′ (0) = W (0) and ∂ ∂t w̄′3 (t, c2) = −ξ3 (t) ∆3 (c2;W ′ (t)) , ∂ ∂t w̄′2 (t, c1, c2) = −ξ2 (t) ∆2 (c1, c2;W ′ (t)) , ∂ ∂t w̄′1 (t, c1) = −ξ1 (t) ∆1 (c1;W ′ (t)) . Notice that the right-hand sides do not involve W̄ ′. Note that the MF ODEs’ solution, initialized at W (0), is a fixed point of this mapping. We establish the following estimates for this mapping. Lemma 13. 
Under Assumption 1, for T ≥ 0, any initialization W (0) and any W ′,W ′′ ∈ WT (a, b), ess-sup sup s≤t |∆3 (C2;W ′ (s))−∆3 (C2;W ′′ (s))| ≤ Ka,b ‖W ′ −W ′′‖t , ess-sup sup s≤t |∆2 (C1, C2;W ′ (s))−∆2 (C1, C2;W ′′ (s))| ≤ Ka,b ‖W ′ −W ′′‖t , ess-sup sup s≤t |∆1 (C1;W ′ (s))−∆1 (C1;W ′′ (s))| ≤ Ka,b ‖W ′ −W ′′‖t , and consequently, if in addition W ′ (0) = W ′′ (0) (not necessarily equal W (0)), then ess-sup sup t≤T |w̄′3 (t, C2)− w̄′′3 (t, C2)| ≤ Ka,b ∫ T 0 ‖W ′ −W ′′‖s ds, ess-sup sup t≤T |w̄′2 (t, C1, C2)− w̄′′2 (t, C1, C2)| ≤ Ka,b ∫ T 0 ‖W ′ −W ′′‖s ds, ess-sup sup t≤T |w̄′1 (t, C1)− w̄′′1 (t, C1)| ≤ Ka,b ∫ T 0 ‖W ′ −W ′′‖s ds, in which W̄ ′ = (w̄′1, w̄ ′ 2, w̄ ′ 3) = FW (0) (W ′), W̄ ′′ = (w̄′′1 , w̄ ′′ 2 , w̄ ′′ 3 ) = FW (0) (W ′′) and Ka,b ≥ 1 is a generic constant that grows polynomially with a and b. Proof. From Assumption 1 and the fact W ′,W ′′ ∈ WT (a, b), we get: |∆3 (C2;W ′ (s))−∆3 (C2;W ′′ (s))| ≤ KEZ [|∂2L (Y, ŷ (X;W ′ (s)))− ∂2L (Y, ŷ (X;W ′′ (s)))|] +KEZ [|H3 (X;W ′ (s))−H3 (X;W ′′ (s))|] +KEZ [|H2 (X,C2;W ′ (s))−H2 (X,C2;W ′′ (s))|] , |∆2 (C1, C2;W ′ (s))−∆2 (C1, C2;W ′′ (s))| ≤ Ka,b |w′1 (s, C1)− w′′1 (s, C1)| +K ∣∣EZ [∆H2 (Z,C2;W ′ (s))−∆H2 (Z,C2;W ′′ (s))]∣∣ , |∆1 (C1;W ′ (s))−∆1 (C1;W ′′ (s))| ≤ Ka,bEZ [∣∣∆H2 (Z,C2;W ′ (s))−∆H2 (Z,C2;W ′′ (s))∣∣] +Ka,b |w′2 (s, C1, C2)− w′′2 (s, C1, C2)| +Ka,b |w′1 (s, C1)− w′′1 (s, C1)| , from which the first three estimates then follow, in light of Lemma 12. The last three estimates then follow from the fact that W̄ ′ (0) = W̄ ′′ (0) and Assumption 1; for instance, ess-sup sup t≤T |w̄′3 (t, C2)− w̄′′3 (t, C2)| ≤ ∫ T 0 ess-sup ∣∣∣∣ ∂∂tw̄′3 (s, C2)− ∂∂tw̄′′3 (s, C2) ∣∣∣∣ ds ≤ K ∫ T 0 ess-sup |∆3 (C2;W ′ (s))−∆3 (C2;W ′′ (s))| ds. We are now ready to prove Theorem 1. Proof of Theorem 1. We will use a Picard-type iteration. To lighten notations: W+T ≡ W + T (9W 90 +K0,2 (T ) ,9W 90 +K0,3 (T ) ,W (0)) , F ≡ FW (0). Since 9W90 ≤ K by assumption, we have 9W 90 +K0,2 (T ) +K0,3 (T ) ≤ KT . Recall thatW+T is complete. For an arbitrary T > 0, consider W ′,W ′′ ∈ W+T . Lemma 13 yields: ‖F (W ′)− F (W ′′)‖T ≤ KT ∫ T 0 ‖W ′ −W ′′‖s ds. Note that F maps toW+T under Assumption 1 by the same argument as Lemma 11. Hence we are allowed to iterating this inequality and get, for an arbitrary T > 0,∥∥∥F (k) (W ′)− F (k) (W ′′)∥∥∥ T ≤ KT ∫ T 0 ∥∥∥F (k−1) (W ′)− F (k−1) (W ′′)∥∥∥ T2 dT2 ≤ K2T ∫ T 0 ∫ T2 0 ∥∥∥F (k−2) (W ′)− F (k−2) (W ′′)∥∥∥ T3 I (T2 ≤ T ) dT3dT2 ... ≤ KkT ∫ T 0 ∫ T2 0 ... ∫ Tk 0 ‖W ′ −W ′′‖Tk+1 I (Tk ≤ ... ≤ T2 ≤ T ) dTk+1...dT2 ≤ 1 k! KkT ‖W ′ −W ′′‖T . By substituting W ′′ = F (W ′), we have: ∞∑ k=1 ∥∥∥F (k+1) (W ′)− F (k) (W ′)∥∥∥ T = ∞∑ k=1 ∥∥∥F (k) (W ′′)− F (k) (W ′)∥∥∥ T ≤ ∞∑ k=1 1 k! KkT ‖W ′ −W ′′‖T <∞. Hence as k → ∞, F (k) (W ′) converges to a limit inW+T , which is a fixed point of F . The uniqueness of a fixed point follows from the above estimate, since if W ′ and W ′′ are fixed points then ‖W ′ −W ′′‖T = ∥∥∥F (k) (W ′)− F (k) (W ′′)∥∥∥ T ≤ 1 k! KkT ‖W ′ −W ′′‖T , while one can take k arbitrarily large. This proves that the solution exists and is unique on t ∈ [0, T ]. Since T is arbitrary, we have existence and uniqueness of the solution on the time interval [0,∞). 
C CONNECTION BETWEEN THE NEURAL NET AND ITS MF LIMIT: PROOFS FOR SECTION 3 C.1 PROOF OF THEOREM 3 We construct an auxiliary trajectory, which we call the particle ODEs: ∂ ∂t w̃3 (t, j2) = −ξ3 (t)EZ [ ∂2L ( Y, ŷ ( X; W̃ (t) )) ϕ′3 ( H3 ( X; W̃ (t) )) ϕ2 ( H2 ( X, j2; W̃ (t) ))] , ∂ ∂t w̃2 (t, j1, j2) = −ξ2 (t)EZ [ ∆H2 ( Z, j2; W̃ (t) ) ϕ1 (〈w̃1 (t, j1) , X〉) ] , ∂ ∂t w̃1 (t, j1) = −ξ1 (t)EZ 1 n2 n2∑ j2=1 ∆H2 ( Z, j2; W̃ (t) ) w̃2 (t, j1, j2)ϕ ′ 1 (〈w̃1 (t, j1) , X〉)X , in which j1 = 1, ..., n1, j2 = 1, ..., n2, W̃ (t) = (w̃1 (t, ·) , w̃2 (t, ·, ·) , w̃3 (t, ·)), and t ∈ R≥0. We specify the initialization W̃ (0): w̃1 (0, j1) = w01 (C1 (j1)), w̃2 (0, j1, j2) = w 0 2 (C1 (j1) , C2 (j2)) and w̃3 (0, j3) = w03 (C2 (j2)). That is, it shares the same initialization with the neural network one W (0), and hence is coupled with the neural network and the MF ODEs. Roughly speaking, the particle ODEs are continuous-time trajectories of finitely many neurons, averaged over the data distribution. We note that W̃ (t) is random for all t ∈ R≥0 due to the randomness of {Ci (ji)}i=1,2. The existence and uniqueness of the solution to the particle ODEs follows from the same proof as in Theorem 1, which we shall not repeat here. We equip W̃ (t) with the norm 9W̃9T = max { max j1≤n1, j2≤n2 sup t≤T |w̃2 (t, j1, j2)| , max j2≤n2 sup t≤T |w̃3 (t, j2)| } . One can also define the measures DT ( W, W̃ ) and DT ( W̃ ,W ) similar to Eq. (2): DT ( W, W̃ ) = sup { |w1 (t, C1 (j1))− w̃1 (t, C1 (j1))| , |w2 (t, C1 (j1) , C2 (j2))− w̃2 (t, C1 (j1) , C2 (j2))| , |w3 (t, C2 (j2))− w̃3 (t, C2 (j2))| : t ≤ T, j1 ≤ n1, j2 ≤ n2 } , DT ( W̃ ,W ) = sup { |w1 (bt/ c , j1)− w̃1 (t, C1 (j1))| , |w2 (bt/ c , j1, j2)− w̃2 (t, C1 (j1) , C2 (j2))| , |w3 (bt/ c , j2)− w̃3 (t, C2 (j2))| : t ≤ T, j1 ≤ n1, j2 ≤ n2 } . We have the following results: Theorem 14. Under the same setting as Theorem 3, for any δ > 0, with probability at least 1− δ, DT ( W, W̃ ) ≤ 1√ nmin log1/2 ( 3 (T + 1)n2max δ + e ) eKT , in which nmin = min {n1, n2}, nmax = max {n1, n2}, and KT = K ( 1 + TK ) . Theorem 15. Under the same setting as Theorem 3, for any δ > 0 and ≤ 1, with probability at least 1− δ, DT ( W̃ ,W ) ≤ √ log ( 2n1n2 δ + e ) eKT , in which KT = K ( 1 + TK ) . Proof of Theorem 3. Using the fact DT (W,W) ≤ DT ( W, W̃ ) + DT ( W̃ ,W ) , the thesis is immediate from Theorems 14 and 15. C.2 PROOF OF THEOREMS 14 AND 15 Proof of Theorem 14. In the following, let Kt denote an generic positive constant that may change from line to line and takes the form Kt = K ( 1 + tK ) , such that Kt ≥ 1 and Kt ≤ KT for all t ≤ T . We first note that at initialization, D0 ( W, W̃ ) = 0. Since 9W90 ≤ K, 9W9T ≤ KT by Lemma 11. Furthermore it is easy to see that 9W̃90 ≤ 9W90 ≤ K almost surely. By the same argument as in Lemma 11, 9W̃9T ≤ KT almost surely. We shall use all above bounds repeatedly in the proof. We decompose the proof into several steps. Step 1 - Main proof. Let us define, for brevity q3 (t, x) = H3 ( x; W̃ (t) ) −H3 (x;W (t)) , q2 (t, x, j2, c2) = H2 ( x, j2; W̃ (t) ) −H2 (x, c2;W (t)) , q∆ (t, z, j1, j2, c1, c2) = ∆ H 2 ( Z, j2; W̃ (t) ) w̃2 (t, j1, j2)−∆H2 (z, c2;W (t))w2 (t, c1, c2) . Consider t ≥ 0. We first bound the difference in the updates between W and W̃ . Let us start with w3 and w̃3. By Assumption 1, we have:∣∣∣∣ ∂∂tw̃3 (t, j2)− ∂∂tw3 (t, C2 (j2)) ∣∣∣∣ ≤ KEZ [|q3 (t,X)|+ |q2 (t,X, j2, C2 (j2))|] . 
Similarly, for w2 and w̃2,∣∣∣∣ ∂∂tw̃2 (t, j1, j2)− ∂∂tw2 (t, C1 (j1) , C2 (j2)) ∣∣∣∣ ≤ KEZ [∣∣∣∆H2 (Z, j2; W̃ (t))−∆H2 (Z,C2 (j2) ;W (t))∣∣∣] +K |w3 (t, C2 (j2))| |w̃1 (t, j1)− w1 (t, C1 (j1))| ≤ KtEZ [|q3 (t,X)|+ |q2 (t,X, j2, C2 (j2))|] +Kt (|w̃1 (t, j1)− w1 (t, C1 (j1))|+ |w̃3 (t, j2)− w3 (t, C2 (j2))|) ≤ KtEZ [|q3 (t,X)|+ |q2 (t,X, j2, C2 (j2))|] +KtDt ( W, W̃ ) , and for w1 and w̃1, by Lemma 12,∣∣∣∣ ∂∂tw̃1 (t, j1)− ∂∂tw1 (t, C1 (j1)) ∣∣∣∣ ≤ KEZ ∣∣∣∣∣∣ 1n2 n2∑ j2=1 EC2 [q∆ (t, Z, j1, j2, C1 (j1) , C2)] ∣∣∣∣∣∣ + EC2 [∣∣∆H2 (Z,C2;W (t))∣∣ |w2 (t, C1 (j1) , C2)|] |w̃1 (t, j1)− w1 (t, C1 (j1))| ≤ KEZ ∣∣∣∣∣∣ 1n2 n2∑ j2=1 EC2 [q∆ (t, Z, j1, j2, C1 (j1) , C2)] ∣∣∣∣∣∣ +KtDt ( W, W̃ ) . To further the bounding, we now make the following two claims: • Claim 1: For any ξ > 0, max j2≤n2 ∣∣∣∣ ∂∂tw3 (t+ ξ, C2 (j2))− ∂∂tw3 (t, C2 (j2)) ∣∣∣∣ ≤ Kt+ξξ, max j1≤n1, j2≤n2 ∣∣∣∣ ∂∂tw2 (t+ ξ, C1 (j1) , C2 (j2))− ∂∂tw2 (t, C1 (j1) , C2 (j2)) ∣∣∣∣ ≤ Kt+ξξ, max j1≤n1 ∣∣∣∣ ∂∂tw1 (t+ ξ, C1 (j1))− ∂∂tw1 (t, C1 (j1)) ∣∣∣∣ ≤ Kt+ξξ, and similarly, max j2≤n2 ∣∣∣∣ ∂∂tw̃3 (t+ ξ, j2)− ∂∂tw̃3 (t, j2) ∣∣∣∣ ≤ Kt+ξξ, max j1≤n1, j2≤n2 ∣∣∣∣ ∂∂tw̃2 (t+ ξ, j1, j2)− ∂∂tw̃2 (t, j1, j2) ∣∣∣∣ ≤ Kt+ξξ, max j1≤n1 ∣∣∣∣ ∂∂tw̃1 (t+ ξ, j1)− ∂∂tw̃1 (t, j1) ∣∣∣∣ ≤ Kt+ξξ. • Claim 2: For any γ1, γ2, γ3 > 0 and t ≥ 0, max { max j2≤n2 EZ [|q2 (t,X, j2, C2 (j2))|] , EZ [|q3 (t,X)|] , max j1≤n1 EZ ∣∣∣∣∣∣ 1n2 n2∑ j2=1 EC2 [q∆ (t, Z, j1, j2, C1 (j1) , C2)] ∣∣∣∣∣∣ } ≥ Kt ( Dt ( W, W̃ ) + γ1 + γ2 + γ3 ) , with probability at most n1 γ1 exp ( −n2γ 2 1 Kt ) + n2 γ2 exp ( −n1γ 2 2 Kt ) + 1 γ3 exp ( −n2γ 2 3 Kt ) . Combining these claims with the previous bounds, taking a union bound over t ∈ {0, ξ, 2ξ, ..., bT/ξc ξ} for some ξ ∈ (0, 1), we obtain that max { max j2≤n2 ∣∣∣∣ ∂∂tw̃3 (t, j2)− ∂∂tw3 (t, C2 (j2)) ∣∣∣∣ , max j1≤n1, j2≤n2 ∣∣∣∣ ∂∂tw̃2 (t, j1, j2)− ∂∂tw2 (t, C1 (j1) , C2 (j2)) ∣∣∣∣ , max j1≤n1 ∣∣∣∣ ∂∂tw̃1 (t, j1)− ∂∂tw1 (t, C1 (j1)) ∣∣∣∣ } ≤ KT ( Dt ( W, W̃ ) + γ1 + γ2 + γ3 + ξ ) , ∀t ∈ [0, T ] , with probability at least 1− T + 1 ξ [ n1 γ1 exp ( −n2γ 2 1 KT ) + n2 γ2 exp ( −n1γ 2 2 KT ) + 1 γ3 exp ( −n2γ 2 3 KT )] . The above event in turn implies Dt ( W, W̃ ) ≤ KT ∫ t 0 ( Ds ( W, W̃ ) + γ1 + γ2 + γ3 + ξ ) ds, and hence by Gronwall’s lemma and the fact D0 ( W, W̃ ) = 0, we get DT ( W, W̃ ) ≤ (γ1 + γ2 + γ3 + ξ) eKT . The theorem then follows from the choice ξ = 1 √ nmax , γ2 = KT√ n1 log1/2 ( 3 (T + 1)n2max δ + e ) , γ1 = γ3 = KT√ n2 log1/2 ( 3 (T + 1)n2max δ + e ) . We are left with proving the claims. Step 2 - Proof of Claim 1. We have from Assumption 1, ess-sup |w3 (t+ ξ, C2)− w3 (t, C2)| ≤ K ∫ t+ξ t ess-sup ∣∣∣∣ ∂∂tw3 (s, C2) ∣∣∣∣ ds ≤ Kξ, ess-sup |w2 (t+ ξ, C1, C2)− w2 (t, C1, C2)| ≤ K ∫ t+ξ t ess-sup ∣∣∣∣ ∂∂tw2 (s, C1, C2) ∣∣∣∣ ds ≤ K ∫ t+ξ t ess-sup |w3 (s, C2)| ds ≤ Kt+ξξ, ess-sup |w1 (t+ ξ, C1)− w1 (t, C1)| ≤ K ∫ t+ξ t ess-sup ∣∣∣∣ ∂∂tw1 (s, C1) ∣∣∣∣ ds ≤ K ∫ t+ξ t ess-sup |w3 (s, C2)w2 (s, C1, C2)| ds ≤ Kt+ξξ. By Lemma 12, we then obtain that ess-supEZ [|H2 (X,C2;W (t+ ξ))−H2 (X,C2;W (t))|] ≤ Kt+ξξ, EZ [|H3 (X;W (t+ ξ))−H3 (X;W (t))|] ≤ Kt+ξξ, ess-supEZ [∣∣∆H2 (Z,C2;W (t+ ξ))−∆H2 (Z,C2;W (t))∣∣] ≤ Kt+ξξ. 
Using these estimates, we thus have, by Assumption 1, max j2≤n2 ∣∣∣∣ ∂∂tw3 (t+ ξ, C2 (j2))− ∂∂tw3 (t, C2 (j2)) ∣∣∣∣ ≤ Kt+ξξ +KEZ [|H3 (X;W (t+ ξ))−H3 (X;W (t))|] +Kess-supEZ [|H2 (X,C2;W (t+ ξ))−H2 (X,C2;W (t))|] ≤ Kt+ξξ, max j1≤n1, j2≤n2 ∣∣∣∣ ∂∂tw2 (t+ ξ, C1 (j1) , C2 (j2))− ∂∂tw2 (t, C1 (j1) , C2 (j2)) ∣∣∣∣ ≤ Kt+ξξ +Kess-supEZ [∣∣∆H2 (Z,C2;W (t+ ξ))−∆H2 (Z,C2;W (t))∣∣] +Kess-sup |w3 (t, C2)| |w1 (t+ ξ, C1)− w1 (t, C1)| ≤ Kt+ξξ, max j1≤n1 ∣∣∣∣ ∂∂tw1 (t+ ξ, C1 (j1))− ∂∂tw1 (t, C1 (j1)) ∣∣∣∣ ≤ Kt+ξξ +Kess-supEZ [ EC2 [∣∣∆H2 (Z,C2;W (t+ ξ))−∆H2 (Z,C2;W (t))∣∣ |w2 (t, C1, C2)|]] +Kess-supEC2 [|w3 (t, C2)| |w2 (t+ ξ, C1, C2)− w2 (t, C1, C2)|] +Kess-supEC2 [|w3 (t, C2)w2 (t, C1, C2)|] |w1 (t+ ξ, C1)− w1 (t, C1)| ≤ Kt+ξξ. The proof of the rest of the claim is similar. Step 3 - Proof of Claim 2. We recall the definitions of q∆, q2 and q3. Let us decompose them as follows. We start with q2: |q2 (t, x,
1. What is the focus of the paper regarding online stochastic gradient descent? 2. What are the strengths of the proposed approach, particularly in its theoretical analysis? 3. Do you have any questions or concerns regarding the paper's contributions? 4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? 5. Are there any limitations or restrictions in the data distribution that the authors should consider?
Review
Review This article is concerned with convergence guarantees of online stochastic gradient descent for a rather generic class of three layers neural networks (instead of similar analyses that treated two layers). The main results state that in a proper limit of infinite width + vanishing learning rate, the dynamics of online SGD is proven to be tracked thanks a mean-field description in the form of coupled ordinary differential equations. Once this mean-field description at disposal, the main result is obtained: in the infinite width + vanishing learning rate + infinite time (= number of training samples), the generalization error tends to it minimal value for a broad class of models and losses (not necessarily convex, which is a novelty of the work) as well as generic data distribution. Overall this paper is very well written, enjoyable to read despite the technicality of the results, and understandable even for non-specialists of this line of works (like myself). I did not check the appendices and proofs. In the main part there are no typos, and I have no main concerns to bring about. Yet: I would find interesting to know more details about the differences with the refs Nguyen (2019); Araújo et al. (2019); Sirignano & Spiliopoulos (2019); that is not clear. Also I would find useful to have some hints about the meaning of the (trained third layer) hypothesis in Theorem 8. Finally I find a bit surprising that there are nor restrictions whatsoever on the data distribution (or I missed that). The authors may comment on that in the final version. I recommend publication. Even if I'm not a specialist, it is obvious that the authors made a big effort of redaction, that the results are very solid, the proof technique seems original and requires less assumptions than previous works (I liked very much the "idea of proof" part). I have very few doubts about the quality of the paper despite I did not read the proof details, and the fact that I'm not aware of the literature in this specific field.
ICLR
Title Global Convergence of Three-layer Neural Networks in the Mean Field Regime Abstract In the mean field regime, neural networks are appropriately scaled so that as the width tends to infinity, the learning dynamics tends to a nonlinear and nontrivial dynamical limit, known as the mean field limit. This lends a way to study large-width neural networks via analyzing the mean field limit. Recent works have successfully applied such analysis to two-layer networks and provided global convergence guarantees. The extension to multilayer ones however has been a highly challenging puzzle, and little is known about the optimization efficiency in the mean field regime when there are more than two layers. In this work, we prove a global convergence result for unregularized feedforward three-layer networks in the mean field regime. We first develop a rigorous framework to establish the mean field limit of three-layer networks under stochastic gradient descent training. To that end, we propose the idea of a neuronal embedding, which comprises of a fixed probability space that encapsulates neural networks of arbitrary sizes. The identified mean field limit is then used to prove a global convergence guarantee under suitable regularity and convergence mode assumptions, which – unlike previous works on two-layer networks – does not rely critically on convexity. Underlying the result is a universal approximation property, natural of neural networks, which importantly is shown to hold at any finite training time (not necessarily at convergence) via an algebraic topology argument. 1 INTRODUCTION Interests in the theoretical understanding of the training of neural networks have led to the recent discovery of a new operating regime: the neural network and its learning rates are scaled appropriately, such that as the width tends to infinity, the network admits a limiting learning dynamics in which all parameters evolve nonlinearly with time1. This is known as the mean field (MF) limit (Mei et al. (2018); Chizat & Bach (2018); Rotskoff & Vanden-Eijnden (2018); Sirignano & Spiliopoulos (2018); Nguyen (2019); Araújo et al. (2019); Sirignano & Spiliopoulos (2019)). The four works Mei et al. (2018); Chizat & Bach (2018); Rotskoff & Vanden-Eijnden (2018); Sirignano & Spiliopoulos (2018) led the first wave of efforts in 2018 and analyzed two-layer neural networks. They established a connection between the network under training and its MF limit. They then used the MF limit to prove that two-layer networks could be trained to find (near) global optima using variants of gradient descent, despite non-convexity (Mei et al. (2018); Chizat & Bach (2018)). The MF limit identified by these works assumes the form of gradient flows in the measure space, which factors out the invariance from the action of a symmetry group on the model. Interestingly, by lifting to the measure space, with a convex loss function (e.g. squared loss), one obtains a limiting optimization problem that is convex (Bengio et al. (2006); Bach (2017)). The analyses of Mei et al. (2018); ∗This paper is a conference submission. We refer to the work Nguyen & Pham (2020) and its companion note Pham & Nguyen (2020) for generalizations as well as other conditions for global convergence in the case of multilayer neural networks. †Department of Mathematics, Stanford University. This work was done in parts while H. T. Pham was at the University of Cambridge. ‡The Voleon Group. This work was done while P.-M. Nguyen was at Stanford University. 
§The author ordering is randomized. 1This is to be contrasted with another major operating regime (the NTK regime) where parameters essentially do not evolve and the model behaves like a kernel method (Jacot et al. (2018); Chizat et al. (2019); Du et al. (2019); Allen-Zhu et al. (2019); Zou et al. (2018); Lee et al. (2019)). Chizat & Bach (2018) utilize convexity, although the mechanisms to attain global convergence in these works are more sophisticated than the usual convex optimization setup in Euclidean spaces. The extension to multilayer networks has enjoyed much less progresses. The works Nguyen (2019); Araújo et al. (2019); Sirignano & Spiliopoulos (2019) argued, heuristically or rigorously, for the existence of a MF limiting behavior under gradient descent training with different assumptions. In fact, it has been argued that the difficulty is not simply technical, but rather conceptual (Nguyen (2019)): for instance, the presence of intermediate layers exhibits multiple symmetry groups with intertwined actions on the model. Convergence to the global optimum of the model under gradientbased optimization has not been established when there are more than two layers. In this work, we prove a global convergence guarantee for feedforward three-layer networks trained with unregularized stochastic gradient descent (SGD) in the MF regime. After an introduction of the three-layer setup and its MF limit in Section 2, our development proceeds in two main steps: Step 1 (Theorem 3 in Section 3): We first develop a rigorous framework that describes the MF limit and establishes its connection with a large-width SGD-trained three-layer network. Here we propose the new idea of a neuronal embedding, which comprises of an appropriate non-evolving probability space that encapsulates neural networks of arbitrary sizes. This probability space is in general abstract and is constructed according to the (not necessarily i.i.d.) initialization scheme of the neural network. This idea addresses directly the intertwined action of multiple symmetry groups, which is the aforementioned conceptual obstacle (Nguyen (2019)), thereby covering setups that cannot be handled by formulations in Araújo et al. (2019); Sirignano & Spiliopoulos (2019) (see also Section 5 for a comparison). Our analysis follows the technique from Sznitman (1991); Mei et al. (2018) and gives a quantitative statement: in particular, the MF limit yields a good approximation of the neural network as long as n−1min log nmax 1 independent of the data dimension, where nmin and nmax are the minimum and maximum of the widths. Step 2 (Theorem 8 in Section 4): We prove that the MF limit, given by our framework, converges to the global optimum under suitable regularity and convergence mode assumptions. Several elements of our proof are inspired by Chizat & Bach (2018); the technique in their work however does not generalize to our three-layer setup. Unlike previous two-layer analyses, we do not exploit convexity; instead we make use of a new element: a universal approximation property. The result turns out to be conceptually new: global convergence can be achieved even when the loss function is non-convex. An important crux of the proof is to show that the universal approximation property holds at any finite training time (but not necessarily at convergence, i.e. at infinite time, since the property may not realistically hold at convergence). 
Together these two results imply a positive statement on the optimization efficiency of SGD-trained unregularized feedforward three-layer networks (Corollary 10). Our results can be extended to the general multilayer case – with new ideas on top and significantly more technical works – or used to obtain new global convergence guarantees in the two-layer case (Nguyen & Pham (2020); Pham & Nguyen (2020)). We choose to keep the current paper concise with the three-layer case being a prototypical setup that conveys several of the basic ideas. Complete proofs are presented in appendices. Notations. K denotes a generic constant that may change from line to line. |·| denotes the absolute value for a scalar and the Euclidean norm for a vector. For an integer n, we let [n] = {1, ..., n}. 2 THREE-LAYER NEURAL NETWORKS AND THE MEAN FIELD LIMIT 2.1 THREE-LAYER NEURAL NETWORK We consider the following three-layer network at time k ∈ N≥0 that takes as input x ∈ Rd: ŷ (x;W (k)) = ϕ3 (H3 (x;W (k))) , (1) H3 (x;W (k)) = 1 n2 n2∑ j2=1 w3 (k, j2)ϕ2 (H2 (x, j2;W (k))) , H2 (x, j2;W (k)) = 1 n1 n1∑ j1=1 w2 (k, j1, j2)ϕ1 (〈w1 (k, j1) , x〉) , in which W (k) = (w1 (k, ·) ,w2 (k, ·, ·) ,w3 (k, ·)) consists of the weights2 w1 (k, j1) ∈ Rd, w2 (k, j1, j2) ∈ R and w3 (k, j2) ∈ R. Here ϕ1 : R → R, ϕ2 : R → R and ϕ3 : R → R are the activation functions, and the network has widths {n1, n2}. We train the network with SGD w.r.t. the loss L : R × R → R≥0. We assume that at each time k, we draw independently a fresh sample z (k) = (x (k) , y (k)) ∈ Rd ×R from a training distribution P . Given an initialization W (0), we update W (k) according to w3 (k + 1, j2) = w3 (k, j2)− ξ3 (k ) Grad3 (z (k) , j2;W (k)) , w2 (k + 1, j1, j2) = w2 (k, j1, j2)− ξ2 (k ) Grad2 (z (k) , j1, j2;W (k)) , w1 (k + 1, j1) = w1 (k, j1)− ξ1 (k ) Grad1 (z (k) , j1;W (k)) , in which j1 = 1, ..., n1, j2 = 1, ..., n2, ∈ R>0 is the learning rate, ξi : R≥0 7→ R≥0 is the learning rate schedule for wi, and for z = (x, y), we define Grad3 (z, j2;W (k)) = ∂2L (y, ŷ (x;W (k)))ϕ′3 (H3 (x;W (k)))ϕ2 (H2 (x, j2;W (k))) , Grad2 (z, j1, j2;W (k)) = ∆ H 2 (z, j2;W (k))ϕ1 (〈w1 (k, j1) , x〉) , Grad1 (z, j1;W (k)) = ( 1 n2 n2∑ j2=1 ∆H2 (z, j2;W (k))w2 (k, j1, j2) ) ϕ′1 (〈w1 (k, j1) , x〉)x, ∆H2 (z, j2;W (k)) = ∂2L (y, ŷ (x;W (k)))ϕ′3 (H3 (x;W (k)))w3 (k, j2)ϕ′2 (H2 (x, j2;W (k))) . We note that this setup follows the same scaling w.r.t. n1 and n2, which is applied to both the forward pass and the learning rates in the backward pass, as Nguyen (2019). 2.2 MEAN FIELD LIMIT The MF limit is a continuous-time infinite-width analog of the neural network under training. To describe it, we first introduce the concept of a neuronal ensemble. Given a product probability space (Ω,F , P ) = (Ω1 × Ω2,F1 ×F1, P1 × P2), we independently sample Ci ∼ Pi, i = 1, 2. In the following, we use ECi to denote the expectation w.r.t. the random variable Ci ∼ Pi and ci to denote an arbitrary point ci ∈ Ωi. The space (Ω,F , P ) is referred to as a neuronal ensemble. Given a neuronal ensemble (Ω,F , P ), the MF limit is described by a time-evolving system with state/parameter W (t), where the time t ∈ R≥0 and W (t) = (w1 (t, ·) , w2 (t, ·, ·) , w3 (t, ·)) with w1 : R≥0 × Ω1 → Rd, w2 : R≥0 × Ω1 × Ω2 → R and w3 : R≥0 × Ω2 → R. It entails the quantities: ŷ (x;W (t)) = ϕ3 (H3 (x;W (t))) , H3 (x;W (t)) = EC2 [w3 (t, C2)ϕ2 (H2 (x,C2;W (t)))] , H2 (x, c2;W (t)) = EC1 [w2 (t, C1, c2)ϕ1 (〈w1 (t, C1) , x〉)] . Here for each t ∈ R≥0, w1 (t, ·) is (Ω1,F1)-measurable, and similar for w2 (t, ·, ·), w3 (t, ·). 
The MF limit evolves according to a continuous-time dynamics, described by a system of ODEs, which we refer to as the MF ODEs. Specifically, given an initialization W(0) = (w1(0, ·), w2(0, ·, ·), w3(0, ·)), the dynamics solves:

    ∂t w3(t, c2) = −ξ3(t) ∆3(c2; W(t)),
    ∂t w2(t, c1, c2) = −ξ2(t) ∆2(c1, c2; W(t)),
    ∂t w1(t, c1) = −ξ1(t) ∆1(c1; W(t)).

Here c1 ∈ Ω1, c2 ∈ Ω2, E_Z denotes the expectation w.r.t. the data Z = (X, Y) ∼ P, and for z = (x, y), we define

    ∆3(c2; W(t)) = E_Z[ ∂2L(Y, ŷ(X; W(t))) ϕ3′(H3(X; W(t))) ϕ2(H2(X, c2; W(t))) ],
    ∆2(c1, c2; W(t)) = E_Z[ ∆2^H(Z, c2; W(t)) ϕ1(⟨w1(t, c1), X⟩) ],
    ∆1(c1; W(t)) = E_Z[ E_{C2}[ ∆2^H(Z, C2; W(t)) w2(t, c1, C2) ] ϕ1′(⟨w1(t, c1), X⟩) X ],
    ∆2^H(z, c2; W(t)) = ∂2L(y, ŷ(x; W(t))) ϕ3′(H3(x; W(t))) w3(t, c2) ϕ2′(H2(x, c2; W(t))).

In Appendix B, we show well-posedness of the MF ODEs under the following regularity conditions.

Assumption 1 (Regularity). We assume that ϕ1 and ϕ2 are K-bounded, ϕ1′, ϕ2′ and ϕ3′ are K-bounded and K-Lipschitz, ϕ2′ and ϕ3′ are non-zero everywhere, ∂2L(·, ·) is K-Lipschitz in the second variable and K-bounded, and |X| ≤ K with probability 1. Furthermore ξ1, ξ2 and ξ3 are K-bounded and K-Lipschitz.

Theorem 1. Under Assumption 1, given any neuronal ensemble and an initialization W(0) such that ess-sup |w2(0, C1, C2)|, ess-sup |w3(0, C2)| ≤ K, there exists a unique solution W to the MF ODEs on t ∈ [0, ∞).

An example of a suitable setup is ϕ1 = ϕ2 = tanh, ϕ3 the identity, and L the Huber loss, although a non-convex, sufficiently smooth loss function also suffices. In fact, all of our developments can be easily modified to treat the squared loss with the additional assumption that |Y| ≤ K with probability 1.

So far, given an arbitrary neuronal ensemble (Ω, F, P), for each initialization W(0), we have defined a MF limit W(t). The connection with the neural network's dynamics W(k) is established in the next section.

3 CONNECTION BETWEEN NEURAL NETWORK AND ITS MEAN FIELD LIMIT

3.1 NEURONAL EMBEDDING AND THE COUPLING PROCEDURE

To formalize a connection between the neural network and its MF limit, we consider their initializations. In practical scenarios, to set the initial parameters W(0) of the neural network, one typically randomizes W(0) according to some distributional law ρ. We note that since the neural network is defined w.r.t. a set of finite integers {n1, n2}, so is ρ. We consider a family Init of initialization laws, each of which is indexed by the set {n1, n2}:

    Init = {ρ : ρ is the initialization law of a neural network of size {n1, n2}, n1, n2 ∈ N>0}.

This is helpful when one is to take a limit that sends n1, n2 → ∞, in which case the size of this family |Init| is infinite. More generally we allow |Init| < ∞ (for example, Init contains a single law ρ of a network of size {n1, n2} and hence |Init| = 1). We make the following crucial definition.

Definition 2. Given a family of initialization laws Init, we call (Ω, F, P, {w0_i}_{i=1,2,3}) a neuronal embedding of Init if the following holds:

1. (Ω, F, P) = (Ω1 × Ω2, F1 × F2, P1 × P2) is a product measurable space. As a reminder, we call it a neuronal ensemble.

2. The deterministic functions w0_1 : Ω1 → R^d, w0_2 : Ω1 × Ω2 → R and w0_3 : Ω2 → R are such that, for each index {n1, n2} of Init and the law ρ of this index, if — with an abuse of notation — we independently sample {Ci(ji)}_{ji∈[ni]} ∼ Pi i.i.d.
for each i = 1, 2, then Law ( w01 (C1 (j1)) , w 0 2 (C1(j1), C2 (j2)) , w 0 3 (C2(j2)) , ji ∈ [ni] , i = 1, 2 ) = ρ. To proceed, given Init and {n1, n2} in its index set, we perform the following coupling procedure: 1. Let (Ω,F , P, { w0i } i=1,2,3 ) be a neuronal embedding of Init. 2. We form the MF limit W (t) (for t ∈ R≥0) associated with the neuronal ensemble (Ω,F , P ) by setting the initialization W (0) to w1 (0, ·) = w01 (·), w2 (0, ·, ·) = w02 (·, ·) and w3 (0, ·) = w03 (·) and running the MF ODEs described in Section 2.2. 3We recall the definition of ess-sup in Appendix A. 3. We independently sample Ci (ji) ∼ Pi for i = 1, 2 and ji = 1, ..., ni. We then form the neural network initialization W (0) with w1 (0, j1) = w01 (C1 (j1)), w2 (0, j1, j2) = w02 (C1 (j1) , C2 (j2)) and w3 (0, j2) = w 0 3 (C2 (j2)) for j1 ∈ [n1], j2 ∈ [n2]. We obtain the network’s trajectory W (k) for k ∈ N≥0 as in Section 2.1, with the data z (k) generated independently of {Ci (ji)}i=1,2 and hence W (0). We can then define a measure of closeness between W (bt/ c) and W (t) for t ∈ [0, T ]: DT (W,W) = sup { |w1 (bt/ c , j1)− w1 (t, C1 (j1))| , |w2 (bt/ c , j1, j2)− w2 (t, C1 (j1) , C2 (j2))| , |w3 (bt/ c , j2)− w3 (t, C2 (j2))| : t ≤ T, j1 ≤ n1, j2 ≤ n2 } . (2) Note that W (t) is a deterministic trajectory independent of {n1, n2}, whereas W (k) is random for all k ∈ N≥0 due to the randomness of {Ci (ji)}i=1,2 and the generation of the training data z (k). Similarly DT (W,W) is a random quantity. The idea of the coupling procedure is closely related to the coupling argument in Sznitman (1991); Mei et al. (2018). Here, instead of playing the role of a proof technique, the coupling serves as a vehicle to establish the connection betweenW and W on the basis of the neuronal embedding. This connection is shown in Theorem 3 below, which gives an upper bound on DT (W,W). We note that the coupling procedure can be carried out to provide a connection between W and W as long as there exists a neuronal embedding for Init. Later in Section 4.1, we show that for a common initialization scheme (in particular, i.i.d. initialization) for Init, there exists a neuronal embedding. Theorem 3 applies to, but is not restricted to, this initialization scheme. 3.2 MAIN RESULT: APPROXIMATION BY THE MF LIMIT Assumption 2 (Initialization of second and third layers). We assume that ess-sup ∣∣w02 (C1, C2)∣∣, ess-sup ∣∣w03 (C2)∣∣ ≤ K, where w02 and w03 are as described in Definition 2. Theorem 3. Given a family Init of initialization laws and a tuple {n1, n2} that is in the index set of Init, perform the coupling procedure as described in Section 3.1. Fix a terminal time T ∈ N≥0. Under Assumptions 1 and 2, for ≤ 1, we have with probability at least 1− 2δ, DT (W,W) ≤ eKT ( 1 √ nmin + √ ) log1/2 ( 3 (T + 1)n2max δ + e ) ≡ errδ,T ( , n1, n2) , in which nmin = min {n1, n2}, nmax = max {n1, n2}, and KT = K ( 1 + TK ) . The theorem gives a connection between W (bt/ c), which is defined upon finite widths n1 and n2, and the MF limit W (t), whose description is independent of n1 and n2. It lends a way to extract properties of the neural network in the large-width regime. Corollary 4. Under the same setting as Theorem 3, consider any test function ψ : R × R → R which is K-Lipschitz in the second variable uniformly in the first variable (an example of ψ is the loss L). For any δ > 0, with probability at least 1− 3δ, sup t≤T |EZ [ψ (Y, ŷ (X;W (bt/ c)))]− EZ [ψ (Y, ŷ (X;W (t)))]| ≤ eKT errδ,T ( , n1, n2) . 
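To make the coupling procedure and the shape of the Theorem 3 bound concrete, the sketch below instantiates a hypothetical neuronal embedding with Ω1 = Ω2 = [0, 1] under the uniform law and explicit functions w0_i, and evaluates the right-hand side of Theorem 3 with the unspecified constant K treated as a free parameter. All of these specific choices (the embedding, the functions w0_i, the value of K) are assumptions made purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n1, n2 = 3, 500, 400

# A hypothetical neuronal embedding: Omega_1 = Omega_2 = [0, 1] with uniform laws,
# and deterministic functions w0_i defined on these spaces (illustrative choices).
w0_1 = lambda c1: np.stack([np.cos(2 * np.pi * (k + 1) * c1) for k in range(d)], axis=-1)
w0_2 = lambda c1, c2: np.sin(2 * np.pi * np.add.outer(c1, c2))
w0_3 = lambda c2: 2.0 * c2 - 1.0

# Coupling: the same samples C_i(j_i) ~ P_i index both the network's initialization W(0)
# and the points at which the MF trajectory is evaluated.
C1, C2 = rng.uniform(size=n1), rng.uniform(size=n2)
W0 = (w0_1(C1), w0_2(C1, C2), w0_3(C2))

def sup_distance(traj_net, traj_mf):
    """D_T(W, W): sup over stored time steps, layers and neuron indices of the gap
    between SGD iterates and MF parameters evaluated at the coupled C_i(j_i)."""
    return max(
        max(np.abs(a - b).max() for a, b in zip(Wk, Wt))
        for Wk, Wt in zip(traj_net, traj_mf)
    )

def err_bound(eps, n1, n2, T, delta, K=1.0):
    """Right-hand side of Theorem 3, with the unspecified constant K as a free knob,
    only to visualize the scaling in the widths and the step size."""
    n_min, n_max = min(n1, n2), max(n1, n2)
    K_T = K * (1.0 + T ** K)
    return np.exp(K_T) * (1.0 / np.sqrt(n_min) + np.sqrt(eps)) \
        * np.sqrt(np.log(3 * (T + 1) * n_max ** 2 / delta + np.e))

print(err_bound(1e-3, 1000, 1000, T=1, delta=0.05))
```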
These bounds hold for any n1 and n2, similar to Mei et al. (2018); Araújo et al. (2019), in contrast with non-quantitative results in Chizat & Bach (2018); Sirignano & Spiliopoulos (2019). These bounds suggest that n1 and n2 can be chosen independent of the data dimension d. This agrees with the experiments in Nguyen (2019), which found width ≈ 1000 to be typically sufficient to observe MF behaviors in networks trained with real-life high-dimensional data. We observe that the MF trajectory W (t) is defined as per the choice of the neuronal embedding (Ω,F , P, { w0i } i=1,2,3 ), which may not be unique. On the other hand, the neural network’s trajectory W (k) depends on the randomization of the initial parameters W (0) according to an initialization law from the family Init (as well as the data z (k)) and hence is independent of this choice. Another corollary of Theorem 3 is that given the same family Init, the law of the MF trajectory is insensitive to the choice of the neuronal embedding of Init. Corollary 5. Consider a family Init of initialization laws, indexed by a set of tuples {m1,m2} that contains a sequence of indices {m1 (m) ,m2 (m) : m ∈ N} in which as m → ∞, min {m1 (m) ,m2 (m)}−1 log (max {m1 (m) ,m2 (m)}) → 0. Let W (t) and Ŵ (t) be two MF trajectories associated with two choices of neuronal embeddings of Init, (Ω,F , P, { w0i } i=1,2,3 ) and (Ω̂, F̂ , P̂ , { ŵ0i } i=1,2,3 ). The following statement holds for any T ≥ 0 and any two positive integers n1 and n2: if we independently sample Ci (ji) ∼ Pi and Ĉi (ji) ∼ P̂i for ji ∈ [ni], i = 1, 2, then Law (W (n1, n2, T )) = Law(Ŵ (n1, n2, T )), where we define W (n1, n2, T ) as the below collection w.r.t. W (t), and similarly define Ŵ (n1, n2, T ) w.r.t. Ŵ (t): W (n1, n2, T ) = { w1 (t, C1 (j1)) , w2 (t, C1 (j1) , C2 (j2)) , w3 (t, C2 (j2)) : j1 ∈ [n1] , j2 ∈ [n2] , t ∈ [0, T ] } . The proofs are deferred to Appendix C. 4 CONVERGENCE TO GLOBAL OPTIMA In this section, we prove a global convergence guarantee for three-layer neural networks via the MF limit. We consider a common class of initialization: i.i.d. initialization. 4.1 I.I.D. INITIALIZATION Definition 6. An initialization law ρ for a neural network of size {n1, n2} is called ( ρ1, ρ2, ρ3 ) - i.i.d. initialization (or i.i.d. initialization, for brevity), where ρ1, ρ2 and ρ3 are probability measures over Rd, R and R respectively, if {w1 (0, j1)}j1∈[n1] are generated i.i.d. according to ρ 1, {w2 (0, j1, j2)}j1∈[n1], j2∈[n2] are generated i.i.d. according to ρ 2 and {w3 (0, j2)}j2∈[n2] are generated i.i.d. according to ρ3, and w1, w2 and w3 are independent. Observe that given ( ρ1, ρ2, ρ3 ) , one can build a family Init of i.i.d. initialization laws that contains any index set {n1, n2}. Furthermore i.i.d. initializations are supported by our framework, as stated in the following proposition and proven in Appendix D. Proposition 7. There exists a neuronal embedding ( Ω,F , P, { w0i } i=1,2,3 ) for any family Init of initialization laws, which are ( ρ1, ρ2, ρ3 ) -i.i.d. 4.2 MAIN RESULT: GLOBAL CONVERGENCE To measure the learning quality, we consider the loss averaged over the data Z ∼ P: L (V ) = EZ [L (Y, ŷ (X;V ))] , where V = (v1, v2, v3) is a set of three measurable functions v1 : Ω1 → Rd, v2 : Ω1 × Ω2 → R, v3 : Ω2 → R. Assumption 3. Consider a neuronal embedding ( Ω,F , P, { w0i } i=1,2,3 ) of the ( ρ1, ρ2, ρ3 ) -i.i.d. initialization, and the associated MF limit with initialization W (0) such that w1 (0, ·) = w01 (·), w2 (0, ·, ·) = w02 (·, ·) and w3 (0, ·) = w03 (·). 
Assume: 1. Support: The support of ρ1 is Rd. 2. Convergence mode: There exist limits w̄1, w̄2 and w̄3 such that as t→∞, E [(1 + |w̄3(C2)|) |w̄3(C2)| |w̄2(C1, C2)| |w1(t, C1)− w̄1(C1)|]→ 0, (3) E [(1 + |w̄3(C2)|) |w̄3(C2)| |w2(t, C1, C2)− w̄2(C1, C2)|]→ 0, (4) E [(1 + |w̄3(C2)|) |w3(t, C2)− w̄3(C2)|]→ 0, (5) ess-supEC2 [|∂tw2 (t, C1, C2)|]→ 0. (6) 3. Universal approximation: { ϕ1 (〈u, ·〉) : u ∈ Rd } has dense span in L2 (PX) (the space of square integrable functions w.r.t. PX the distribution of the input X). Assumption 3 is inspired by the work Chizat & Bach (2018) on two-layer networks, with certain differences. Assumptions 3.1 and 3.3 are natural in neural network learning (Cybenko (1989); Chen & Chen (1995)), while we note Chizat & Bach (2018) does not utilize universal approximation. Similar to Chizat & Bach (2018), Assumption 3.2 is technical and does not seem removable. Note that this assumption specifies the mode of convergence and is not an assumption on the limits w̄1, w̄2 and w̄3. Specifically conditions (3)-(5) are similar to the convergence assumption in Chizat & Bach (2018). We differ from Chizat & Bach (2018) fundamentally in the essential supremum condition (6). On one hand, this condition helps avoid the Morse-Sard type condition in Chizat & Bach (2018), which is difficult to verify in general and not simple to generalize to the three-layer case. On the other hand, it turns out to be a natural assumption to make, in light of Remark 9 below. We now state the main result of the section. The proof is in Appendix D. Theorem 8. Consider a neuronal embedding ( Ω,F , P, { w0i } i=1,2,3 ) of ( ρ1, ρ2, ρ3 ) -i.i.d. initialization. Consider the MF limit corresponding to the network (1), such that they are coupled together by the coupling procedure in Section 3.1, under Assumptions 1, 2 and 3. For simplicity, assume ξ1 (·) = ξ2 (·) = 1. Further assume either: • (untrained third layer) ξ3 (·) = 0 and w03 (C2) 6= 0 with a positive probability, or • (trained third layer) ξ3 (·) = 1 and L ( w01, w 0 2, w 0 3 ) < EZ [L (Y, ϕ3 (0))]. Then the following hold: • Case 1 (convex loss): If L is convex in the second variable, then lim t→∞ L (W (t)) = inf V L (V ) = inf ỹ: Rd→R EZ [L (Y, ỹ (X))] . • Case 2 (generic non-negative loss): Suppose that ∂2L (y, ŷ) = 0 implies L (y, ŷ) = 0. If y = y(x) is a function of x, then L (W (t))→ 0 as t→∞. Remarkably here the theorem allows for non-convex losses. A further inspection of the proof shows that no convexity-based property is used in Case 2 (see, for instance, the high-level proof sketch in Section 4.3); in Case 1, the key steps in the proof are the same, and the convexity of the loss function serves as a convenient technical assumption to handle the arbitrary extra randomness of Y conditional on X . We also remark that the same proof of global convergence should extend beyond the specific fully-connected architecture considered here. Similar to previous results on SGD-trained two-layer networks Mei et al. (2018); Chizat & Bach (2018), our current result in the three-layer case is non-quantitative. Remark 9. Interestingly there is a converse relation between global convergence and the essential supremum condition (6): under the same setting, global convergence is unattainable if condition (6) does not hold. A similar observation was made in Wojtowytsch (2020) for two-layer ReLU networks. A precise statement and its proof can be found in Appendix E. 
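An i.i.d. initialization satisfying the hypotheses above is easy to instantiate. The particular laws below (Gaussian ρ1, uniform ρ2 and ρ3) are one possible choice, shown only as a sketch; they are not the only admissible laws.

```python
import numpy as np

def iid_init(n1, n2, d, rng=np.random.default_rng(0)):
    """A (rho1, rho2, rho3)-i.i.d. initialization in the sense of Definition 6.
    rho1 = N(0, I_d) has full support on R^d (Assumption 3.1); rho2 and rho3 are
    uniform on [-1, 1], hence essentially bounded (Assumption 2), and w3(0, .) is
    nonzero with positive probability (as needed in the untrained-third-layer case).
    These particular laws are illustrative assumptions."""
    w1 = rng.standard_normal((n1, d))             # w1(0, j1) ~ rho1, i.i.d. over j1
    w2 = rng.uniform(-1.0, 1.0, size=(n1, n2))    # w2(0, j1, j2) ~ rho2, i.i.d.
    w3 = rng.uniform(-1.0, 1.0, size=n2)          # w3(0, j2) ~ rho3, i.i.d.
    return w1, w2, w3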
The following result is straightforward from Theorem 8 and Corollary 4, establishing the optimization efficiency of the neural network with SGD. Corollary 10. Consider the neural network (1). Under the same setting as Theorem 8, in Case 1, lim t→∞ lim n1,n2 lim →0 EZ [L (Y, ŷ (X;W (bt/ c)))] = inf f1,f2,f3 L (f1, f2, f3) = inf ỹ EZ [L (Y, ỹ (X))] in probability, where the limit of the widths is such that min {n1, n2}−1 log (max {n1, n2}) → 0. In Case 2, the same holds with the right-hand side being 0. 4.3 HIGH-LEVEL IDEA OF THE PROOF We give a high-level discussion of the proof. This is meant to provide intuitions and explain the technical crux, so our discussion may simplify and deviate from the actual proof. Our first insight is to look at the second layer’s weight w2. At convergence time t = ∞, we expect to have zero movement and hence, denoting W (∞) = (w̄1, w̄2, w̄3): ∆2 (c1, c2;W (∞)) = EZ [ ∆H2 (Z, c2;W (∞))ϕ1 (〈w̄1 (c1) , X〉) ] = 0, for P -almost every c1, c2. Suppose for the moment that we are allowed to make an additional (strong) assumption on the limit w̄1: supp (w̄1 (C1)) = Rd. It implies that the universal approximation property, described in Assumption 3, holds at t = ∞; more specifically, it implies {ϕ1 (〈w̄1 (c1) , ·〉) : c1 ∈ Ω1} has dense span in L2 (PX). This thus yields EZ [ ∆H2 (Z, c2;W (∞)) ∣∣X = x] = 0, for P-almost every x. Recalling the definition of ∆H2 , one can then easily show that EZ [∂2L (Y, ŷ (X;W (∞)))|X = x] = 0. Global convergence follows immediately; for example, in Case 2 of Theorem 8, this is equivalent to that ∂2L (y (x) , ŷ (x;W (∞))) = 0 and hence L (y (x) , ŷ (x;W (∞))) = 0 for P-almost every x. In short, the gradient flow structure of the dynamics of w2 provides a seamless way to obtain global convergence. Furthermore there is no critical reliance on convexity. However this plan of attack has a potential flaw in the strong assumption that supp (w̄1 (C1)) = Rd, i.e. the universal approximation property holds at convergence time. Indeed there are setups where it is desirable that supp (w̄1 (C1)) 6= Rd (Mei et al. (2018); Chizat (2019)); for instance, it is the case where the neural network is to learn some “sparse and spiky” solution, and hence the weight distribution at convergence time, if successfully trained, cannot have full support. On the other hand, one can entirely expect that if supp (w1 (0, C1)) = Rd initially at t = 0, then supp (w1 (t, C1)) = Rd at any finite t ≥ 0. The crux of our proof is to show the latter without assuming supp (w̄1 (C1)) = Rd. This task is the more major technical step of the proof. To that end, we first show that there exists a mapping (t, u) 7→ M (t, u) that maps from (t, w1 (0, c1)) = (t, u) to w1 (t, c1) via a careful measurability argument. This argument rests on a scheme that exploits the symmetry in the network evolution. Furthermore the map M is shown to be continuous. The desired conclusion then follows from an algebraic topology argument that the map M preserves a homotopic structure through time. 5 DISCUSSION The MF literature is fairly recent. A long line of works (Nitanda & Suzuki (2017); Mei et al. (2018); Chizat & Bach (2018); Rotskoff & Vanden-Eijnden (2018); Sirignano & Spiliopoulos (2018); Wei et al. (2019); Javanmard et al. (2019); Mei et al. (2019); Shevchenko & Mondelli (2019); Wojtowytsch (2020)) have focused mainly on two-layer neural networks, taking an interacting particle system approach to describe the MF limiting dynamics as Wasserstein gradient flows. 
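Before continuing the discussion, we note that the regime of Corollary 10 is easy to probe numerically. The self-contained toy run below trains a small three-layer network with SGD on a synthetic target y = y(x) (as in Case 2) under an i.i.d. initialization and tracks an estimate of the population loss. The target function, widths, horizon and Huber loss are assumptions chosen for speed; a short run like this can only suggest, not verify, the asymptotic statement.

```python
import numpy as np

rng = np.random.default_rng(1)
d, n1, n2, eps, T = 3, 400, 400, 0.05, 40.0
phi, dphi = np.tanh, lambda u: 1.0 - np.tanh(u) ** 2
target = lambda X: np.sin(X @ np.ones(d))          # y = y(x), as in Case 2 (illustrative)

w1 = rng.standard_normal((n1, d))                  # i.i.d. initialization (Definition 6)
w2 = rng.uniform(-1, 1, (n1, n2))
w3 = rng.uniform(-1, 1, n2)

Xte = rng.standard_normal((2000, d))               # held-out sample to estimate E_Z[L]
Yte = target(Xte)
for k in range(int(T / eps)):
    if k % 200 == 0:
        Yhat = phi(phi(Xte @ w1.T) @ w2 / n1) @ w3 / n2
        r = np.abs(Yhat - Yte)
        loss = np.where(r <= 1, 0.5 * r ** 2, r - 0.5).mean()
        print(f"k = {k:4d}   estimated Huber loss = {loss:.4f}")
    x = rng.standard_normal(d)                     # fresh sample z(k) at every step
    y = target(x[None])[0]
    pre1 = w1 @ x
    H2 = w2.T @ phi(pre1) / n1
    H3 = w3 @ phi(H2) / n2                         # phi3 = identity
    dL = np.clip(H3 - y, -1.0, 1.0)                # Huber derivative in the 2nd argument
    dH2 = dL * w3 * dphi(H2)
    g3 = dL * phi(H2)
    g2 = np.outer(phi(pre1), dH2)
    g1 = np.outer((w2 @ dH2) / n2 * dphi(pre1), x)
    w1, w2, w3 = w1 - eps * g1, w2 - eps * g2, w3 - eps * g3
```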
The three works Nguyen (2019); Araújo et al. (2019); Sirignano & Spiliopoulos (2019) independently develop different formulations for the MF limit in multilayer neural networks, under different assumptions. These works take perspectives that are different from ours. In particular, while the central object in Nguyen (2019) is a new abstract representation of each individual neuron, our neuronal embedding idea instead takes a keen view on a whole ensemble of neurons. Likewise our idea is also distant from Araújo et al. (2019); Sirignano & Spiliopoulos (2019): the central objects in Araújo et al. (2019) are paths over the weights across layers; those in Sirignano & Spiliopoulos (2019) are time-dependent functions of the initialization, which are simplified upon i.i.d. initializations. The result of our perspective is a neuronal embedding framework that allows one to describe the MF limit in a clean and rigorous manner. In particular, it avoids extra assumptions made in Araújo et al. (2019); Sirignano & Spiliopoulos (2019): unlike our work, Araújo et al. (2019) assumes untrained first and last layers and requires non-trivial technical tools; Sirignano & Spiliopoulos (2019) takes an unnatural sequential limit n1 → ∞ before n2 → ∞ and proves a non-quantitative result, unlike Theorem 3 which only requires sufficiently large min {n1, n2}. We note that Theorem 3 can be extended to general multilayer networks using the neuronal embedding idea. The advantages of our framework come from the fact that while MF formulations in Araújo et al. (2019); Sirignano & Spiliopoulos (2019) are specific to and exploit i.i.d. initializations, our formulation does not. Remarkably as shown in Araújo et al. (2019), when there are more than three layers and no biases, i.i.d. initializations lead to a certain simplifying effect on the MF limit. On the other hand, our framework supports non-i.i.d. initializations which avoid the simplifying effect, as long as there exist suitable neuronal embeddings (Nguyen & Pham (2020)). Although our global convergence result in Theorem 8 is proven in the context of i.i.d. initializations for three-layer networks, in the general multilayer case, it turns out that the use of a special type of non-i.i.d. initialization allows one to prove a global convergence guarantee (Pham & Nguyen (2020)). In this aspect, our framework follows closely the spirit of the work Nguyen (2019), whose MF formulation is also not specific to i.i.d. initializations. Yet though similar in the spirit, Nguyen (2019) develops a heuristic formalism and does not prove global convergence. Global convergence in the two-layer case with convex losses has enjoyed multiple efforts with a lot of new and interesting results (Mei et al. (2018); Chizat & Bach (2018); Javanmard et al. (2019); Rotskoff et al. (2019); Wei et al. (2019)). Our work is the first to establish a global convergence guarantee for SGD-trained three-layer networks in the MF regime. Our proof sends a new message that the crucial factor is not necessarily convexity, but rather that the whole learning trajectory maintains the universal approximation property of the function class represented by the first layer’s neurons, together with the gradient flow structure of the second layer’s weights. As a remark, our approach can also be applied to prove a similar global convergence guarantee for two-layer networks, removing the convex loss assumption in previous works (Nguyen & Pham (2020)). The recent work Lu et al. 
(2020) on a MF resnet model (a composition of many two-layer MF networks) and a recent update of Sirignano & Spiliopoulos (2019) essentially establish conditions of stationary points to be global optima. They however require strong assumptions on the support of the limit point. As explained in Section 4.3, we analyze the training dynamics without such assumption and in fact allow it to be violated. Our global convergence result is non-quantitative. An important, highly challenging future direction is to develop a quantitative version of global convergence; previous works on two-layer networks Javanmard et al. (2019); Wei et al. (2019); Rotskoff et al. (2019); Chizat (2019) have done so under sophisticated modifications of the architecture and training algorithms. Finally we remark that our insights here can be applied to prove similar global convergence guarantees and derive other sufficient conditions for global convergence of two-layer or multilayer networks (Nguyen & Pham (2020); Pham & Nguyen (2020)). ACKNOWLEDGEMENT H. T. Pham would like to thank Jan Vondrak for many helpful discussions and in particular for the shorter proof of Lemma 19. We would like to thank Andrea Montanari for the succinct description of the difficulty in extending the mean field formulation to the multilayer case, in that there are multiple symmetry group actions in a multilayer network. A NOTATIONAL PRELIMINARIES For a real-valued random variable Z defined on a probability space (Ω,F , P ), we recall ess-supZ = inf {z ∈ R : P (Z > z) = 0} . We also introduce some convenient definitions which we use throughout the appendices. For a set of neural network’s parameter W, we define 9W9T = max { max j1≤n1, j2≤n2 sup t≤T |w2 (bt/ c , j1, j2)| , max j2≤n2 sup t≤T |w3 (bt/ c , j2)| } . Similarly for a set of MF parameters W , we define: 9W9T = max { ess-sup sup t≤T |w2 (t, C1, C2)| , ess-sup sup t≤T |w3 (t, C2)| } . For two sets of neural network’s parameters W′,W′′, we define their distance: ‖W′ −W′′‖T = sup { |w′1 (bt/ c , j1)−w′′1 (bt/ c , j1)| , |w′2 (bt/ c , j1, j2)−w′′2 (bt/ c , j1, j2)| , |w′3 (bt/ c , j2)−w′′3 (bt/ c , j2)| : t ∈ [0, T ] , j1 ∈ [n1] , j2 ∈ [n2] } . Similarly for two sets of MF parameters W ′,W ′′, we define their distance: ‖W ′ −W ′′‖T = ess-sup sup t∈[0,T ] { |w′1 (t, C1)− w′′1 (t, C1)| , |w′2 (t, C1, C2)− w′′2 (t, C1, C2)| , |w′3 (t, C2)− w′′3 (t, C2)| } . B EXISTENCE AND UNIQUENESS OF THE SOLUTION TO MF ODES We first collect some a priori estimates. Lemma 11. Under Assumption 1, consider a solution W to the MF ODEs with initialization W (0) such that 9W90 < ∞. If this solution exists, it satisfies the following a priori bounds, for any T ≥ 0: ess-sup sup t≤T |w3 (t, C2)| ≤ 9W 90 +KT ≡ 9W 90 +K0,3 (T ) , ess-sup sup t≤T |w2 (t, C1, C2)| ≤ 9W 90 +KTK0,3 (T ) ≡ 9W 90 +K0,2 (T ) , and consequently, 9W9T ≤ 1 + max {K0,2 (T ) , K0,3 (T )} . Proof. The bounds can be obtained easily by bounding the respective initializations and update quantities separately. In particular, ess-sup sup t≤T |w3 (t, C2)| ≤ ess-sup |w3 (0, C2)|+ T ess-sup sup t≤T ∣∣∣∣ ∂∂tw3 (t, C2) ∣∣∣∣ ≤ 9W 90 +KT, ess-sup sup t≤T |w2 (t, C1, C2)| ≤ ess-sup |w2 (0, C1, C2)|+ T ess-sup sup t≤T ∣∣∣∣ ∂∂tw2 (t, C1, C2) ∣∣∣∣ ≤ ess-sup |w2 (0, C1, C2)|+KT ess-sup sup t≤T |w3 (t, C2)| ≤ 9W 90 +KTK0,3 (T ) . 
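For sanity-checking bounds such as those in Lemma 11 numerically, the essential-supremum quantities above reduce, for a finite network or a finite set of sampled MF particles, to plain maxima over stored trajectories. The sketch below assumes trajectories are stored as lists of (w1, w2, w3) NumPy arrays, which is merely one convenient storage format.

```python
import numpy as np

def traj_norm(traj):
    """Finite analogue of the norm written 9W9_T above: the largest magnitude of any
    second- or third-layer weight along a stored trajectory of (w1, w2, w3) tuples."""
    return max(max(np.abs(w2).max(), np.abs(w3).max()) for _, w2, w3 in traj)

def traj_distance(traj_a, traj_b):
    """Finite analogue of ||W' - W''||_T: the sup over stored times, layers and entries
    of the gap between two trajectories of equal length."""
    return max(
        max(np.abs(a - b).max() for a, b in zip(Wa, Wb))
        for Wa, Wb in zip(traj_a, traj_b)
    )
```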
Inspired by the a priori bounds in Lemma 11, given an arbitrary terminal time T and the initialization W (0), let us consider: • for a tuple (a, b) ∈ R2≥0, a space WT (a, b) of W ′ = (W ′ (t))t≤T = (w′1 (t, ·) , w′2 (t, ·, ·) , w′3 (t, ·))t≤T such that ess-sup sup t≤T |w′3 (t, C2)| ≤ b, ess-sup sup t≤T |w′2 (t, C1, C2)| ≤ a, where w′1 : R≥0 × Ω1 → Rd, w′2 : R≥0 × Ω1 × Ω2 7→ R, w′3 : R≥0 × Ω3 7→ R, • for a tuple (a, b) ∈ R2≥0 and W (0), a space W + T (a, b,W (0)) of W ′ ∈ WT (a, b) such that W ′ (0) = W (0) additionally (and hence every W ′ in this space shares the same initialization W (0)). We equip the spaces with the metric ‖W ′ −W ′′‖T . It is easy to see that both spaces are complete. Note that Lemma 11 implies, under Assumption 1 and 9W90 < ∞, we have any MF solution W , if exists, is inWT (9W 90 +K0,2 (T ) ,9W 90 +K0,3 (T )). For the proof of Theorem 1, we work mainly with W+T (9W 90 +K0,2 (T ) ,9W 90 +K0,3 (T ) ,W (0)), although several intermediate lemmas are proven in more generality for other uses. Lemma 12. Under Assumption 1, for T ≥ 0, any W ′,W ′′ ∈ WT (a, b) and almost every z ∼ P: ess-sup sup t≤T ∣∣∆H2 (z, C2;W ′ (t))∣∣ ≤ Ka,b, ess-sup sup t≤T |H2 (x,C2;W ′ (t))−H2 (x,C2;W ′′ (t))| ≤ Ka,b ‖W ′ −W ′′‖T , sup t≤T |H3 (x;W ′ (t))−H3 (x;W ′′ (t))| ≤ Ka,b ‖W ′ −W ′′‖T , sup t≤T |∂2L (y, ŷ (x;W ′ (t)))− ∂2L (y, ŷ (x;W ′′ (t)))| ≤ Ka,b ‖W ′ −W ′′‖T , ess-sup sup t≤T ∣∣∆H2 (z, C2;W ′ (t))−∆H2 (z, C2;W ′′ (t))∣∣ ≤ Ka,b ‖W ′ −W ′′‖T , where Ka,b ≥ 1 is a generic constant that grows polynomially with a and b. Proof. The first bound is easy to see: ess-sup sup t≤T ∣∣∆H2 (z, C2;W ′ (t))∣∣ ≤ ess-sup sup t≤T |w′3 (t, C2)| ≤ b. We prove the second bound, invoking Assumption 1: |H2 (x,C2;W ′ (t))−H2 (x,C2;W ′′ (t))| ≤ K |w′2 (t, C1, C2)| |ϕ1 (〈w′1 (t, C1) , x〉)− ϕ1 (〈w′′1 (t, C1) , x〉)| +K |w′2 (t, C1, C2)− w′′2 (t, C1, C2)| ≤ K (|w′2 (t, C1, C2)|+ 1) ‖W ′ −W ′′‖T , which yields by the fact W ′ ∈ WT (a, b): ess-sup sup t≤T |H2 (x,C2;W ′ (t))−H2 (x,C2;W ′′ (t))| ≤ K (a+ 1) ‖W ′ −W ′′‖T . Consequently, we have: |H3 (x;W ′ (t))−H3 (x;W ′′ (t))| ≤ K |w′3 (t, C2)| |ϕ2 (H2 (x,C2;W ′ (t)))− ϕ2 (H2 (x,C2;W ′′ (t)))| +K |w′3 (t, C2)− w′′3 (t, C2)| ≤ K |w′3 (t, C2)| |H2 (x,C2;W ′ (t))−H2 (x,C2;W ′′ (t))| +K ‖W ′ −W ′′‖T , |∂2L (y, ŷ (x;W ′ (t)))− ∂2L (y, ŷ (x;W ′′ (t)))| ≤ K |ŷ (x;W ′ (t))− ŷ (x;W ′′ (t))| ≤ K |H3 (x;W ′ (t))−H3 (x;W ′′ (t))| , which then yield the third and fourth bounds by the fact W ′,W ′′ ∈ WT (a, b). Using these bounds, we obtain the last bound:∣∣∆H2 (z, C2;W ′ (t))−∆H2 (z, C2;W ′′ (t))∣∣ ≤ K |w′3 (t, C2)| ( |∂2L (y, ŷ (x;W ′ (t)))− ∂2L (y, ŷ (x;W ′′ (t)))| + |H3 (x;W ′ (t))−H3 (x;W ′′ (t))|+ |H2 (x,C2;W ′ (t))−H2 (x,C2;W ′′ (t))| ) +K |w′3 (t, C2)− w′′3 (t, C2)| , from which the last bound follows. To prove Theorem 1, for a given W (0), we define a mapping FW (0) that maps from W ′ = (w′1, w ′ 2, w ′ 3) ∈ WT (a, b) to FW (0) (W ′) = W̄ ′ = (w̄′1, w̄′2, w̄′3), defined by W̄ ′ (0) = W (0) and ∂ ∂t w̄′3 (t, c2) = −ξ3 (t) ∆3 (c2;W ′ (t)) , ∂ ∂t w̄′2 (t, c1, c2) = −ξ2 (t) ∆2 (c1, c2;W ′ (t)) , ∂ ∂t w̄′1 (t, c1) = −ξ1 (t) ∆1 (c1;W ′ (t)) . Notice that the right-hand sides do not involve W̄ ′. Note that the MF ODEs’ solution, initialized at W (0), is a fixed point of this mapping. We establish the following estimates for this mapping. Lemma 13. 
Under Assumption 1, for T ≥ 0, any initialization W (0) and any W ′,W ′′ ∈ WT (a, b), ess-sup sup s≤t |∆3 (C2;W ′ (s))−∆3 (C2;W ′′ (s))| ≤ Ka,b ‖W ′ −W ′′‖t , ess-sup sup s≤t |∆2 (C1, C2;W ′ (s))−∆2 (C1, C2;W ′′ (s))| ≤ Ka,b ‖W ′ −W ′′‖t , ess-sup sup s≤t |∆1 (C1;W ′ (s))−∆1 (C1;W ′′ (s))| ≤ Ka,b ‖W ′ −W ′′‖t , and consequently, if in addition W ′ (0) = W ′′ (0) (not necessarily equal W (0)), then ess-sup sup t≤T |w̄′3 (t, C2)− w̄′′3 (t, C2)| ≤ Ka,b ∫ T 0 ‖W ′ −W ′′‖s ds, ess-sup sup t≤T |w̄′2 (t, C1, C2)− w̄′′2 (t, C1, C2)| ≤ Ka,b ∫ T 0 ‖W ′ −W ′′‖s ds, ess-sup sup t≤T |w̄′1 (t, C1)− w̄′′1 (t, C1)| ≤ Ka,b ∫ T 0 ‖W ′ −W ′′‖s ds, in which W̄ ′ = (w̄′1, w̄ ′ 2, w̄ ′ 3) = FW (0) (W ′), W̄ ′′ = (w̄′′1 , w̄ ′′ 2 , w̄ ′′ 3 ) = FW (0) (W ′′) and Ka,b ≥ 1 is a generic constant that grows polynomially with a and b. Proof. From Assumption 1 and the fact W ′,W ′′ ∈ WT (a, b), we get: |∆3 (C2;W ′ (s))−∆3 (C2;W ′′ (s))| ≤ KEZ [|∂2L (Y, ŷ (X;W ′ (s)))− ∂2L (Y, ŷ (X;W ′′ (s)))|] +KEZ [|H3 (X;W ′ (s))−H3 (X;W ′′ (s))|] +KEZ [|H2 (X,C2;W ′ (s))−H2 (X,C2;W ′′ (s))|] , |∆2 (C1, C2;W ′ (s))−∆2 (C1, C2;W ′′ (s))| ≤ Ka,b |w′1 (s, C1)− w′′1 (s, C1)| +K ∣∣EZ [∆H2 (Z,C2;W ′ (s))−∆H2 (Z,C2;W ′′ (s))]∣∣ , |∆1 (C1;W ′ (s))−∆1 (C1;W ′′ (s))| ≤ Ka,bEZ [∣∣∆H2 (Z,C2;W ′ (s))−∆H2 (Z,C2;W ′′ (s))∣∣] +Ka,b |w′2 (s, C1, C2)− w′′2 (s, C1, C2)| +Ka,b |w′1 (s, C1)− w′′1 (s, C1)| , from which the first three estimates then follow, in light of Lemma 12. The last three estimates then follow from the fact that W̄ ′ (0) = W̄ ′′ (0) and Assumption 1; for instance, ess-sup sup t≤T |w̄′3 (t, C2)− w̄′′3 (t, C2)| ≤ ∫ T 0 ess-sup ∣∣∣∣ ∂∂tw̄′3 (s, C2)− ∂∂tw̄′′3 (s, C2) ∣∣∣∣ ds ≤ K ∫ T 0 ess-sup |∆3 (C2;W ′ (s))−∆3 (C2;W ′′ (s))| ds. We are now ready to prove Theorem 1. Proof of Theorem 1. We will use a Picard-type iteration. To lighten notations: W+T ≡ W + T (9W 90 +K0,2 (T ) ,9W 90 +K0,3 (T ) ,W (0)) , F ≡ FW (0). Since 9W90 ≤ K by assumption, we have 9W 90 +K0,2 (T ) +K0,3 (T ) ≤ KT . Recall thatW+T is complete. For an arbitrary T > 0, consider W ′,W ′′ ∈ W+T . Lemma 13 yields: ‖F (W ′)− F (W ′′)‖T ≤ KT ∫ T 0 ‖W ′ −W ′′‖s ds. Note that F maps toW+T under Assumption 1 by the same argument as Lemma 11. Hence we are allowed to iterating this inequality and get, for an arbitrary T > 0,∥∥∥F (k) (W ′)− F (k) (W ′′)∥∥∥ T ≤ KT ∫ T 0 ∥∥∥F (k−1) (W ′)− F (k−1) (W ′′)∥∥∥ T2 dT2 ≤ K2T ∫ T 0 ∫ T2 0 ∥∥∥F (k−2) (W ′)− F (k−2) (W ′′)∥∥∥ T3 I (T2 ≤ T ) dT3dT2 ... ≤ KkT ∫ T 0 ∫ T2 0 ... ∫ Tk 0 ‖W ′ −W ′′‖Tk+1 I (Tk ≤ ... ≤ T2 ≤ T ) dTk+1...dT2 ≤ 1 k! KkT ‖W ′ −W ′′‖T . By substituting W ′′ = F (W ′), we have: ∞∑ k=1 ∥∥∥F (k+1) (W ′)− F (k) (W ′)∥∥∥ T = ∞∑ k=1 ∥∥∥F (k) (W ′′)− F (k) (W ′)∥∥∥ T ≤ ∞∑ k=1 1 k! KkT ‖W ′ −W ′′‖T <∞. Hence as k → ∞, F (k) (W ′) converges to a limit inW+T , which is a fixed point of F . The uniqueness of a fixed point follows from the above estimate, since if W ′ and W ′′ are fixed points then ‖W ′ −W ′′‖T = ∥∥∥F (k) (W ′)− F (k) (W ′′)∥∥∥ T ≤ 1 k! KkT ‖W ′ −W ′′‖T , while one can take k arbitrarily large. This proves that the solution exists and is unique on t ∈ [0, T ]. Since T is arbitrary, we have existence and uniqueness of the solution on the time interval [0,∞). 
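The proof above constructs the MF solution as the fixed point of F_{W(0)} via Picard iteration. The toy sketch below runs the same fixed-point iteration on a scalar ODE (not the infinite-dimensional MF ODEs), purely to illustrate the contraction-in-sup-norm mechanism; the grid size, iteration count and test ODE are arbitrary illustrative choices.

```python
import numpy as np

def picard_solve(f, x0, T=1.0, n_grid=200, n_iter=30):
    """Picard iteration x_{k+1}(t) = x0 + int_0^t f(s, x_k(s)) ds on a time grid,
    the fixed-point construction used (in infinite dimensions) to prove Theorem 1."""
    t = np.linspace(0.0, T, n_grid)
    dt = t[1] - t[0]
    x = np.full(n_grid, x0, dtype=float)
    for _ in range(n_iter):
        integrand = f(t, x)
        # cumulative left-rectangle approximation of the time integral
        x_new = x0 + np.concatenate([[0.0], np.cumsum(integrand[:-1]) * dt])
        gap = np.abs(x_new - x).max()              # sup-norm contraction, as in the proof
        x = x_new
        if gap < 1e-12:
            break
    return t, x

# Toy check: x' = -x with x(0) = 1 has solution exp(-t).
t, x = picard_solve(lambda s, v: -v, 1.0, T=1.0)
print(np.abs(x - np.exp(-t)).max())
```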
C CONNECTION BETWEEN THE NEURAL NET AND ITS MF LIMIT: PROOFS FOR SECTION 3 C.1 PROOF OF THEOREM 3 We construct an auxiliary trajectory, which we call the particle ODEs: ∂ ∂t w̃3 (t, j2) = −ξ3 (t)EZ [ ∂2L ( Y, ŷ ( X; W̃ (t) )) ϕ′3 ( H3 ( X; W̃ (t) )) ϕ2 ( H2 ( X, j2; W̃ (t) ))] , ∂ ∂t w̃2 (t, j1, j2) = −ξ2 (t)EZ [ ∆H2 ( Z, j2; W̃ (t) ) ϕ1 (〈w̃1 (t, j1) , X〉) ] , ∂ ∂t w̃1 (t, j1) = −ξ1 (t)EZ 1 n2 n2∑ j2=1 ∆H2 ( Z, j2; W̃ (t) ) w̃2 (t, j1, j2)ϕ ′ 1 (〈w̃1 (t, j1) , X〉)X , in which j1 = 1, ..., n1, j2 = 1, ..., n2, W̃ (t) = (w̃1 (t, ·) , w̃2 (t, ·, ·) , w̃3 (t, ·)), and t ∈ R≥0. We specify the initialization W̃ (0): w̃1 (0, j1) = w01 (C1 (j1)), w̃2 (0, j1, j2) = w 0 2 (C1 (j1) , C2 (j2)) and w̃3 (0, j3) = w03 (C2 (j2)). That is, it shares the same initialization with the neural network one W (0), and hence is coupled with the neural network and the MF ODEs. Roughly speaking, the particle ODEs are continuous-time trajectories of finitely many neurons, averaged over the data distribution. We note that W̃ (t) is random for all t ∈ R≥0 due to the randomness of {Ci (ji)}i=1,2. The existence and uniqueness of the solution to the particle ODEs follows from the same proof as in Theorem 1, which we shall not repeat here. We equip W̃ (t) with the norm 9W̃9T = max { max j1≤n1, j2≤n2 sup t≤T |w̃2 (t, j1, j2)| , max j2≤n2 sup t≤T |w̃3 (t, j2)| } . One can also define the measures DT ( W, W̃ ) and DT ( W̃ ,W ) similar to Eq. (2): DT ( W, W̃ ) = sup { |w1 (t, C1 (j1))− w̃1 (t, C1 (j1))| , |w2 (t, C1 (j1) , C2 (j2))− w̃2 (t, C1 (j1) , C2 (j2))| , |w3 (t, C2 (j2))− w̃3 (t, C2 (j2))| : t ≤ T, j1 ≤ n1, j2 ≤ n2 } , DT ( W̃ ,W ) = sup { |w1 (bt/ c , j1)− w̃1 (t, C1 (j1))| , |w2 (bt/ c , j1, j2)− w̃2 (t, C1 (j1) , C2 (j2))| , |w3 (bt/ c , j2)− w̃3 (t, C2 (j2))| : t ≤ T, j1 ≤ n1, j2 ≤ n2 } . We have the following results: Theorem 14. Under the same setting as Theorem 3, for any δ > 0, with probability at least 1− δ, DT ( W, W̃ ) ≤ 1√ nmin log1/2 ( 3 (T + 1)n2max δ + e ) eKT , in which nmin = min {n1, n2}, nmax = max {n1, n2}, and KT = K ( 1 + TK ) . Theorem 15. Under the same setting as Theorem 3, for any δ > 0 and ≤ 1, with probability at least 1− δ, DT ( W̃ ,W ) ≤ √ log ( 2n1n2 δ + e ) eKT , in which KT = K ( 1 + TK ) . Proof of Theorem 3. Using the fact DT (W,W) ≤ DT ( W, W̃ ) + DT ( W̃ ,W ) , the thesis is immediate from Theorems 14 and 15. C.2 PROOF OF THEOREMS 14 AND 15 Proof of Theorem 14. In the following, let Kt denote an generic positive constant that may change from line to line and takes the form Kt = K ( 1 + tK ) , such that Kt ≥ 1 and Kt ≤ KT for all t ≤ T . We first note that at initialization, D0 ( W, W̃ ) = 0. Since 9W90 ≤ K, 9W9T ≤ KT by Lemma 11. Furthermore it is easy to see that 9W̃90 ≤ 9W90 ≤ K almost surely. By the same argument as in Lemma 11, 9W̃9T ≤ KT almost surely. We shall use all above bounds repeatedly in the proof. We decompose the proof into several steps. Step 1 - Main proof. Let us define, for brevity q3 (t, x) = H3 ( x; W̃ (t) ) −H3 (x;W (t)) , q2 (t, x, j2, c2) = H2 ( x, j2; W̃ (t) ) −H2 (x, c2;W (t)) , q∆ (t, z, j1, j2, c1, c2) = ∆ H 2 ( Z, j2; W̃ (t) ) w̃2 (t, j1, j2)−∆H2 (z, c2;W (t))w2 (t, c1, c2) . Consider t ≥ 0. We first bound the difference in the updates between W and W̃ . Let us start with w3 and w̃3. By Assumption 1, we have:∣∣∣∣ ∂∂tw̃3 (t, j2)− ∂∂tw3 (t, C2 (j2)) ∣∣∣∣ ≤ KEZ [|q3 (t,X)|+ |q2 (t,X, j2, C2 (j2))|] . 
Similarly, for w2 and w̃2,∣∣∣∣ ∂∂tw̃2 (t, j1, j2)− ∂∂tw2 (t, C1 (j1) , C2 (j2)) ∣∣∣∣ ≤ KEZ [∣∣∣∆H2 (Z, j2; W̃ (t))−∆H2 (Z,C2 (j2) ;W (t))∣∣∣] +K |w3 (t, C2 (j2))| |w̃1 (t, j1)− w1 (t, C1 (j1))| ≤ KtEZ [|q3 (t,X)|+ |q2 (t,X, j2, C2 (j2))|] +Kt (|w̃1 (t, j1)− w1 (t, C1 (j1))|+ |w̃3 (t, j2)− w3 (t, C2 (j2))|) ≤ KtEZ [|q3 (t,X)|+ |q2 (t,X, j2, C2 (j2))|] +KtDt ( W, W̃ ) , and for w1 and w̃1, by Lemma 12,∣∣∣∣ ∂∂tw̃1 (t, j1)− ∂∂tw1 (t, C1 (j1)) ∣∣∣∣ ≤ KEZ ∣∣∣∣∣∣ 1n2 n2∑ j2=1 EC2 [q∆ (t, Z, j1, j2, C1 (j1) , C2)] ∣∣∣∣∣∣ + EC2 [∣∣∆H2 (Z,C2;W (t))∣∣ |w2 (t, C1 (j1) , C2)|] |w̃1 (t, j1)− w1 (t, C1 (j1))| ≤ KEZ ∣∣∣∣∣∣ 1n2 n2∑ j2=1 EC2 [q∆ (t, Z, j1, j2, C1 (j1) , C2)] ∣∣∣∣∣∣ +KtDt ( W, W̃ ) . To further the bounding, we now make the following two claims: • Claim 1: For any ξ > 0, max j2≤n2 ∣∣∣∣ ∂∂tw3 (t+ ξ, C2 (j2))− ∂∂tw3 (t, C2 (j2)) ∣∣∣∣ ≤ Kt+ξξ, max j1≤n1, j2≤n2 ∣∣∣∣ ∂∂tw2 (t+ ξ, C1 (j1) , C2 (j2))− ∂∂tw2 (t, C1 (j1) , C2 (j2)) ∣∣∣∣ ≤ Kt+ξξ, max j1≤n1 ∣∣∣∣ ∂∂tw1 (t+ ξ, C1 (j1))− ∂∂tw1 (t, C1 (j1)) ∣∣∣∣ ≤ Kt+ξξ, and similarly, max j2≤n2 ∣∣∣∣ ∂∂tw̃3 (t+ ξ, j2)− ∂∂tw̃3 (t, j2) ∣∣∣∣ ≤ Kt+ξξ, max j1≤n1, j2≤n2 ∣∣∣∣ ∂∂tw̃2 (t+ ξ, j1, j2)− ∂∂tw̃2 (t, j1, j2) ∣∣∣∣ ≤ Kt+ξξ, max j1≤n1 ∣∣∣∣ ∂∂tw̃1 (t+ ξ, j1)− ∂∂tw̃1 (t, j1) ∣∣∣∣ ≤ Kt+ξξ. • Claim 2: For any γ1, γ2, γ3 > 0 and t ≥ 0, max { max j2≤n2 EZ [|q2 (t,X, j2, C2 (j2))|] , EZ [|q3 (t,X)|] , max j1≤n1 EZ ∣∣∣∣∣∣ 1n2 n2∑ j2=1 EC2 [q∆ (t, Z, j1, j2, C1 (j1) , C2)] ∣∣∣∣∣∣ } ≥ Kt ( Dt ( W, W̃ ) + γ1 + γ2 + γ3 ) , with probability at most n1 γ1 exp ( −n2γ 2 1 Kt ) + n2 γ2 exp ( −n1γ 2 2 Kt ) + 1 γ3 exp ( −n2γ 2 3 Kt ) . Combining these claims with the previous bounds, taking a union bound over t ∈ {0, ξ, 2ξ, ..., bT/ξc ξ} for some ξ ∈ (0, 1), we obtain that max { max j2≤n2 ∣∣∣∣ ∂∂tw̃3 (t, j2)− ∂∂tw3 (t, C2 (j2)) ∣∣∣∣ , max j1≤n1, j2≤n2 ∣∣∣∣ ∂∂tw̃2 (t, j1, j2)− ∂∂tw2 (t, C1 (j1) , C2 (j2)) ∣∣∣∣ , max j1≤n1 ∣∣∣∣ ∂∂tw̃1 (t, j1)− ∂∂tw1 (t, C1 (j1)) ∣∣∣∣ } ≤ KT ( Dt ( W, W̃ ) + γ1 + γ2 + γ3 + ξ ) , ∀t ∈ [0, T ] , with probability at least 1− T + 1 ξ [ n1 γ1 exp ( −n2γ 2 1 KT ) + n2 γ2 exp ( −n1γ 2 2 KT ) + 1 γ3 exp ( −n2γ 2 3 KT )] . The above event in turn implies Dt ( W, W̃ ) ≤ KT ∫ t 0 ( Ds ( W, W̃ ) + γ1 + γ2 + γ3 + ξ ) ds, and hence by Gronwall’s lemma and the fact D0 ( W, W̃ ) = 0, we get DT ( W, W̃ ) ≤ (γ1 + γ2 + γ3 + ξ) eKT . The theorem then follows from the choice ξ = 1 √ nmax , γ2 = KT√ n1 log1/2 ( 3 (T + 1)n2max δ + e ) , γ1 = γ3 = KT√ n2 log1/2 ( 3 (T + 1)n2max δ + e ) . We are left with proving the claims. Step 2 - Proof of Claim 1. We have from Assumption 1, ess-sup |w3 (t+ ξ, C2)− w3 (t, C2)| ≤ K ∫ t+ξ t ess-sup ∣∣∣∣ ∂∂tw3 (s, C2) ∣∣∣∣ ds ≤ Kξ, ess-sup |w2 (t+ ξ, C1, C2)− w2 (t, C1, C2)| ≤ K ∫ t+ξ t ess-sup ∣∣∣∣ ∂∂tw2 (s, C1, C2) ∣∣∣∣ ds ≤ K ∫ t+ξ t ess-sup |w3 (s, C2)| ds ≤ Kt+ξξ, ess-sup |w1 (t+ ξ, C1)− w1 (t, C1)| ≤ K ∫ t+ξ t ess-sup ∣∣∣∣ ∂∂tw1 (s, C1) ∣∣∣∣ ds ≤ K ∫ t+ξ t ess-sup |w3 (s, C2)w2 (s, C1, C2)| ds ≤ Kt+ξξ. By Lemma 12, we then obtain that ess-supEZ [|H2 (X,C2;W (t+ ξ))−H2 (X,C2;W (t))|] ≤ Kt+ξξ, EZ [|H3 (X;W (t+ ξ))−H3 (X;W (t))|] ≤ Kt+ξξ, ess-supEZ [∣∣∆H2 (Z,C2;W (t+ ξ))−∆H2 (Z,C2;W (t))∣∣] ≤ Kt+ξξ. 
Using these estimates, we thus have, by Assumption 1, max j2≤n2 ∣∣∣∣ ∂∂tw3 (t+ ξ, C2 (j2))− ∂∂tw3 (t, C2 (j2)) ∣∣∣∣ ≤ Kt+ξξ +KEZ [|H3 (X;W (t+ ξ))−H3 (X;W (t))|] +Kess-supEZ [|H2 (X,C2;W (t+ ξ))−H2 (X,C2;W (t))|] ≤ Kt+ξξ, max j1≤n1, j2≤n2 ∣∣∣∣ ∂∂tw2 (t+ ξ, C1 (j1) , C2 (j2))− ∂∂tw2 (t, C1 (j1) , C2 (j2)) ∣∣∣∣ ≤ Kt+ξξ +Kess-supEZ [∣∣∆H2 (Z,C2;W (t+ ξ))−∆H2 (Z,C2;W (t))∣∣] +Kess-sup |w3 (t, C2)| |w1 (t+ ξ, C1)− w1 (t, C1)| ≤ Kt+ξξ, max j1≤n1 ∣∣∣∣ ∂∂tw1 (t+ ξ, C1 (j1))− ∂∂tw1 (t, C1 (j1)) ∣∣∣∣ ≤ Kt+ξξ +Kess-supEZ [ EC2 [∣∣∆H2 (Z,C2;W (t+ ξ))−∆H2 (Z,C2;W (t))∣∣ |w2 (t, C1, C2)|]] +Kess-supEC2 [|w3 (t, C2)| |w2 (t+ ξ, C1, C2)− w2 (t, C1, C2)|] +Kess-supEC2 [|w3 (t, C2)w2 (t, C1, C2)|] |w1 (t+ ξ, C1)− w1 (t, C1)| ≤ Kt+ξξ. The proof of the rest of the claim is similar. Step 3 - Proof of Claim 2. We recall the definitions of q∆, q2 and q3. Let us decompose them as follows. We start with q2: |q2 (t, x,
1. What is the focus of the paper regarding neural networks in the mean field regime?
2. What are the strengths of the proposed approach, particularly in terms of global convergence guarantees?
3. What are the weaknesses of the paper regarding the writing style and notation usage?
4. How does the reviewer assess the significance and impact of the results on future research?
5. Are there any minor suggestions for improving the clarity and readability of the paper?
Review
Review

Summary: Analysis of neural networks in the mean-field regime is gaining more and more attention, as it helps to study the dynamics in the wide regime. The paper extends recent studies and provides global convergence guarantees for an unregularized feedforward three-layer NN. This is the first time global convergence has been established for neural networks of more than two layers in the mean-field regime. I find the writing a bit chaotic and overloaded with notation, but overall I think the results are significant and will help to extend the line of research on the mean-field regime applied to neural networks.

Questions: In the paper, it is said several times that the convergence result "does not rely critically on convexity". What do you mean by "critically"? You still assume convexity. Do you mean that it can easily be relaxed in future work? I think this should be stated more clearly throughout the paper.

Minor suggestions:
- "Mean field" -> "mean-field".
- Section 2.1: "k" is introduced at the very end. Maybe it would be better to introduce it at the beginning, e.g. "the following network at time k"?
- I would say "W(k) consists of the weights ..." instead of "W(k) is the weight with ...".
- It perturbs me a bit that the only difference between the NN notation and the MF notation is boldness. Boldness usually denotes a vector, while in the authors' notation w2 is a single element. But I guess superscripts or symbols like w^ would make the notation heavier...
- Section 2.2: I would like to see the description of \Omega, F and P at the beginning, e.g. "Given a NN, \Omega_i would be a space of ...", just to make the connection from the start and ease the following read.
- Definition 2: "The following hold" -> "holds".
- Section 4.2: "where V a set ..." -> "is"; "helps avoiding" -> "helps to avoid".